title | content | commands | url |
---|---|---|---|
Introduction to the VM Portal | Introduction to the VM Portal Red Hat Virtualization 4.3 Accessing and using the VM Portal Red Hat Virtualization Documentation Team Red Hat Customer Content Services [email protected] Abstract This document shows you how to use the Red Hat Virtualization VM Portal. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/introduction_to_the_vm_portal/index |
Chapter 5. Uninstalling OpenShift Data Foundation | Chapter 5. Uninstalling OpenShift Data Foundation 5.1. Uninstalling OpenShift Data Foundation in Internal mode To uninstall OpenShift Data Foundation in Internal mode, refer to the knowledgebase article on Uninstalling OpenShift Data Foundation . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_openshift_data_foundation_using_google_cloud/uninstalling_openshift_data_foundation |
Chapter 6. Using NVDIMM persistent memory storage | Chapter 6. Using NVDIMM persistent memory storage You can enable and manage various types of storage on Non-Volatile Dual In-line Memory Modules (NVDIMM) devices connected to your system. For installing Red Hat Enterprise Linux 8 on NVDIMM storage, see Installing to an NVDIMM device instead. 6.1. The NVDIMM persistent memory technology Non-Volatile Dual In-line Memory Modules (NVDIMM) persistent memory, also called storage class memory or pmem, is a combination of memory and storage. NVDIMM combines the durability of storage with the low access latency and the high bandwidth of dynamic RAM (DRAM). The following are other advantages of using NVDIMM: NVDIMM storage is byte-addressable, which means it can be accessed by using the CPU load and store instructions. In addition to the read() and write() system calls, which are required for accessing traditional block-based storage, NVDIMM also supports a direct load and store programming model. The performance characteristics of NVDIMM are similar to DRAM, with very low access latency, typically in the tens to hundreds of nanoseconds. Data stored on NVDIMM is preserved when the power is off, as it is on persistent storage. With the direct access (DAX) technology, applications can memory map storage directly, without going through the system page cache. This frees up DRAM for other purposes. NVDIMM is beneficial in use cases such as: Databases The reduced storage access latency on NVDIMM improves database performance. Rapid restart Rapid restart is also called the warm cache effect. For example, a file server has none of the file contents in memory after starting. As clients connect and read or write data, that data is cached in the page cache. Eventually, the cache contains mostly hot data. After a reboot, the system must start the process again on traditional storage. With NVDIMM, it is possible for an application to keep the warm cache across reboots if the application is designed properly. In this example, there would be no page cache involved: the application would cache data directly in the persistent memory. Fast write-cache File servers often do not acknowledge a client write request until the data is on durable media. Using NVDIMM as a fast write-cache enables a file server to acknowledge the write request quickly and results in low latency. 6.2. NVDIMM interleaving and regions Non-Volatile Dual In-line Memory Modules (NVDIMM) devices support grouping into interleaved regions. NVDIMM devices can be grouped into interleave sets in the same way as regular dynamic RAM (DRAM). An interleave set is similar to a RAID 0 level (stripe) configuration across multiple DIMMs. An interleave set is also called a region. Interleaving has the following advantages: NVDIMM devices benefit from increased performance when they are configured into interleave sets. Interleaving can combine multiple smaller NVDIMM devices into a larger logical device. NVDIMM interleave sets are configured in the system BIOS or UEFI firmware. Red Hat Enterprise Linux creates one region device for each interleave set. 6.3. NVDIMM namespaces Non-Volatile Dual In-line Memory Modules (NVDIMM) regions can be divided into one or more namespaces, depending on the size of the label area. Using namespaces, you can access the device using different methods, based on the access mode of the namespace: sector, fsdax, devdax, or raw. For more information, see NVDIMM access modes. 
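Before choosing an access mode, it can help to see how the firmware has grouped your NVDIMMs into regions and which namespaces already exist. The following is a minimal inspection sketch using ndctl commands that appear later in this chapter; device names such as region0 and nmem0 are illustrative and will differ on your system:

    # List the interleave-set regions created by the system BIOS or UEFI firmware
    ndctl list --regions
    # List all namespaces, including disabled (idle) ones
    ndctl list --namespaces --idle
    # Check whether a DIMM supports labels; output such as "read 1 nmem" means a label was read,
    # while a value of 0 means the region can hold only a single namespace
    ndctl read-labels nmem0 >/dev/null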
Some NVDIMM devices do not support multiple namespaces on a region: If your NVDIMM device supports labels, you can subdivide the region into namespaces. If your NVDIMM device does not support labels, the region can only contain a single namespace. In that case, Red Hat Enterprise Linux creates a default namespace that covers the entire region. 6.4. NVDIMM access modes You can configure Non-Volatile Dual In-line Memory Modules (NVDIMM) namespaces to use one of the following modes: sector Presents the storage as a fast block device. This mode is useful for legacy applications that are not modified to use NVDIMM storage, or for applications that use the full I/O stack, including Device Mapper. A sector device can be used in the same way as any other block device on the system. You can create partitions or file systems on it, configure it as part of a software RAID set, or use it as the cache device for dm-cache. Devices in this mode are available as /dev/pmemNs. After creating the namespace, see the listed blockdev value. devdax, or device direct access (DAX) With devdax, NVDIMM devices support direct access programming as described in the Storage Networking Industry Association (SNIA) Non-Volatile Memory (NVM) Programming Model specification. In this mode, I/O bypasses the storage stack of the kernel. Therefore, no Device Mapper drivers can be used. Device DAX provides raw access to NVDIMM storage by using a DAX character device node. Data on a devdax device can be made durable using CPU cache flushing and fencing instructions. Certain databases and virtual machine hypervisors might benefit from this mode. File systems cannot be created on devdax devices. Devices in this mode are available as /dev/daxN.M. After creating the namespace, see the listed chardev value. fsdax, or file system direct access (DAX) With fsdax, NVDIMM devices support direct access programming as described in the Storage Networking Industry Association (SNIA) Non-Volatile Memory (NVM) Programming Model specification. In this mode, I/O bypasses the storage stack of the kernel, and many Device Mapper drivers therefore cannot be used. You can create file systems on file system DAX devices. Devices in this mode are available as /dev/pmemN. After creating the namespace, see the listed blockdev value. Important The file system DAX technology is provided only as a Technology Preview, and is not supported by Red Hat. raw Presents a memory disk that does not support DAX. In this mode, namespaces have several limitations and should not be used. Devices in this mode are available as /dev/pmemN. After creating the namespace, see the listed blockdev value. 6.5. Installing ndctl You can install the ndctl utility to configure and monitor Non-Volatile Dual In-line Memory Modules (NVDIMM) devices. Procedure Install the ndctl utility: 6.6. Creating a sector namespace on an NVDIMM to act as a block device You can configure a Non-Volatile Dual In-line Memory Modules (NVDIMM) device in sector mode, also called legacy mode, to support traditional, block-based storage. You can either reconfigure an existing namespace to sector mode or create a new sector namespace if there is available space. Prerequisites An NVDIMM device is attached to your system. 6.6.1. Reconfiguring an existing NVDIMM namespace to sector mode You can reconfigure a Non-Volatile Dual In-line Memory Modules (NVDIMM) namespace to sector mode to use it as a fast block device. 
Warning Reconfiguring a namespace deletes previously stored data on the namespace. Prerequisites The ndctl utility is installed. For more information, see Installing ndctl . Procedure View the existing namespaces: Reconfigure the selected namespace to the sector mode: Example 6.1. Reconfiguring namespace1.0 in sector mode The reconfigured namespace is now available under the /dev directory as the /dev/pmem1s file. Verification Verify if the existing namespace on your system is reconfigured: Additional resources ndctl-create-namespace(1) man page on your system 6.6.2. Creating a new NVDIMM namespace in sector mode You can create a Non-Volatile Dual In-line Memory Modules (NVDIMM) namespace in sector mode for using it as a fast block device if there is available space in the region. Prerequisites The ndctl utility is installed. For more information, see Installing ndctl . The NVDIMM device supports labels to create multiple namespaces in a region. You can check this using the following command: This indicates that it read the label of one NVDIMM device. If the value is 0 , it implies that your device does not support labels. Procedure List the pmem regions on your system that have available space. In the following example, space is available in the region1 and region0 regions: Allocate one or more namespaces on any of the available regions: Example 6.2. Creating a 36-GiB sector namespace on region0 The new namespace is now available as /dev/pmem0.1s . Verification Verify if the new namespace is created in the sector mode: Additional resources ndctl-create-namespace(1) man page on your system 6.7. Creating a device DAX namespace on an NVDIMM Configure the NVDIMM device that is attached to your system, in device DAX mode to support character storage with direct access capabilities. Consider the following options: Reconfiguring an existing namespace to device DAX mode. Creating a new device DAX namespace, if there is space available. 6.7.1. NVDIMM in device direct access mode Device direct access (device DAX, devdax ) provides a means for applications to directly access storage, without the involvement of a file system. The benefit of device DAX is that it provides a guaranteed fault granularity, which can be configured using the --align option of the ndctl utility. For the Intel 64 and AMD64 architecture, the following fault granularities are supported: 4 KiB 2 MiB 1 GiB Device DAX nodes support only the following system calls: open() close() mmap() You can view the supported alignments of your NVDIMM device using the ndctl list --human --capabilities command. For example, to view it for the region0 device, use the ndctl list --human --capabilities -r region0 command. Note The read() and write() system calls are not supported because the device DAX use case is tied to the SNIA Non-Volatile Memory Programming Model. 6.7.2. Reconfiguring an existing NVDIMM namespace to device DAX mode You can reconfigure an existing Non-Volatile Dual In-line Memory Modules (NVDIMM) namespace to device DAX mode. Warning Reconfiguring a namespace deletes previously stored data on the namespace. Prerequisites The ndctl utility is installed. For more information, see Installing ndctl . Procedure List all namespaces on your system: Reconfigure any namespace: Example 6.3. Reconfiguring a namespace as device DAX The following command reconfigures namespace0.1 for data storage that supports DAX. 
It is aligned to a 2-MiB fault granularity to ensure that the operating system faults in 2-MiB pages at a time: The namespace is now available at the /dev/dax0.1 path. Verification Verify that the existing namespaces on your system are reconfigured: Additional resources ndctl-create-namespace(1) man page on your system 6.7.3. Creating a new NVDIMM namespace in device DAX mode You can create a new device DAX namespace on a Non-Volatile Dual In-line Memory Modules (NVDIMM) device if there is available space in the region. Prerequisites The ndctl utility is installed. For more information, see Installing ndctl . The NVDIMM device supports labels to create multiple namespaces in a region. You can check this using the following command: This indicates that it read the label of one NVDIMM device. If the value is 0, your device does not support labels. Procedure List the pmem regions on your system that have available space. In the following example, space is available in the region1 and region0 regions: Allocate one or more namespaces on any of the available regions: Example 6.4. Creating a namespace on a region The following command creates a 36-GiB device DAX namespace on region0. It is aligned to a 2-MiB fault granularity to ensure that the operating system faults in 2-MiB pages at a time: The namespace is now available as /dev/dax0.2 . Verification Verify that the new namespace is created in device DAX mode: Additional resources ndctl-create-namespace(1) man page on your system 6.8. Creating a file system DAX namespace on an NVDIMM Configure an NVDIMM device that is attached to your system in file system DAX mode to support a file system with direct access capabilities. Consider the following options: Reconfiguring an existing namespace to file system DAX mode. Creating a new file system DAX namespace if there is available space. Important The file system DAX technology is provided only as a Technology Preview, and is not supported by Red Hat. 6.8.1. NVDIMM in file system direct access mode When an NVDIMM device is configured in file system direct access (file system DAX, fsdax ) mode, you can create a file system on top of it. Any application that performs an mmap() operation on a file on this file system gets direct access to its storage. This enables the direct access programming model on NVDIMM. The following new -o dax options are now available, and direct access behavior can be controlled through a file attribute if required: -o dax=inode This is the default option when you do not specify any dax option while mounting a file system. Using this option, you can set an attribute flag on files to control whether the dax mode can be activated. If required, you can set this flag on individual files. You can also set this flag on a directory, and any files in that directory will be created with the same flag. You can set this attribute flag by using the xfs_io -c 'chattr +x' directory-name command. -o dax=never With this option, the dax mode is not enabled even if the dax flag is set on an inode. This means that the per-inode dax attribute flag is ignored, and files set with this flag will never be direct-access enabled. -o dax=always This option is equivalent to the old -o dax behavior. With this option, you can activate direct access mode for any file on the file system, regardless of the dax attribute flag. In this mode, every file might be in the direct-access mode. Warning In future releases, -o dax might not be supported; if required, use -o dax=always instead. 
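The mount options above combine with the per-file attribute flag. Below is a brief, hedged sketch, assuming an fsdax-backed XFS file system on /dev/pmem0 and a mount point of /mnt/pmem (both names are illustrative); mount with one option at a time and remount to switch:

    # Default behavior, equivalent to -o dax=inode
    mount /dev/pmem0 /mnt/pmem
    # Opt a directory in to DAX so that files created in it inherit the attribute flag
    mkdir /mnt/pmem/database
    xfs_io -c 'chattr +x' /mnt/pmem/database
    # Force direct access for every file, regardless of the attribute flag
    mount -o dax=always /dev/pmem0 /mnt/pmem
    # Disable direct access entirely, ignoring any per-inode attribute
    mount -o dax=never /dev/pmem0 /mnt/pmem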
Per-page metadata allocation The fsdax mode also requires allocating per-page metadata in the system DRAM or on the NVDIMM device itself. The overhead of this data structure is 64 bytes for each 4-KiB page: On small devices, the amount of overhead is small enough to fit in DRAM with no problems. For example, a 16-GiB namespace only requires 256 MiB for page structures. Because NVDIMM devices are usually small and expensive, storing the page tracking data structures in DRAM is preferable. On NVDIMM devices that are terabytes in size or larger, the amount of memory required to store the page tracking data structures might exceed the amount of DRAM in the system. One TiB of NVDIMM requires 16 GiB for page structures. As a result, storing the data structures on the NVDIMM itself is preferable in such cases. You can configure where per-page metadata is stored using the --map option when configuring a namespace: To allocate in the system RAM, use --map=mem . To allocate on the NVDIMM, use --map=dev . 6.8.2. Reconfiguring an existing NVDIMM namespace to file system DAX mode You can reconfigure an existing Non-Volatile Dual In-line Memory Modules (NVDIMM) namespace to file system DAX mode. Warning Reconfiguring a namespace deletes previously stored data on the namespace. Prerequisites The ndctl utility is installed. For more information, see Installing ndctl . Procedure List all namespaces on your system: Reconfigure any namespace: Example 6.5. Reconfiguring a namespace as file system DAX To use namespace0.0 for a file system that supports DAX, use the following command: The namespace is now available at the /dev/pmem0 path. Verification Verify that the existing namespaces on your system are reconfigured: Additional resources ndctl-create-namespace(1) man page on your system 6.8.3. Creating a new NVDIMM namespace in file system DAX mode You can create a new file system DAX namespace on a Non-Volatile Dual In-line Memory Modules (NVDIMM) device if there is available space in the region. Prerequisites The ndctl utility is installed. For more information, see Installing ndctl . The NVDIMM device supports labels to create multiple namespaces in a region. You can check this using the following command: This indicates that it read the label of one NVDIMM device. If the value is 0, your device does not support labels. Procedure List the pmem regions on your system that have available space. In the following example, space is available in the region1 and region0 regions: Allocate one or more namespaces on any of the available regions: Example 6.6. Creating a namespace on a region The following command creates a 36-GiB file system DAX namespace on region0 : The namespace is now available as /dev/pmem0.3 . Verification Verify that the new namespace is created in file system DAX mode: Additional resources ndctl-create-namespace(1) man page on your system 6.8.4. Creating a file system on a file system DAX device You can create a file system on a file system DAX device and mount the file system. After creating a file system, applications can use persistent memory and create files in the mount-point directory, open the files, and use the mmap operation to map the files for direct access. On Red Hat Enterprise Linux 8, both the XFS and ext4 file systems can be created on NVDIMM as a Technology Preview. 
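Putting the namespace, metadata, and file system pieces together, the following is a minimal sketch of the workflow that the procedure below walks through step by step. The region, size, resulting device name /dev/pmem0.3 , and mount point /mnt/pmem are illustrative, and -m reflink=0 is assumed here to be the mkfs.xfs option that disables shared copy-on-write data extents:

    # Create an fsdax namespace, storing per-page metadata on the NVDIMM itself
    ndctl create-namespace --mode=fsdax --map=dev --region=region0 --size=36G
    # Create an XFS file system with reflink disabled and the stripe unit and width set for large page mappings
    mkfs.xfs -m reflink=0 -d su=2m,sw=1 /dev/pmem0.3
    # Mount it; with no dax option this defaults to dax=inode
    mkdir -p /mnt/pmem
    mount /dev/pmem0.3 /mnt/pmem
    # Mark a directory so that files created in it are direct-access enabled
    xfs_io -c 'chattr +x' /mnt/pmem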
Procedure Optional: Create a partition on the file system DAX device. For more information, see Creating a partition with parted . Note When creating partitions on an fsdax device, partitions must be aligned on page boundaries. On the Intel 64 and AMD64 architecture, at least 4 KiB alignment is required for the start and end of the partition. 2 MiB is the preferred alignment. By default, the parted tool aligns partitions on 1 MiB boundaries. For the first partition, specify 2 MiB as the start of the partition. If the size of the partition is a multiple of 2 MiB, all other partitions are also aligned. Create an XFS or ext4 file system on the partition or the NVDIMM device: Note Dax-capable and reflinked files can now co-exist on the file system. However, for an individual file, dax and reflink are mutually exclusive. For XFS, disable shared copy-on-write data extents because they are incompatible with the dax mount option. Also, to increase the likelihood of large page mappings, set the stripe unit and stripe width. Mount the file system: There is no need to mount a file system with the dax option to enable direct access mode. When you do not specify any dax option while mounting, the file system is in the dax=inode mode. Set the dax attribute on a file before direct access mode is activated for it. Additional resources mkfs.xfs(8) man page on your system NVDIMM in file system direct access mode 6.9. Monitoring NVDIMM health using S.M.A.R.T. Some Non-Volatile Dual In-line Memory Modules (NVDIMM) devices support Self-Monitoring, Analysis and Reporting Technology (S.M.A.R.T.) interfaces for retrieving health information. Important Monitor NVDIMM health regularly to prevent data loss. If S.M.A.R.T. reports problems with the health status of an NVDIMM device, replace it as described in Detecting and replacing a broken NVDIMM device . Prerequisites Optional: On some systems, load the acpi_ipmi driver to retrieve health information using the following command: Procedure Access the health information: Additional resources ndctl-list(1) man page on your system 6.10. Detecting and replacing a broken NVDIMM device If you find error messages related to Non-Volatile Dual In-line Memory Modules (NVDIMM) reported in your system log or by S.M.A.R.T., it might mean an NVDIMM device is failing. In that case, it is necessary to: Detect which NVDIMM device is failing Back up data stored on it Physically replace the device A consolidated command sketch for these steps follows at the end of this chapter. Procedure Detect the broken device: Find the phys_id attribute of the broken NVDIMM: From the example, you know that nmem0 is the broken NVDIMM. Therefore, find the phys_id attribute of nmem0 . Example 6.7. The phys_id attributes of NVDIMMs In the following example, the phys_id is 0x10 : Find the memory slot of the broken NVDIMM: In the output, find the entry where the Handle identifier matches the phys_id attribute of the broken NVDIMM. The Locator field lists the memory slot used by the broken NVDIMM. Example 6.8. NVDIMM Memory Slot Listing In the following example, the nmem0 device matches the 0x0010 identifier and uses the DIMM-XXX-YYYY memory slot: Back up all data in the namespaces on the NVDIMM. If you do not back up the data before replacing the NVDIMM, the data will be lost when you remove the NVDIMM from your system. Warning In some cases, such as when the NVDIMM is completely broken, the backup might fail. To prevent this, regularly monitor your NVDIMM devices using S.M.A.R.T. as described in Monitoring NVDIMM health using S.M.A.R.T. and replace failing NVDIMMs before they break. List the namespaces on the NVDIMM: Example 6.9. 
NVDIMM namespaces listing In the following example, the nmem0 device contains the namespace0.0 and namespace0.2 namespaces, which you need to back up: Replace the broken NVDIMM physically. Additional resources ndctl-list(1) and dmidecode(8) man pages on your system | [
"yum install ndctl",
"ndctl list --namespaces --idle [ { \"dev\":\"namespace1.0\", \"mode\":\"raw\", \"size\":34359738368, \"state\":\"disabled\", \"numa_node\":1 }, { \"dev\":\"namespace0.0\", \"mode\":\"raw\", \"size\":34359738368, \"state\":\"disabled\", \"numa_node\":0 } ]",
"ndctl create-namespace --force --reconfig= namespace-ID --mode=sector",
"ndctl create-namespace --force --reconfig=namespace1.0 --mode=sector { \"dev\":\"namespace1.0\", \"mode\":\"sector\", \"size\":\"755.26 GiB (810.95 GB)\", \"uuid\":\"2509949d-1dc4-4ee0-925a-4542b28aa616\", \"sector_size\":4096, \"blockdev\":\"pmem1s\" }",
"ndctl list --namespace namespace1.0 [ { \"dev\":\"namespace1.0\", \"mode\":\"sector\", \"size\":810954706944, \"uuid\":\"2509949d-1dc4-4ee0-925a-4542b28aa616\", \"sector_size\":4096, \"blockdev\":\"pmem1s\" } ]",
"ndctl read-labels nmem0 >/dev/null read 1 nmem",
"ndctl list --regions [ { \"dev\":\"region1\", \"size\":2156073582592, \"align\":16777216, \"available_size\":2117418876928, \"max_available_extent\":2117418876928, \"type\":\"pmem\", \"iset_id\":-9102197055295954944, \"badblock_count\":1, \"persistence_domain\":\"memory_controller\" }, { \"dev\":\"region0\", \"size\":2156073582592, \"align\":16777216, \"available_size\":2143188680704, \"max_available_extent\":2143188680704, \"type\":\"pmem\", \"iset_id\":736272362787276936, \"badblock_count\":3, \"persistence_domain\":\"memory_controller\" } ]",
"ndctl create-namespace --mode=sector --region=region N --size=namespace-size",
"ndctl create-namespace --mode=sector --region=region0 --size=36G { \"dev\":\"namespace0.1\", \"mode\":\"sector\", \"size\":\"35.96 GiB (38.62 GB)\", \"uuid\":\"ff5a0a16-3495-4ce8-b86b-f0e3bd9d1817\", \"sector_size\":4096, \"blockdev\":\"pmem0.1s\" }",
"ndctl list -RN -n namespace0.1 { \"regions\":[ { \"dev\":\"region0\", \"size\":2156073582592, \"align\":16777216, \"available_size\":2104533975040, \"max_available_extent\":2104533975040, \"type\":\"pmem\", \"iset_id\":736272362787276936, \"badblock_count\":3, \"persistence_domain\":\"memory_controller\", \"namespaces\":[ { \"dev\":\"namespace0.1\", \"mode\":\"sector\", \"size\":38615912448, \"uuid\":\"ff5a0a16-3495-4ce8-b86b-f0e3bd9d1817\", \"sector_size\":4096, \"blockdev\":\"pmem0.1s\" } ] } ] }",
"ndctl list --namespaces --idle [ { \"dev\":\"namespace1.0\", \"mode\":\"raw\", \"size\":34359738368, \"uuid\":\"ac951312-b312-4e76-9f15-6e00c8f2e6f4\" \"state\":\"disabled\", \"numa_node\":1 }, { \"dev\":\"namespace0.0\", \"mode\":\"raw\", \"size\":38615912448, \"uuid\":\"ff5a0a16-3495-4ce8-b86b-f0e3bd9d1817\", \"state\":\"disabled\", \"numa_node\":0 } ]",
"ndctl create-namespace --force --mode=devdax --reconfig= namespace-ID",
"ndctl create-namespace --force --mode=devdax --align=2M --reconfig=namespace0.1 { \"dev\":\"namespace0.1\", \"mode\":\"devdax\", \"map\":\"dev\", \"size\":\"35.44 GiB (38.05 GB)\", \"uuid\":\"426d6a52-df92-43d2-8cc7-046241d6d761\", \"daxregion\":{ \"id\":0, \"size\":\"35.44 GiB (38.05 GB)\", \"align\":2097152, \"devices\":[ { \"chardev\":\"dax0.1\", \"size\":\"35.44 GiB (38.05 GB)\", \"target_node\":4, \"mode\":\"devdax\" } ] }, \"align\":2097152 }",
"ndctl list --namespace namespace0.1 [ { \"dev\":\"namespace0.1\", \"mode\":\"devdax\", \"map\":\"dev\", \"size\":38048628736, \"uuid\":\"426d6a52-df92-43d2-8cc7-046241d6d761\", \"chardev\":\"dax0.1\", \"align\":2097152 } ]",
"ndctl read-labels nmem0 >/dev/null read 1 nmem",
"ndctl list --regions [ { \"dev\":\"region1\", \"size\":2156073582592, \"align\":16777216, \"available_size\":2117418876928, \"max_available_extent\":2117418876928, \"type\":\"pmem\", \"iset_id\":-9102197055295954944, \"badblock_count\":1, \"persistence_domain\":\"memory_controller\" }, { \"dev\":\"region0\", \"size\":2156073582592, \"align\":16777216, \"available_size\":2143188680704, \"max_available_extent\":2143188680704, \"type\":\"pmem\", \"iset_id\":736272362787276936, \"badblock_count\":3, \"persistence_domain\":\"memory_controller\" } ]",
"ndctl create-namespace --mode=devdax --region=region N --size= namespace-size",
"ndctl create-namespace --mode=devdax --region=region0 --align=2M --size=36G { \"dev\":\"namespace0.2\", \"mode\":\"devdax\", \"map\":\"dev\", \"size\":\"35.44 GiB (38.05 GB)\", \"uuid\":\"89d13f41-be6c-425b-9ec7-1e2a239b5303\", \"daxregion\":{ \"id\":0, \"size\":\"35.44 GiB (38.05 GB)\", \"align\":2097152, \"devices\":[ { \"chardev\":\"dax0.2\", \"size\":\"35.44 GiB (38.05 GB)\", \"target_node\":4, \"mode\":\"devdax\" } ] }, \"align\":2097152 }",
"ndctl list -RN -n namespace0.2 { \"regions\":[ { \"dev\":\"region0\", \"size\":2156073582592, \"align\":16777216, \"available_size\":2065879269376, \"max_available_extent\":2065879269376, \"type\":\"pmem\", \"iset_id\":736272362787276936, \"badblock_count\":3, \"persistence_domain\":\"memory_controller\", \"namespaces\":[ { \"dev\":\"namespace0.2\", \"mode\":\"devdax\", \"map\":\"dev\", \"size\":38048628736, \"uuid\":\"89d13f41-be6c-425b-9ec7-1e2a239b5303\", \"chardev\":\"dax0.2\", \"align\":2097152 } ] } ] }",
"ndctl list --namespaces --idle [ { \"dev\":\"namespace1.0\", \"mode\":\"raw\", \"size\":34359738368, \"uuid\":\"ac951312-b312-4e76-9f15-6e00c8f2e6f4\" \"state\":\"disabled\", \"numa_node\":1 }, { \"dev\":\"namespace0.0\", \"mode\":\"raw\", \"size\":38615912448, \"uuid\":\"ff5a0a16-3495-4ce8-b86b-f0e3bd9d1817\", \"state\":\"disabled\", \"numa_node\":0 } ]",
"ndctl create-namespace --force --mode=fsdax --reconfig= namespace-ID",
"ndctl create-namespace --force --mode=fsdax --reconfig=namespace0.0 { \"dev\":\"namespace0.0\", \"mode\":\"fsdax\", \"map\":\"dev\", \"size\":\"11.81 GiB (12.68 GB)\", \"uuid\":\"f8153ee3-c52d-4c6e-bc1d-197f5be38483\", \"sector_size\":512, \"align\":2097152, \"blockdev\":\"pmem0\" }",
"ndctl list --namespace namespace0.0 [ { \"dev\":\"namespace0.0\", \"mode\":\"fsdax\", \"map\":\"dev\", \"size\":12681478144, \"uuid\":\"f8153ee3-c52d-4c6e-bc1d-197f5be38483\", \"sector_size\":512, \"align\":2097152, \"blockdev\":\"pmem0\" } ]",
"ndctl read-labels nmem0 >/dev/null read 1 nmem",
"ndctl list --regions [ { \"dev\":\"region1\", \"size\":2156073582592, \"align\":16777216, \"available_size\":2117418876928, \"max_available_extent\":2117418876928, \"type\":\"pmem\", \"iset_id\":-9102197055295954944, \"badblock_count\":1, \"persistence_domain\":\"memory_controller\" }, { \"dev\":\"region0\", \"size\":2156073582592, \"align\":16777216, \"available_size\":2143188680704, \"max_available_extent\":2143188680704, \"type\":\"pmem\", \"iset_id\":736272362787276936, \"badblock_count\":3, \"persistence_domain\":\"memory_controller\" } ]",
"ndctl create-namespace --mode=fsdax --region=region N --size= namespace-size",
"ndctl create-namespace --mode=fsdax --region=region0 --size=36G { \"dev\":\"namespace0.3\", \"mode\":\"fsdax\", \"map\":\"dev\", \"size\":\"35.44 GiB (38.05 GB)\", \"uuid\":\"99e77865-42eb-4b82-9db6-c6bc9b3959c2\", \"sector_size\":512, \"align\":2097152, \"blockdev\":\"pmem0.3\" }",
"ndctl list -RN -n namespace0.3 { \"regions\":[ { \"dev\":\"region0\", \"size\":2156073582592, \"align\":16777216, \"available_size\":2027224563712, \"max_available_extent\":2027224563712, \"type\":\"pmem\", \"iset_id\":736272362787276936, \"badblock_count\":3, \"persistence_domain\":\"memory_controller\", \"namespaces\":[ { \"dev\":\"namespace0.3\", \"mode\":\"fsdax\", \"map\":\"dev\", \"size\":38048628736, \"uuid\":\"99e77865-42eb-4b82-9db6-c6bc9b3959c2\", \"sector_size\":512, \"align\":2097152, \"blockdev\":\"pmem0.3\" } ] } ] }",
"mkfs.xfs -d su=2m,sw=1 fsdax-partition-or-device",
"mount f_sdax-partition-or-device mount-point_",
"modprobe acpi_ipmi",
"ndctl list --dimms --health [ { \"dev\":\"nmem1\", \"id\":\"8089-a2-1834-00001f13\", \"handle\":17, \"phys_id\":32, \"security\":\"disabled\", \"health\":{ \"health_state\":\"ok\", \"temperature_celsius\":36.0, \"controller_temperature_celsius\":37.0, \"spares_percentage\":100, \"alarm_temperature\":false, \"alarm_controller_temperature\":false, \"alarm_spares\":false, \"alarm_enabled_media_temperature\":true, \"temperature_threshold\":82.0, \"alarm_enabled_ctrl_temperature\":true, \"controller_temperature_threshold\":98.0, \"alarm_enabled_spares\":true, \"spares_threshold\":50, \"shutdown_state\":\"clean\", \"shutdown_count\":4 } }, [...] ]",
"ndctl list --dimms --regions --health { \"dimms\":[ { \"dev\":\"nmem1\", \"id\":\"8089-a2-1834-00001f13\", \"handle\":17, \"phys_id\":32, \"security\":\"disabled\", \"health\":{ \"health_state\":\"ok\", \"temperature_celsius\":35.0, [...] } [...] }",
"ndctl list --dimms --human",
"ndctl list --dimms --human [ { \"dev\":\"nmem1\", \"id\":\"XXXX-XX-XXXX-XXXXXXXX\", \"handle\":\"0x120\", \"phys_id\":\"0x1c\" }, { \"dev\":\"nmem0\", \"id\":\"XXXX-XX-XXXX-XXXXXXXX\", \"handle\":\"0x20\", \"phys_id\":\"0x10\", \"flag_failed_flush\":true, \"flag_smart_event\":true } ]",
"dmidecode",
"dmidecode Handle 0x0010, DMI type 17, 40 bytes Memory Device Array Handle: 0x0004 Error Information Handle: Not Provided Total Width: 72 bits Data Width: 64 bits Size: 125 GB Form Factor: DIMM Set: 1 Locator: DIMM-XXX-YYYY Bank Locator: Bank0 Type: Other Type Detail: Non-Volatile Registered (Buffered)",
"ndctl list --namespaces --dimm= DIMM-ID-number",
"ndctl list --namespaces --dimm=0 [ { \"dev\":\"namespace0.2\", \"mode\":\"sector\", \"size\":67042312192, \"uuid\":\"XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX\", \"raw_uuid\":\"XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX\", \"sector_size\":4096, \"blockdev\":\"pmem0.2s\", \"numa_node\":0 }, { \"dev\":\"namespace0.0\", \"mode\":\"sector\", \"size\":67042312192, \"uuid\":\"XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX\", \"raw_uuid\":\"XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX\", \"sector_size\":4096, \"blockdev\":\"pmem0s\", \"numa_node\":0 } ]"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_storage_devices/using-nvdimm-persistent-memory-storage_managing-storage-devices |
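As referenced in Section 6.10, the detection and backup steps can be strung together as follows. This is a hedged sketch that assumes nmem0 is the failing module and that its namespaces are backed up with a plain block copy; any backup tool you trust works equally well, and the device and path names are illustrative:

    # Check health data for every NVDIMM; failing modules often show flag_failed_flush or flag_smart_event
    ndctl list --dimms --health
    # Map the suspect module to a physical slot by matching its phys_id to the dmidecode Handle
    ndctl list --dimms --human
    dmidecode
    # List the namespaces that live on the suspect module so they can be backed up
    ndctl list --namespaces --dimm=0
    # Example backup of one namespace's block device before the DIMM is physically replaced
    dd if=/dev/pmem0s of=/backup/pmem0s.img bs=1M status=progress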
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_openshift_data_foundation_using_google_cloud/making-open-source-more-inclusive |
Chapter 8. Scaling storage capacity of GCP OpenShift Data Foundation cluster | Chapter 8. Scaling storage capacity of GCP OpenShift Data Foundation cluster To scale the storage capacity of your configured Red Hat OpenShift Data Foundation worker nodes on GCP cluster, you can increase the capacity by adding three disks at a time. Three disks are needed since OpenShift Data Foundation uses a replica count of 3 to maintain the high availability. So the amount of storage consumed is three times the usable space. Note Usable space might vary when encryption is enabled or replica 2 pools are being used. 8.1. Scaling up storage capacity on a cluster To increase the storage capacity in a dynamically created storage cluster on an user-provisioned infrastructure, you can add storage capacity and performance to your configured Red Hat OpenShift Data Foundation worker nodes. Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. The disk should be of the same size and type as used during initial deployment. Procedure Log in to the OpenShift Web Console. Click Operators Installed Operators . Click OpenShift Data Foundation Operator. Click the Storage Systems tab. Click the Action Menu (...) on the far right of the storage system name to extend the options menu. Select Add Capacity from the options menu. Select the Storage Class . Choose the storage class which you wish to use to provision new storage devices. Click Add . To check the status, navigate to Storage Data Foundation and verify that the Storage System in the Status card has a green tick. Verification steps Verify the Raw Capacity card. In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Block and File tab, check the Raw Capacity card. Note that the capacity increases based on your selections. Note The raw capacity does not take replication into account and shows the full capacity. Verify that the new object storage devices (OSDs) and their corresponding new Persistent Volume Claims (PVCs) are created. To view the state of the newly created OSDs: Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. To view the state of the PVCs: Click Storage Persistent Volume Claims from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For example: Example output: For each of the nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the selected hosts. <node-name> Is the name of the node. Check for the crypt keyword beside the ocs-deviceset names. Important Cluster reduction is supported only with the Red Hat Support Team's assistance. 8.2. Scaling out storage capacity on a GCP cluster OpenShift Data Foundation is highly scalable. It can be scaled out by adding new nodes with required storage and enough hardware resources in terms of CPU and RAM. 
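Before adding capacity or nodes, a quick look at the current layout helps decide which of the two paths applies. A small sketch using commands from this chapter; the grep patterns are illustrative:

    # Worker nodes that already carry the OpenShift Data Foundation label
    oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
    # Current OSD pods and the nodes they are scheduled on
    oc get pods -n openshift-storage -o wide | grep rook-ceph-osd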
Practically there is no limit on the number of nodes which can be added but from the support perspective 2000 nodes is the limit for OpenShift Data Foundation. Scaling out storage capacity can be broken down into two steps Adding new node Scaling up the storage capacity Note OpenShift Data Foundation does not support heterogeneous OSD/Disk sizes. 8.2.1. Adding a node You can add nodes to increase the storage capacity when existing worker nodes are already running at their maximum supported OSDs or there are not enough resources to add new OSDs on the existing nodes. It is always recommended to add nodes in the multiple of three, each of them in different failure domains. While it is recommended to add nodes in the multiple of three, you still have the flexibility to add one node at a time in the flexible scaling deployment. Refer to the Knowledgebase article Verify if flexible scaling is enabled . Note OpenShift Data Foundation does not support heterogeneous disk size and types. The new nodes to be added should have the disk of the same type and size which was used during OpenShift Data Foundation deployment. 8.2.1.1. Adding a node to an installer-provisioned infrastructure Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Navigate to Compute Machine Sets . On the machine set where you want to add nodes, select Edit Machine Count . Add the amount of nodes, and click Save . Click Compute Nodes and confirm if the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node. For the new node, click Action menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . Note It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. In case of bare metal installer-provisioned infrastructure deployment, you must expand the cluster first. For instructions, see Expanding the cluster . Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 8.2.2. Scaling up storage capacity To scale up storage capacity, see Scaling up storage capacity on a cluster . | [
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/ <OSD-pod-name>",
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm",
"NODE compute-1",
"oc debug node/ <node-name>",
"chroot /host",
"lsblk",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/scaling_storage/scaling_storage_capacity_of_gcp_openshift_data_foundation_cluster |
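The cluster-wide encryption check from Section 8.1 can be compressed into a few commands. A sketch that reuses the example pod and node names from the chapter above (rook-ceph-osd-0-544db49d7f-qrgqm and compute-1 are illustrative):

    # Find the node that runs the new OSD pod
    oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm
    # Open a debug shell on that node and inspect its block devices
    oc debug node/compute-1
    chroot /host
    # Encrypted OSDs show "crypt" next to their ocs-deviceset devices
    lsblk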
Chapter 5. Pipelines CLI (tkn) | Chapter 5. Pipelines CLI (tkn) 5.1. Installing tkn Use the tkn CLI to manage Red Hat OpenShift Pipelines from a terminal. The following section describes how to install tkn on different platforms. You can also find the URL to the latest binaries from the OpenShift Container Platform web console by clicking the ? icon in the upper-right corner and selecting Command Line Tools . 5.1.1. Installing Red Hat OpenShift Pipelines CLI (tkn) on Linux For Linux distributions, you can download the CLI directly as a tar.gz archive. Procedure Download the relevant CLI. Linux (x86_64, amd64) Linux on IBM Z and LinuxONE (s390x) Linux on IBM Power Systems (ppc64le) Unpack the archive: USD tar xvzf <file> Place the tkn binary in a directory that is on your PATH . To check your PATH , run: USD echo USDPATH 5.1.2. Installing Red Hat OpenShift Pipelines CLI (tkn) on Linux using an RPM For Red Hat Enterprise Linux (RHEL) version 8, you can install the Red Hat OpenShift Pipelines CLI ( tkn ) as an RPM. Prerequisites You have an active OpenShift Container Platform subscription on your Red Hat account. You have root or sudo privileges on your local system. Procedure Register with Red Hat Subscription Manager: # subscription-manager register Pull the latest subscription data: # subscription-manager refresh List the available subscriptions: # subscription-manager list --available --matches '*pipelines*' In the output for the command, find the pool ID for your OpenShift Container Platform subscription and attach the subscription to the registered system: # subscription-manager attach --pool=<pool_id> Enable the repositories required by Red Hat OpenShift Pipelines: Linux (x86_64, amd64) # subscription-manager repos --enable="pipelines-1.7-for-rhel-8-x86_64-rpms" Linux on IBM Z and LinuxONE (s390x) # subscription-manager repos --enable="pipelines-1.7-for-rhel-8-s390x-rpms" Linux on IBM Power Systems (ppc64le) # subscription-manager repos --enable="pipelines-1.7-for-rhel-8-ppc64le-rpms" Install the openshift-pipelines-client package: # yum install openshift-pipelines-client After you install the CLI, it is available using the tkn command: USD tkn version 5.1.3. Installing Red Hat OpenShift Pipelines CLI (tkn) on Windows For Windows, the tkn CLI is provided as a zip archive. Procedure Download the CLI . Unzip the archive with a ZIP program. Add the location of your tkn.exe file to your PATH environment variable. To check your PATH , open the command prompt and run the command: C:\> path 5.1.4. Installing Red Hat OpenShift Pipelines CLI (tkn) on macOS For macOS, the tkn CLI is provided as a tar.gz archive. Procedure Download the CLI . Unpack and unzip the archive. Move the tkn binary to a directory on your PATH. To check your PATH , open a terminal window and run: USD echo USDPATH 5.2. Configuring the OpenShift Pipelines tkn CLI Configure the Red Hat OpenShift Pipelines tkn CLI to enable tab completion. 5.2.1. Enabling tab completion After you install the tkn CLI, you can enable tab completion to automatically complete tkn commands or suggest options when you press Tab. Prerequisites You must have the tkn CLI tool installed. You must have bash-completion installed on your local system. Procedure The following procedure enables tab completion for Bash. 
Save the Bash completion code to a file: USD tkn completion bash > tkn_bash_completion Copy the file to /etc/bash_completion.d/ : USD sudo cp tkn_bash_completion /etc/bash_completion.d/ Alternatively, you can save the file to a local directory and source it from your .bashrc file instead. Tab completion is enabled when you open a new terminal. 5.3. OpenShift Pipelines tkn reference This section lists the basic tkn CLI commands. 5.3.1. Basic syntax tkn [command or options] [arguments... ] 5.3.2. Global options --help, -h 5.3.3. Utility commands 5.3.3.1. tkn Parent command for tkn CLI. Example: Display all options USD tkn 5.3.3.2. completion [shell] Print shell completion code which must be evaluated to provide interactive completion. Supported shells are bash and zsh . Example: Completion code for bash shell USD tkn completion bash 5.3.3.3. version Print version information of the tkn CLI. Example: Check the tkn version USD tkn version 5.3.4. Pipelines management commands 5.3.4.1. pipeline Manage pipelines. Example: Display help USD tkn pipeline --help 5.3.4.2. pipeline delete Delete a pipeline. Example: Delete the mypipeline pipeline from a namespace USD tkn pipeline delete mypipeline -n myspace 5.3.4.3. pipeline describe Describe a pipeline. Example: Describe the mypipeline pipeline USD tkn pipeline describe mypipeline 5.3.4.4. pipeline list Display a list of pipelines. Example: Display a list of pipelines USD tkn pipeline list 5.3.4.5. pipeline logs Display the logs for a specific pipeline. Example: Stream the live logs for the mypipeline pipeline USD tkn pipeline logs -f mypipeline 5.3.4.6. pipeline start Start a pipeline. Example: Start the mypipeline pipeline USD tkn pipeline start mypipeline 5.3.5. Pipeline run commands 5.3.5.1. pipelinerun Manage pipeline runs. Example: Display help USD tkn pipelinerun -h 5.3.5.2. pipelinerun cancel Cancel a pipeline run. Example: Cancel the mypipelinerun pipeline run from a namespace USD tkn pipelinerun cancel mypipelinerun -n myspace 5.3.5.3. pipelinerun delete Delete a pipeline run. Example: Delete pipeline runs from a namespace USD tkn pipelinerun delete mypipelinerun1 mypipelinerun2 -n myspace Example: Delete all pipeline runs from a namespace, except the five most recently executed pipeline runs USD tkn pipelinerun delete -n myspace --keep 5 1 1 Replace 5 with the number of most recently executed pipeline runs you want to retain. Example: Delete all pipelines USD tkn pipelinerun delete --all Note Starting with Red Hat OpenShift Pipelines 1.6, the tkn pipelinerun delete --all command does not delete any resources that are in the running state. 5.3.5.4. pipelinerun describe Describe a pipeline run. Example: Describe the mypipelinerun pipeline run in a namespace USD tkn pipelinerun describe mypipelinerun -n myspace 5.3.5.5. pipelinerun list List pipeline runs. Example: Display a list of pipeline runs in a namespace USD tkn pipelinerun list -n myspace 5.3.5.6. pipelinerun logs Display the logs of a pipeline run. Example: Display the logs of the mypipelinerun pipeline run with all tasks and steps in a namespace USD tkn pipelinerun logs mypipelinerun -a -n myspace 5.3.6. Task management commands 5.3.6.1. task Manage tasks. Example: Display help USD tkn task -h 5.3.6.2. task delete Delete a task. Example: Delete mytask1 and mytask2 tasks from a namespace USD tkn task delete mytask1 mytask2 -n myspace 5.3.6.3. task describe Describe a task. Example: Describe the mytask task in a namespace USD tkn task describe mytask -n myspace 5.3.6.4. 
task list List tasks. Example: List all the tasks in a namespace USD tkn task list -n myspace 5.3.6.5. task logs Display task logs. Example: Display logs for the mytaskrun task run of the mytask task USD tkn task logs mytask mytaskrun -n myspace 5.3.6.6. task start Start a task. Example: Start the mytask task in a namespace USD tkn task start mytask -s <ServiceAccountName> -n myspace 5.3.7. Task run commands 5.3.7.1. taskrun Manage task runs. Example: Display help USD tkn taskrun -h 5.3.7.2. taskrun cancel Cancel a task run. Example: Cancel the mytaskrun task run from a namespace USD tkn taskrun cancel mytaskrun -n myspace 5.3.7.3. taskrun delete Delete a TaskRun. Example: Delete the mytaskrun1 and mytaskrun2 task runs from a namespace USD tkn taskrun delete mytaskrun1 mytaskrun2 -n myspace Example: Delete all but the five most recently executed task runs from a namespace USD tkn taskrun delete -n myspace --keep 5 1 1 Replace 5 with the number of most recently executed task runs you want to retain. 5.3.7.4. taskrun describe Describe a task run. Example: Describe the mytaskrun task run in a namespace USD tkn taskrun describe mytaskrun -n myspace 5.3.7.5. taskrun list List task runs. Example: List all the task runs in a namespace USD tkn taskrun list -n myspace 5.3.7.6. taskrun logs Display task run logs. Example: Display live logs for the mytaskrun task run in a namespace USD tkn taskrun logs -f mytaskrun -n myspace 5.3.8. Condition management commands 5.3.8.1. condition Manage Conditions. Example: Display help USD tkn condition --help 5.3.8.2. condition delete Delete a Condition. Example: Delete the mycondition1 Condition from a namespace USD tkn condition delete mycondition1 -n myspace 5.3.8.3. condition describe Describe a Condition. Example: Describe the mycondition1 Condition in a namespace USD tkn condition describe mycondition1 -n myspace 5.3.8.4. condition list List Conditions. Example: List Conditions in a namespace USD tkn condition list -n myspace 5.3.9. Pipeline Resource management commands 5.3.9.1. resource Manage Pipeline Resources. Example: Display help USD tkn resource -h 5.3.9.2. resource create Create a Pipeline Resource. Example: Create a Pipeline Resource in a namespace USD tkn resource create -n myspace This is an interactive command that asks for input on the name of the Resource, type of the Resource, and the values based on the type of the Resource. 5.3.9.3. resource delete Delete a Pipeline Resource. Example: Delete the myresource Pipeline Resource from a namespace USD tkn resource delete myresource -n myspace 5.3.9.4. resource describe Describe a Pipeline Resource. Example: Describe the myresource Pipeline Resource USD tkn resource describe myresource -n myspace 5.3.9.5. resource list List Pipeline Resources. Example: List all Pipeline Resources in a namespace USD tkn resource list -n myspace 5.3.10. ClusterTask management commands 5.3.10.1. clustertask Manage ClusterTasks. Example: Display help USD tkn clustertask --help 5.3.10.2. clustertask delete Delete a ClusterTask resource in a cluster. Example: Delete mytask1 and mytask2 ClusterTasks USD tkn clustertask delete mytask1 mytask2 5.3.10.3. clustertask describe Describe a ClusterTask. Example: Describe the mytask ClusterTask USD tkn clustertask describe mytask1 5.3.10.4. clustertask list List ClusterTasks. Example: List ClusterTasks USD tkn clustertask list 5.3.10.5. clustertask start Start ClusterTasks. Example: Start the mytask ClusterTask USD tkn clustertask start mytask 5.3.11. 
Trigger management commands 5.3.11.1. eventlistener Manage EventListeners. Example: Display help USD tkn eventlistener -h 5.3.11.2. eventlistener delete Delete an EventListener. Example: Delete mylistener1 and mylistener2 EventListeners in a namespace USD tkn eventlistener delete mylistener1 mylistener2 -n myspace 5.3.11.3. eventlistener describe Describe an EventListener. Example: Describe the mylistener EventListener in a namespace USD tkn eventlistener describe mylistener -n myspace 5.3.11.4. eventlistener list List EventListeners. Example: List all the EventListeners in a namespace USD tkn eventlistener list -n myspace 5.3.11.5. eventlistener logs Display logs of an EventListener. Example: Display the logs of the mylistener EventListener in a namespace USD tkn eventlistener logs mylistener -n myspace 5.3.11.6. triggerbinding Manage TriggerBindings. Example: Display TriggerBindings help USD tkn triggerbinding -h 5.3.11.7. triggerbinding delete Delete a TriggerBinding. Example: Delete mybinding1 and mybinding2 TriggerBindings in a namespace USD tkn triggerbinding delete mybinding1 mybinding2 -n myspace 5.3.11.8. triggerbinding describe Describe a TriggerBinding. Example: Describe the mybinding TriggerBinding in a namespace USD tkn triggerbinding describe mybinding -n myspace 5.3.11.9. triggerbinding list List TriggerBindings. Example: List all the TriggerBindings in a namespace USD tkn triggerbinding list -n myspace 5.3.11.10. triggertemplate Manage TriggerTemplates. Example: Display TriggerTemplate help USD tkn triggertemplate -h 5.3.11.11. triggertemplate delete Delete a TriggerTemplate. Example: Delete mytemplate1 and mytemplate2 TriggerTemplates in a namespace USD tkn triggertemplate delete mytemplate1 mytemplate2 -n `myspace` 5.3.11.12. triggertemplate describe Describe a TriggerTemplate. Example: Describe the mytemplate TriggerTemplate in a namespace USD tkn triggertemplate describe mytemplate -n `myspace` 5.3.11.13. triggertemplate list List TriggerTemplates. Example: List all the TriggerTemplates in a namespace USD tkn triggertemplate list -n myspace 5.3.11.14. clustertriggerbinding Manage ClusterTriggerBindings. Example: Display ClusterTriggerBindings help USD tkn clustertriggerbinding -h 5.3.11.15. clustertriggerbinding delete Delete a ClusterTriggerBinding. Example: Delete myclusterbinding1 and myclusterbinding2 ClusterTriggerBindings USD tkn clustertriggerbinding delete myclusterbinding1 myclusterbinding2 5.3.11.16. clustertriggerbinding describe Describe a ClusterTriggerBinding. Example: Describe the myclusterbinding ClusterTriggerBinding USD tkn clustertriggerbinding describe myclusterbinding 5.3.11.17. clustertriggerbinding list List ClusterTriggerBindings. Example: List all ClusterTriggerBindings USD tkn clustertriggerbinding list 5.3.12. Hub interaction commands Interact with Tekton Hub for resources such as tasks and pipelines. 5.3.12.1. hub Interact with hub. Example: Display help USD tkn hub -h Example: Interact with a hub API server USD tkn hub --api-server https://api.hub.tekton.dev Note For each example, to get the corresponding sub-commands and flags, run tkn hub <command> --help . 5.3.12.2. hub downgrade Downgrade an installed resource. Example: Downgrade the mytask task in the mynamespace namespace to it's older version USD tkn hub downgrade task mytask --to version -n mynamespace 5.3.12.3. hub get Get a resource manifest by its name, kind, catalog, and version. 
Example: Get the manifest for a specific version of the myresource pipeline or task from the tekton catalog USD tkn hub get [pipeline | task] myresource --from tekton --version version 5.3.12.4. hub info Display information about a resource by its name, kind, catalog, and version. Example: Display information about a specific version of the mytask task from the tekton catalog USD tkn hub info task mytask --from tekton --version version 5.3.12.5. hub install Install a resource from a catalog by its kind, name, and version. Example: Install a specific version of the mytask task from the tekton catalog in the mynamespace namespace USD tkn hub install task mytask --from tekton --version version -n mynamespace 5.3.12.6. hub reinstall Reinstall a resource by its kind and name. Example: Reinstall a specific version of the mytask task from the tekton catalog in the mynamespace namespace USD tkn hub reinstall task mytask --from tekton --version version -n mynamespace 5.3.12.7. hub search Search a resource by a combination of name, kind, and tags. Example: Search a resource with a tag cli USD tkn hub search --tags cli 5.3.12.8. hub upgrade Upgrade an installed resource. Example: Upgrade the installed mytask task in the mynamespace namespace to a new version USD tkn hub upgrade task mytask --to version -n mynamespace | [
"tar xvzf <file>",
"echo USDPATH",
"subscription-manager register",
"subscription-manager refresh",
"subscription-manager list --available --matches '*pipelines*'",
"subscription-manager attach --pool=<pool_id>",
"subscription-manager repos --enable=\"pipelines-1.7-for-rhel-8-x86_64-rpms\"",
"subscription-manager repos --enable=\"pipelines-1.7-for-rhel-8-s390x-rpms\"",
"subscription-manager repos --enable=\"pipelines-1.7-for-rhel-8-ppc64le-rpms\"",
"yum install openshift-pipelines-client",
"tkn version",
"C:\\> path",
"echo USDPATH",
"tkn completion bash > tkn_bash_completion",
"sudo cp tkn_bash_completion /etc/bash_completion.d/",
"tkn",
"tkn completion bash",
"tkn version",
"tkn pipeline --help",
"tkn pipeline delete mypipeline -n myspace",
"tkn pipeline describe mypipeline",
"tkn pipeline list",
"tkn pipeline logs -f mypipeline",
"tkn pipeline start mypipeline",
"tkn pipelinerun -h",
"tkn pipelinerun cancel mypipelinerun -n myspace",
"tkn pipelinerun delete mypipelinerun1 mypipelinerun2 -n myspace",
"tkn pipelinerun delete -n myspace --keep 5 1",
"tkn pipelinerun delete --all",
"tkn pipelinerun describe mypipelinerun -n myspace",
"tkn pipelinerun list -n myspace",
"tkn pipelinerun logs mypipelinerun -a -n myspace",
"tkn task -h",
"tkn task delete mytask1 mytask2 -n myspace",
"tkn task describe mytask -n myspace",
"tkn task list -n myspace",
"tkn task logs mytask mytaskrun -n myspace",
"tkn task start mytask -s <ServiceAccountName> -n myspace",
"tkn taskrun -h",
"tkn taskrun cancel mytaskrun -n myspace",
"tkn taskrun delete mytaskrun1 mytaskrun2 -n myspace",
"tkn taskrun delete -n myspace --keep 5 1",
"tkn taskrun describe mytaskrun -n myspace",
"tkn taskrun list -n myspace",
"tkn taskrun logs -f mytaskrun -n myspace",
"tkn condition --help",
"tkn condition delete mycondition1 -n myspace",
"tkn condition describe mycondition1 -n myspace",
"tkn condition list -n myspace",
"tkn resource -h",
"tkn resource create -n myspace",
"tkn resource delete myresource -n myspace",
"tkn resource describe myresource -n myspace",
"tkn resource list -n myspace",
"tkn clustertask --help",
"tkn clustertask delete mytask1 mytask2",
"tkn clustertask describe mytask1",
"tkn clustertask list",
"tkn clustertask start mytask",
"tkn eventlistener -h",
"tkn eventlistener delete mylistener1 mylistener2 -n myspace",
"tkn eventlistener describe mylistener -n myspace",
"tkn eventlistener list -n myspace",
"tkn eventlistener logs mylistener -n myspace",
"tkn triggerbinding -h",
"tkn triggerbinding delete mybinding1 mybinding2 -n myspace",
"tkn triggerbinding describe mybinding -n myspace",
"tkn triggerbinding list -n myspace",
"tkn triggertemplate -h",
"tkn triggertemplate delete mytemplate1 mytemplate2 -n `myspace`",
"tkn triggertemplate describe mytemplate -n `myspace`",
"tkn triggertemplate list -n myspace",
"tkn clustertriggerbinding -h",
"tkn clustertriggerbinding delete myclusterbinding1 myclusterbinding2",
"tkn clustertriggerbinding describe myclusterbinding",
"tkn clustertriggerbinding list",
"tkn hub -h",
"tkn hub --api-server https://api.hub.tekton.dev",
"tkn hub downgrade task mytask --to version -n mynamespace",
"tkn hub get [pipeline | task] myresource --from tekton --version version",
"tkn hub info task mytask --from tekton --version version",
"tkn hub install task mytask --from tekton --version version -n mynamespace",
"tkn hub reinstall task mytask --from tekton --version version -n mynamespace",
"tkn hub search --tags cli",
"tkn hub upgrade task mytask --to version -n mynamespace"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/cli_tools/pipelines-cli-tkn |
Chapter 21. Installing on vSphere | Chapter 21. Installing on vSphere 21.1. Preparing to install on vSphere 21.1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . If you use a firewall and plan to use Telemetry, you configured the firewall to allow the sites required by your cluster. You reviewed your VMware platform licenses. Red Hat does not place any restrictions on your VMware licenses, but some VMware infrastructure components require licensing. 21.1.2. Choosing a method to install OpenShift Container Platform on vSphere You can install OpenShift Container Platform on vSphere by using installer-provisioned or user-provisioned infrastructure. The default installation type uses installer-provisioned infrastructure, where the installation program provisions the underlying infrastructure for the cluster. You can also install OpenShift Container Platform on infrastructure that you provide. If you do not use infrastructure that the installation program provisions, you must manage and maintain the cluster resources yourself. See the Installation process for more information about installer-provisioned and user-provisioned installation processes. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the vSphere platform and the installation process of OpenShift Container Platform. Use the user-provisioned infrastructure installation instructions as a guide; you are free to create the required resources through other methods. 21.1.2.1. Installer-provisioned infrastructure installation of OpenShift Container Platform on vSphere Installer-provisioned infrastructure allows the installation program to preconfigure and automate the provisioning of resources required by OpenShift Container Platform. Installing a cluster on vSphere : You can install OpenShift Container Platform on vSphere by using installer-provisioned infrastructure installation with no customization. Installing a cluster on vSphere with customizations : You can install OpenShift Container Platform on vSphere by using installer-provisioned infrastructure installation with the default customization options. Installing a cluster on vSphere with network customizations : You can install OpenShift Container Platform on installer-provisioned vSphere infrastructure, with network customizations. You can customize your OpenShift Container Platform network configuration during installation, so that your cluster can coexist with your existing IP address allocations and adhere to your network requirements. Installing a cluster on vSphere in a restricted network : You can install a cluster on VMware vSphere infrastructure in a restricted network by creating an internal mirror of the installation release content. You can use this method to deploy OpenShift Container Platform on an internal network that is not visible to the internet. 21.1.2.2. User-provisioned infrastructure installation of OpenShift Container Platform on vSphere User-provisioned infrastructure requires the user to provision all resources required by OpenShift Container Platform. Installing a cluster on vSphere with user-provisioned infrastructure : You can install OpenShift Container Platform on VMware vSphere infrastructure that you provision. 
Installing a cluster on vSphere with network customizations with user-provisioned infrastructure : You can install OpenShift Container Platform on VMware vSphere infrastructure that you provision with customized network configuration options. Installing a cluster on vSphere in a restricted network with user-provisioned infrastructure : OpenShift Container Platform can be installed on VMware vSphere infrastructure that you provision in a restricted network. 21.1.3. VMware vSphere infrastructure requirements You must install the OpenShift Container Platform cluster on a VMware vSphere version 7 instance that meets the requirements for the components that you use. Note OpenShift Container Platform version 4.11 does not support VMware vSphere version 8.0. You can host the VMware vSphere infrastructure on-premise or on a VMware Cloud Verified provider that meets the requirements outlined in the following table: Table 21.1. Version requirements for vSphere virtual environments Virtual environment product Required version VM hardware version 15 or later vSphere ESXi hosts 7 vCenter host 7 Important Installing a cluster on VMware vSphere version 7.0 Update 1 or earlier is now deprecated. These versions are still fully supported, but version 4.11 of OpenShift Container Platform requires vSphere virtual hardware version 15 or later. Hardware version 15 is now the default for vSphere virtual machines in OpenShift Container Platform. To update the hardware version for your vSphere nodes, see the "Updating hardware on nodes running in vSphere" article. If your vSphere nodes are below hardware version 15 or your VMware vSphere version is earlier than 6.7.3, upgrading from OpenShift Container Platform 4.10 to OpenShift Container Platform 4.11 is not available. Table 21.2. Minimum supported vSphere version for VMware components Component Minimum supported versions Description Hypervisor vSphere 7 with HW version 15 This version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. For more information about supported hardware on the latest version of Red Hat Enterprise Linux (RHEL) that is compatible with RHCOS, see Hardware on the Red Hat Customer Portal. Storage with in-tree drivers vSphere 7 This plugin creates vSphere storage by using the in-tree storage drivers for vSphere included in OpenShift Container Platform. Optional: Networking (NSX-T) vSphere 7 vSphere 7 is required for OpenShift Container Platform. For more information about the compatibility of NSX and OpenShift Container Platform, see the Release Notes section of VMware's NSX container plugin documentation . Important You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation. 21.1.4. VMware vSphere CSI Driver Operator requirements To install the vSphere CSI Driver Operator, the following requirements must be met: VMware vSphere version 7.0 Update 1 or later Virtual machines of hardware version 15 or later No third-party vSphere CSI driver already installed in the cluster Important If a third-party vSphere CSI driver is present in the cluster, OpenShift Container Platform does not overwrite it. 
If you continue with the third-party vSphere CSI driver when upgrading to the major version of OpenShift Container Platform, the oc CLI prompts you with the following message: VSphereCSIDriverOperatorCRUpgradeable: VMwareVSphereControllerUpgradeable: found existing unsupported csi.vsphere.vmware.com driver The message informs you that Red Hat does not support the third-party vSphere CSI driver during an OpenShift Container Platform upgrade operation. You can choose to ignore this message and continue with the upgrade operation. Additional resources To remove a third-party vSphere CSI driver, see Removing a third-party vSphere CSI Driver . 21.1.5. Uninstalling an installer-provisioned infrastructure installation of OpenShift Container Platform on vSphere Uninstalling a cluster on vSphere that uses installer-provisioned infrastructure : You can remove a cluster that you deployed on VMware vSphere infrastructure that used installer-provisioned infrastructure. 21.2. Installing a cluster on vSphere In OpenShift Container Platform version 4.11, you can install a cluster on your VMware vSphere instance by using installer-provisioned infrastructure. Note OpenShift Container Platform supports deploying a cluster to a single VMware vCenter only. Deploying a cluster with machines/machine sets on multiple vCenters is not supported. 21.2.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You provisioned persistent storage for your cluster. To deploy a private image registry, your storage must provide ReadWriteMany access modes. The OpenShift Container Platform installer requires access to port 443 on the vCenter and ESXi hosts. You verified that port 443 is accessible. If you use a firewall, you confirmed with the administrator that port 443 is accessible. Control plane nodes must be able to reach vCenter and ESXi hosts on port 443 for the installation to succeed. If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 21.2.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.11, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 21.2.3. VMware vSphere infrastructure requirements You must install the OpenShift Container Platform cluster on a VMware vSphere version 7 instance that meets the requirements for the components that you use. 
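Before you proceed, you can confirm from the command line that your vCenter instance and ESXi hosts report a supported release. The following is a minimal sketch that assumes the govc CLI, which is not part of OpenShift Container Platform, and uses placeholder connection values; adjust the values and inventory paths for your environment.
# Minimal pre-flight sketch, assuming the govc CLI is installed; all values are placeholders.
export GOVC_URL='vcenter.example.com'
export GOVC_USERNAME='[email protected]'
export GOVC_PASSWORD='<password>'
export GOVC_INSECURE=true            # only if vCenter presents a self-signed certificate
# Report the vCenter product and API version; confirm it is a supported 7.x release.
govc about
# Enumerate datacenters, clusters, and ESXi hosts so you can confirm each host runs a
# supported ESXi 7 build, for example by checking it in the vSphere Client.
govc ls
govc ls /<datacenter>/host/<cluster>
The govc about output includes the vCenter product and API version, which you can compare against the version requirements in the tables that follow.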
Note OpenShift Container Platform version 4.11 does not support VMware vSphere version 8.0. You can host the VMware vSphere infrastructure on-premise or on a VMware Cloud Verified provider that meets the requirements outlined in the following table: Table 21.3. Version requirements for vSphere virtual environments Virtual environment product Required version VM hardware version 15 or later vSphere ESXi hosts 7 vCenter host 7 Important Installing a cluster on VMware vSphere version 7.0 Update 1 or earlier is now deprecated. These versions are still fully supported, but version 4.11 of OpenShift Container Platform requires vSphere virtual hardware version 15 or later. Hardware version 15 is now the default for vSphere virtual machines in OpenShift Container Platform. To update the hardware version for your vSphere nodes, see the "Updating hardware on nodes running in vSphere" article. If your vSphere nodes are below hardware version 15 or your VMware vSphere version is earlier than 6.7.3, upgrading from OpenShift Container Platform 4.10 to OpenShift Container Platform 4.11 is not available. Table 21.4. Minimum supported vSphere version for VMware components Component Minimum supported versions Description Hypervisor vSphere 7 with HW version 15 This version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. For more information about supported hardware on the latest version of Red Hat Enterprise Linux (RHEL) that is compatible with RHCOS, see Hardware on the Red Hat Customer Portal. Storage with in-tree drivers vSphere 7 This plugin creates vSphere storage by using the in-tree storage drivers for vSphere included in OpenShift Container Platform. Optional: Networking (NSX-T) vSphere 7 vSphere 7 is required for OpenShift Container Platform. For more information about the compatibility of NSX and OpenShift Container Platform, see the Release Notes section of VMware's NSX container plugin documentation . Important You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation. 21.2.4. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Review the following details about the required network ports. Table 21.5. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 virtual extensible LAN (VXLAN) 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 21.6. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 21.7. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports 21.2.5. 
VMware vSphere CSI Driver Operator requirements To install the vSphere CSI Driver Operator, the following requirements must be met: VMware vSphere version 7.0 Update 1 or later Virtual machines of hardware version 15 or later No third-party vSphere CSI driver already installed in the cluster Important If a third-party vSphere CSI driver is present in the cluster, OpenShift Container Platform does not overwrite it. If you continue with the third-party vSphere CSI driver when upgrading to the major version of OpenShift Container Platform, the oc CLI prompts you with the following message: VSphereCSIDriverOperatorCRUpgradeable: VMwareVSphereControllerUpgradeable: found existing unsupported csi.vsphere.vmware.com driver The message informs you that Red Hat does not support the third-party vSphere CSI driver during an OpenShift Container Platform upgrade operation. You can choose to ignore this message and continue with the upgrade operation. Additional resources To remove a third-party vSphere CSI driver, see Removing a third-party vSphere CSI Driver . To update the hardware version for your vSphere nodes, see Updating hardware on nodes running in vSphere . 21.2.6. vCenter requirements Before you install an OpenShift Container Platform cluster on your vCenter that uses infrastructure that the installer provisions, you must prepare your environment. Required vCenter account privileges To install an OpenShift Container Platform cluster in a vCenter, the installation program requires access to an account with privileges to read and create the required resources. Using an account that has global administrative privileges is the simplest way to access all of the necessary permissions. If you cannot use an account with global administrative privileges, you must create roles to grant the privileges necessary for OpenShift Container Platform cluster installation. While most of the privileges are always required, some are required only if you plan for the installation program to provision a folder to contain the OpenShift Container Platform cluster on your vCenter instance, which is the default behavior. You must create or amend vSphere roles for the specified objects to grant the required privileges. An additional role is required if the installation program is to create a vSphere virtual machine folder. Example 21.1. 
Roles and privileges required for installation in vSphere API vSphere object for role When required Required privileges in vSphere API vSphere vCenter Always Cns.Searchable InventoryService.Tagging.AttachTag InventoryService.Tagging.CreateCategory InventoryService.Tagging.CreateTag InventoryService.Tagging.DeleteCategory InventoryService.Tagging.DeleteTag InventoryService.Tagging.EditCategory InventoryService.Tagging.EditTag Sessions.ValidateSession StorageProfile.Update StorageProfile.View vSphere vCenter Cluster If VMs will be created in the cluster root Host.Config.Storage Resource.AssignVMToPool VApp.AssignResourcePool VApp.Import VirtualMachine.Config.AddNewDisk vSphere vCenter Resource Pool If an existing resource pool is provided Host.Config.Storage Resource.AssignVMToPool VApp.AssignResourcePool VApp.Import VirtualMachine.Config.AddNewDisk vSphere Datastore Always Datastore.AllocateSpace Datastore.Browse Datastore.FileManagement InventoryService.Tagging.ObjectAttachable vSphere Port Group Always Network.Assign Virtual Machine Folder Always InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone VirtualMachine.Provisioning.MarkAsTemplate VirtualMachine.Provisioning.DeployTemplate vSphere vCenter Datacenter If the installation program creates the virtual machine folder InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone VirtualMachine.Provisioning.DeployTemplate VirtualMachine.Provisioning.MarkAsTemplate Folder.Create Folder.Delete Example 21.2. 
Roles and privileges required for installation in vCenter graphical user interface (GUI) vSphere object for role When required Required privileges in vCenter GUI vSphere vCenter Always Cns.Searchable "vSphere Tagging"."Assign or Unassign vSphere Tag" "vSphere Tagging"."Create vSphere Tag Category" "vSphere Tagging"."Create vSphere Tag" vSphere Tagging"."Delete vSphere Tag Category" "vSphere Tagging"."Delete vSphere Tag" "vSphere Tagging"."Edit vSphere Tag Category" "vSphere Tagging"."Edit vSphere Tag" Sessions."Validate session" "Profile-driven storage"."Profile-driven storage update" "Profile-driven storage"."Profile-driven storage view" vSphere vCenter Cluster If VMs will be created in the cluster root Host.Configuration."Storage partition configuration" Resource."Assign virtual machine to resource pool" VApp."Assign resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add new disk" vSphere vCenter Resource Pool If an existing resource pool is provided Host.Configuration."Storage partition configuration" Resource."Assign virtual machine to resource pool" VApp."Assign resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add new disk" vSphere Datastore Always Datastore."Allocate space" Datastore."Browse datastore" Datastore."Low level file operations" "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" vSphere Port Group Always Network."Assign network" Virtual Machine Folder Always "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" Resource."Assign virtual machine to resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add existing disk" "Virtual machine"."Change Configuration"."Add new disk" "Virtual machine"."Change Configuration"."Add or remove device" "Virtual machine"."Change Configuration"."Advanced configuration" "Virtual machine"."Change Configuration"."Set annotation" "Virtual machine"."Change Configuration"."Change CPU count" "Virtual machine"."Change Configuration"."Extend virtual disk" "Virtual machine"."Change Configuration"."Acquire disk lease" "Virtual machine"."Change Configuration"."Modify device settings" "Virtual machine"."Change Configuration"."Change Memory" "Virtual machine"."Change Configuration"."Remove disk" "Virtual machine"."Change Configuration".Rename "Virtual machine"."Change Configuration"."Reset guest information" "Virtual machine"."Change Configuration"."Change resource" "Virtual machine"."Change Configuration"."Change Settings" "Virtual machine"."Change Configuration"."Upgrade virtual machine compatibility" "Virtual machine".Interaction."Guest operating system management by VIX API" "Virtual machine".Interaction."Power off" "Virtual machine".Interaction."Power on" "Virtual machine".Interaction.Reset "Virtual machine"."Edit Inventory"."Create new" "Virtual machine"."Edit Inventory"."Create from existing" "Virtual machine"."Edit Inventory"."Remove" "Virtual machine".Provisioning."Clone virtual machine" "Virtual machine".Provisioning."Mark as template" "Virtual machine".Provisioning."Deploy template" vSphere vCenter Datacenter If the installation program creates the virtual machine folder "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" Resource."Assign virtual machine to resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add existing disk" "Virtual machine"."Change Configuration"."Add new disk" "Virtual machine"."Change Configuration"."Add or remove device" "Virtual machine"."Change Configuration"."Advanced configuration" "Virtual machine"."Change 
Configuration"."Set annotation" "Virtual machine"."Change Configuration"."Change CPU count" "Virtual machine"."Change Configuration"."Extend virtual disk" "Virtual machine"."Change Configuration"."Acquire disk lease" "Virtual machine"."Change Configuration"."Modify device settings" "Virtual machine"."Change Configuration"."Change Memory" "Virtual machine"."Change Configuration"."Remove disk" "Virtual machine"."Change Configuration".Rename "Virtual machine"."Change Configuration"."Reset guest information" "Virtual machine"."Change Configuration"."Change resource" "Virtual machine"."Change Configuration"."Change Settings" "Virtual machine"."Change Configuration"."Upgrade virtual machine compatibility" "Virtual machine".Interaction."Guest operating system management by VIX API" "Virtual machine".Interaction."Power off" "Virtual machine".Interaction."Power on" "Virtual machine".Interaction.Reset "Virtual machine"."Edit Inventory"."Create new" "Virtual machine"."Edit Inventory"."Create from existing" "Virtual machine"."Edit Inventory"."Remove" "Virtual machine".Provisioning."Clone virtual machine" "Virtual machine".Provisioning."Deploy template" "Virtual machine".Provisioning."Mark as template" Folder."Create folder" Folder."Delete folder" Additionally, the user requires some ReadOnly permissions, and some of the roles require permission to propogate the permissions to child objects. These settings vary depending on whether or not you install the cluster into an existing folder. Example 21.3. Required permissions and propagation settings vSphere object When required Propagate to children Permissions required vSphere vCenter Always False Listed required privileges vSphere vCenter Datacenter Existing folder False ReadOnly permission Installation program creates the folder True Listed required privileges vSphere vCenter Cluster Existing resource pool False ReadOnly permission VMs in cluster root True Listed required privileges vSphere vCenter Datastore Always False Listed required privileges vSphere Switch Always False ReadOnly permission vSphere Port Group Always False Listed required privileges vSphere vCenter Virtual Machine Folder Existing folder True Listed required privileges vSphere vCenter Resource Pool Existing resource pool True Listed required privileges For more information about creating an account with only the required privileges, see vSphere Permissions and User Management Tasks in the vSphere documentation. Using OpenShift Container Platform with vMotion If you intend on using vMotion in your vSphere environment, consider the following before installing a OpenShift Container Platform cluster. OpenShift Container Platform generally supports compute-only vMotion, where generally implies that you meet all VMware best practices for vMotion. To help ensure the uptime of your compute and control plane nodes, ensure that you follow the VMware best practices for vMotion, and use VMware anti-affinity rules to improve the availability of OpenShift Container Platform during maintenance or hardware issues. For more information about vMotion and anti-affinity rules, see the VMware vSphere documentation for vMotion networking requirements and VM anti-affinity rules . Using Storage vMotion can cause issues and is not supported. If you are using vSphere volumes in your pods, migrating a VM across datastores, either manually or through Storage vMotion, causes invalid references within OpenShift Container Platform persistent volume (PV) objects that can result in data loss. 
OpenShift Container Platform does not support selective migration of VMDKs across datastores, using datastore clusters for VM provisioning or for dynamic or static provisioning of PVs, or using a datastore that is part of a datastore cluster for dynamic or static provisioning of PVs. Cluster resources When you deploy an OpenShift Container Platform cluster that uses installer-provisioned infrastructure, the installation program must be able to create several resources in your vCenter instance. A standard OpenShift Container Platform installation creates the following vCenter resources: 1 Folder 1 Tag category 1 Tag Virtual machines: 1 template 1 temporary bootstrap node 3 control plane nodes 3 compute machines Although these resources use 856 GB of storage, the bootstrap node is destroyed during the cluster installation process. A minimum of 800 GB of storage is required to use a standard cluster. If you deploy more compute machines, the OpenShift Container Platform cluster will use more storage. Cluster limits Available resources vary between clusters. The number of possible clusters within a vCenter is limited primarily by available storage space and any limitations on the number of required resources. Be sure to consider both limitations to the vCenter resources that the cluster creates and the resources that you require to deploy a cluster, such as IP addresses and networks. Networking requirements You must use the Dynamic Host Configuration Protocol (DHCP) for the network and ensure that the DHCP server is configured to provide persistent IP addresses to the cluster machines. In the DHCP lease, you must configure the DHCP to use the default gateway. All nodes must be in the same VLAN. You cannot scale the cluster using a second VLAN as a Day 2 operation. Additionally, you must create the following networking resources before you install the OpenShift Container Platform cluster: Note It is recommended that each OpenShift Container Platform node in the cluster must have access to a Network Time Protocol (NTP) server that is discoverable via DHCP. Installation is possible without an NTP server. However, asynchronous server clocks will cause errors, which NTP server prevents. Required IP Addresses An installer-provisioned vSphere installation requires two static IP addresses: The API address is used to access the cluster API. The Ingress address is used for cluster ingress traffic. You must provide these IP addresses to the installation program when you install the OpenShift Container Platform cluster. DNS records You must create DNS records for two static IP addresses in the appropriate DNS server for the vCenter instance that hosts your OpenShift Container Platform cluster. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify when you install the cluster. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 21.8. Required DNS records Component Record Description API VIP api.<cluster_name>.<base_domain>. This DNS A/AAAA or CNAME record must point to the load balancer for the control plane machines. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. Ingress VIP *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that points to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. 
This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. 21.2.7. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 21.2.8. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. Prerequisites You have a machine that runs Linux, for example Red Hat Enterprise Linux 8, with 500 MB of local disk space.
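As an alternative to downloading the installation program from the OpenShift Cluster Manager page described in the procedure that follows, you can script the download from the public mirror. This is a hedged sketch only: the mirror URL, release channel, and file names are assumptions based on the usual mirror layout, so verify them for your architecture and release, and note that you still need the pull secret from the Red Hat OpenShift Cluster Manager.
# Hedged sketch: fetch the installer and the oc client from the public mirror.
# The URL layout and file names are assumptions; verify them before use.
RELEASE_CHANNEL=stable-4.11
MIRROR="https://mirror.openshift.com/pub/openshift-v4/x86_64/clients/ocp/${RELEASE_CHANNEL}"
curl -LO "${MIRROR}/openshift-install-linux.tar.gz"
curl -LO "${MIRROR}/openshift-client-linux.tar.gz"
tar -xvf openshift-install-linux.tar.gz     # extracts the openshift-install binary
tar -xvf openshift-client-linux.tar.gz      # extracts the oc and kubectl binaries
./openshift-install version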
Important If you attempt to run the installation program on macOS, a known issue related to the golang compiler causes the installation of the OpenShift Container Platform cluster to fail. For more information about this issue, see the section named "Known Issues" in the OpenShift Container Platform 4.11 release notes document. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 21.2.9. Adding vCenter root CA certificates to your system trust Because the installation program requires access to your vCenter's API, you must add your vCenter's trusted root CA certificates to your system trust before you install an OpenShift Container Platform cluster. Procedure From the vCenter home page, download the vCenter's root CA certificates. Click Download trusted root CA certificates in the vSphere Web Services SDK section. The <vCenter>/certs/download.zip file downloads. Extract the compressed file that contains the vCenter root CA certificates. The contents of the compressed file resemble the following file structure: Add the files for your operating system to the system trust. For example, on a Fedora operating system, run the following command: # cp certs/lin/* /etc/pki/ca-trust/source/anchors Update your system trust. For example, on a Fedora operating system, run the following command: # update-ca-trust extract 21.2.10. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. 2 To view different installation details, specify warn , debug , or error instead of info . 
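Instead of answering the interactive prompts described in the rest of this procedure, you can create the install-config.yaml file ahead of time and then run the same create cluster command against that directory. The following heredoc is a minimal sketch only: every value is a placeholder, and the field names shown (for example, vCenter, apiVIP, and ingressVIP) are assumptions about the 4.11 vSphere platform schema. Generate a baseline with ./openshift-install create install-config --dir <installation_directory> and reconcile it against this sketch before use.
# Hedged sketch: pre-create a minimal install-config.yaml for vSphere.
# All values are placeholders and the field names are assumptions; replace
# <installation_directory> with your actual directory before running.
cat <<'EOF' > <installation_directory>/install-config.yaml
apiVersion: v1
baseDomain: example.com
metadata:
  name: mycluster
compute:
- name: worker
  replicas: 3
controlPlane:
  name: master
  replicas: 3
platform:
  vsphere:
    vCenter: vcenter.example.com
    username: [email protected]
    password: <password>
    datacenter: dc1
    defaultDatastore: datastore1
    cluster: vsphere-cluster-1
    network: VM_Network
    apiVIP: 192.168.1.100
    ingressVIP: 192.168.1.101
pullSecret: '<pull_secret>'
sshKey: '<ssh_public_key>'
EOF
# The create cluster command then runs without prompting for these values.
./openshift-install create cluster --dir <installation_directory> --log-level=info
Note that the installation program consumes install-config.yaml when it generates the cluster assets, so keep a copy elsewhere, and that the base domain and cluster name must match the DNS records that you configured.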
When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals; therefore, you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Provide values at the prompts: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select vsphere as the platform to target. Specify the name of your vCenter instance. Specify the user name and password for the vCenter account that has the required permissions to create the cluster. The installation program connects to your vCenter instance. Important Some VMware vCenter Single Sign-On (SSO) environments with Active Directory (AD) integration might primarily require you to use the traditional login method, which requires the <domain>\ construct. To ensure that vCenter account permission checks complete properly, consider using the User Principal Name (UPN) login method, such as <username>@<fully_qualified_domainname> . Select the datacenter in your vCenter instance to connect to. Select the default vCenter datastore to use. Note Datastore and cluster names cannot exceed 60 characters; therefore, ensure the combined string length does not exceed the 60 character limit. Select the vCenter cluster to install the OpenShift Container Platform cluster in. The installation program uses the root resource pool of the vSphere cluster as the default resource pool. Select the network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. Enter the virtual IP address that you configured for control plane API access. Enter the virtual IP address that you configured for cluster ingress. Enter the base domain. This base domain must be the same one that you used in the DNS records that you configured. Enter a descriptive name for your cluster. The cluster name must be the same one that you used in the DNS records that you configured. Note Datastore and cluster names cannot exceed 60 characters; therefore, ensure the combined string length does not exceed the 60 character limit. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 21.2.11. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.11. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture in the Product Variant drop-down menu. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.11 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.11 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.11 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.11 macOS arm64 Client entry. Unpack and unzip the archive. 
Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 21.2.12. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 21.2.13. Creating registry storage After you install the cluster, you must create storage for the registry Operator. 21.2.13.1. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . 21.2.13.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 21.2.13.2.1. Configuring registry storage for VMware vSphere As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites Cluster administrator permissions. A cluster on VMware vSphere. Persistent storage provisioned for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have "100Gi" capacity. Important Testing shows issues with using the NFS server on RHEL as storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. 
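Because the Image Registry Operator bootstraps as Removed on vSphere, you must eventually switch it back to Managed once storage is in place, as noted earlier. The following is a minimal sketch that assumes cluster-admin access; run it once your storage is ready, either before or after the storage procedure that follows.
# Minimal sketch, assuming cluster-admin access: inspect and update the
# Image Registry Operator management state after storage is available.
oc get configs.imageregistry.operator.openshift.io cluster \
  -o jsonpath='{.spec.managementState}{"\n"}'
# Switch the Operator from Removed to Managed so that the registry is deployed.
oc patch configs.imageregistry.operator.openshift.io cluster \
  --type merge --patch '{"spec":{"managementState":"Managed"}}'
The procedure that follows then configures the storage that the registry uses.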
Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: 1 1 Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when you replicate to more than one replica. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m 21.2.13.2.2. Configuring block registry storage for VMware vSphere To allow the image registry to use block storage types such as vSphere Virtual Machine Disk (VMDK) during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes are supported but not recommended for use with image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. Procedure Enter the following command to set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy, and runs with only 1 replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Enter the following command to create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Enter the following command to edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 By creating a custom PVC, you can leave the claim field blank for the default automatic creation of an image-registry-storage PVC. For instructions about configuring registry storage so that it references the correct PVC, see Configuring the registry for vSphere . 21.2.14.
Backing up VMware vSphere volumes OpenShift Container Platform provisions new volumes as independent persistent disks to freely attach and detach the volume on any node in the cluster. As a consequence, it is not possible to back up volumes that use snapshots, or to restore volumes from snapshots. See Snapshot Limitations for more information. Procedure To create a backup of persistent volumes: Stop the application that is using the persistent volume. Clone the persistent volume. Restart the application. Create a backup of the cloned volume. Delete the cloned volume. 21.2.15. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.11, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 21.2.16. Configuring an external load balancer You can configure an OpenShift Container Platform cluster to use an external load balancer in place of the default load balancer. Important Configuring an external load balancer depends on your vendor's load balancer. The information and examples in this section are for guideline purposes only. Consult the vendor documentation for more specific information about the vendor's load balancer. Red Hat supports the following services for an external load balancer: Ingress Controller OpenShift API OpenShift MachineConfig API You can choose whether you want to configure one or all of these services for an external load balancer. Configuring only the Ingress Controller service is a common configuration option. To better understand each service, view the following diagrams: Figure 21.1. Example network workflow that shows an Ingress Controller operating in an OpenShift Container Platform environment Figure 21.2. Example network workflow that shows an OpenShift API operating in an OpenShift Container Platform environment Figure 21.3. Example network workflow that shows an OpenShift MachineConfig API operating in an OpenShift Container Platform environment Considerations For a front-end IP address, you can use the same IP address for the front-end IP address, the Ingress Controller's load balancer, and API load balancer. Check the vendor's documentation for this capability. For a back-end IP address, ensure that an IP address for an OpenShift Container Platform control plane node does not change during the lifetime of the external load balancer. You can achieve this by completing one of the following actions: Assign a static IP address to each control plane node. Configure each node to receive the same IP address from the DHCP every time the node requests a DHCP lease. Depending on the vendor, the DHCP lease might be in the form of an IP reservation or a static DHCP assignment. Manually define each node that runs the Ingress Controller in the external load balancer for the Ingress Controller back-end service. 
For example, if the Ingress Controller moves to an undefined node, a connection outage can occur. OpenShift API prerequisites You defined a front-end IP address. TCP ports 6443 and 22623 are exposed on the front-end IP address of your load balancer. Check the following items: Port 6443 provides access to the OpenShift API service. Port 22623 can provide ignition startup configurations to nodes. The front-end IP address and port 6443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster. The front-end IP address and port 22623 are reachable only by OpenShift Container Platform nodes. The load balancer backend can communicate with OpenShift Container Platform control plane nodes on ports 6443 and 22623. Ingress Controller prerequisites You defined a front-end IP address. TCP ports 443 and 80 are exposed on the front-end IP address of your load balancer. The front-end IP address, port 80 and port 443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster. The front-end IP address, port 80 and port 443 are reachable by all nodes that operate in your OpenShift Container Platform cluster. The load balancer backend can communicate with OpenShift Container Platform nodes that run the Ingress Controller on ports 80, 443, and 1936. Prerequisite for health check URL specifications You can configure most load balancers by setting health check URLs that determine if a service is available or unavailable. OpenShift Container Platform provides these health checks for the OpenShift API, Machine Configuration API, and Ingress Controller backend services. The following examples demonstrate health check specifications for the previously listed backend services: Example of a Kubernetes API health check specification Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10 Example of a Machine Config API health check specification Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10 Example of an Ingress Controller health check specification Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10 Procedure Configure the HAProxy Ingress Controller, so that you can enable access to the cluster from your load balancer on ports 6443, 443, and 80: Example HAProxy configuration #...
listen my-cluster-api-6443
    bind 192.168.1.100:6443
    mode tcp
    balance roundrobin
    option httpchk
    http-check connect
    http-check send meth GET uri /readyz
    http-check expect status 200
    server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2
    server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2
    server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2

listen my-cluster-machine-config-api-22623
    bind 192.168.1.100:22623
    mode tcp
    balance roundrobin
    option httpchk
    http-check connect
    http-check send meth GET uri /healthz
    http-check expect status 200
    server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2
    server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2
    server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2

listen my-cluster-apps-443
    bind 192.168.1.100:443
    mode tcp
    balance roundrobin
    option httpchk
    http-check connect
    http-check send meth GET uri /healthz/ready
    http-check expect status 200
    server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2
    server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2
    server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2

listen my-cluster-apps-80
    bind 192.168.1.100:80
    mode tcp
    balance roundrobin
    option httpchk
    http-check connect
    http-check send meth GET uri /healthz/ready
    http-check expect status 200
    server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2
    server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2
    server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2
# ... Use the curl CLI command to verify that the external load balancer and its resources are operational: Verify that the Kubernetes API server resource is accessible, by running the following command and observing the response: USD curl https://<loadbalancer_ip_address>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that the machine config server resource is accessible, by running the following command and observing the output: USD curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK Content-Length: 0 Verify that the Ingress Controller resource is accessible on port 80, by running the following command and observing the output: USD curl -I -L -H "Host: console-openshift-console.apps.<cluster_name>.<base_domain>" http://<load_balancer_front_end_IP_address> If the configuration is correct, the output from the command shows the following response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache Verify that the Ingress Controller resource is accessible on port 443, by running the following command and observing the output: USD curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address>
https://console-openshift-console.apps.<cluster_name>.<base_domain> If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private Configure the DNS records for your cluster to target the front-end IP addresses of the external load balancer. You must update the records on your DNS server so that the cluster API and applications resolve over the load balancer. Examples of modified DNS records <load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End <load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End Important DNS propagation might take some time for each DNS record to become available. Ensure that each DNS record propagates before validating each record. Use the curl CLI command to verify that the external load balancer and DNS record configuration are operational: Verify that you can access the cluster API, by running the following command and observing the output: USD curl https://api.<cluster_name>.<base_domain>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that you can access the cluster machine configuration, by running the following command and observing the output: USD curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK Content-Length: 0 Verify that you can access each cluster application on port 80, by running the following command and observing the output: USD curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster_name>.<base_domain>/ cache-control: no-cache HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private Verify that you can access each cluster application on port 443, by running the following command and observing the output: USD curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK referrer-policy:
strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private 21.2.17. steps Customize your cluster . If necessary, you can opt out of remote health reporting . Set up your registry and configure registry storage . Optional: View the events from the vSphere Problem Detector Operator to determine if the cluster has permission or storage configuration issues. 21.3. Installing a cluster on vSphere with customizations In OpenShift Container Platform version 4.11, you can install a cluster on your VMware vSphere instance by using installer-provisioned infrastructure. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. Note OpenShift Container Platform supports deploying a cluster to a single VMware vCenter only. Deploying a cluster with machines/machine sets on multiple vCenters is not supported. 21.3.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You provisioned persistent storage for your cluster. To deploy a private image registry, your storage must provide ReadWriteMany access modes. The OpenShift Container Platform installer requires access to port 443 on the vCenter and ESXi hosts. You verified that port 443 is accessible. If you use a firewall, you confirmed with the administrator that port 443 is accessible. Control plane nodes must be able to reach vCenter and ESXi hosts on port 443 for the installation to succeed. If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 21.3.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.11, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 21.3.3. VMware vSphere infrastructure requirements You must install the OpenShift Container Platform cluster on a VMware vSphere version 7 instance that meets the requirements for the components that you use. Note OpenShift Container Platform version 4.11 does not support VMware vSphere version 8.0. 
You can host the VMware vSphere infrastructure on-premise or on a VMware Cloud Verified provider that meets the requirements outlined in the following table: Table 21.9. Version requirements for vSphere virtual environments Virtual environment product Required version VM hardware version 15 or later vSphere ESXi hosts 7 vCenter host 7 Important Installing a cluster on VMware vSphere version 7.0 Update 1 or earlier is now deprecated. These versions are still fully supported, but version 4.11 of OpenShift Container Platform requires vSphere virtual hardware version 15 or later. Hardware version 15 is now the default for vSphere virtual machines in OpenShift Container Platform. To update the hardware version for your vSphere nodes, see the "Updating hardware on nodes running in vSphere" article. If your vSphere nodes are below hardware version 15 or your VMware vSphere version is earlier than 6.7.3, upgrading from OpenShift Container Platform 4.10 to OpenShift Container Platform 4.11 is not available. Table 21.10. Minimum supported vSphere version for VMware components Component Minimum supported versions Description Hypervisor vSphere 7 with HW version 15 This version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. For more information about supported hardware on the latest version of Red Hat Enterprise Linux (RHEL) that is compatible with RHCOS, see Hardware on the Red Hat Customer Portal. Storage with in-tree drivers vSphere 7 This plugin creates vSphere storage by using the in-tree storage drivers for vSphere included in OpenShift Container Platform. Optional: Networking (NSX-T) vSphere 7 vSphere 7 is required for OpenShift Container Platform. For more information about the compatibility of NSX and OpenShift Container Platform, see the Release Notes section of VMware's NSX container plugin documentation . Important You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation. 21.3.4. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Review the following details about the required network ports. Table 21.11. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 virtual extensible LAN (VXLAN) 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 21.12. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 21.13. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports 21.3.5. 
VMware vSphere CSI Driver Operator requirements To install the vSphere CSI Driver Operator, the following requirements must be met: VMware vSphere version 7.0 Update 1 or later Virtual machines of hardware version 15 or later No third-party vSphere CSI driver already installed in the cluster Important If a third-party vSphere CSI driver is present in the cluster, OpenShift Container Platform does not overwrite it. If you continue with the third-party vSphere CSI driver when upgrading to the major version of OpenShift Container Platform, the oc CLI prompts you with the following message: VSphereCSIDriverOperatorCRUpgradeable: VMwareVSphereControllerUpgradeable: found existing unsupported csi.vsphere.vmware.com driver The message informs you that Red Hat does not support the third-party vSphere CSI driver during an OpenShift Container Platform upgrade operation. You can choose to ignore this message and continue with the upgrade operation. Additional resources To remove a third-party vSphere CSI driver, see Removing a third-party vSphere CSI Driver . To update the hardware version for your vSphere nodes, see Updating hardware on nodes running in vSphere . 21.3.6. vCenter requirements Before you install an OpenShift Container Platform cluster on your vCenter that uses infrastructure that the installer provisions, you must prepare your environment. Required vCenter account privileges To install an OpenShift Container Platform cluster in a vCenter, the installation program requires access to an account with privileges to read and create the required resources. Using an account that has global administrative privileges is the simplest way to access all of the necessary permissions. If you cannot use an account with global administrative privileges, you must create roles to grant the privileges necessary for OpenShift Container Platform cluster installation. While most of the privileges are always required, some are required only if you plan for the installation program to provision a folder to contain the OpenShift Container Platform cluster on your vCenter instance, which is the default behavior. You must create or amend vSphere roles for the specified objects to grant the required privileges. An additional role is required if the installation program is to create a vSphere virtual machine folder. Example 21.4. 
Roles and privileges required for installation in vSphere API vSphere object for role When required Required privileges in vSphere API vSphere vCenter Always Cns.Searchable InventoryService.Tagging.AttachTag InventoryService.Tagging.CreateCategory InventoryService.Tagging.CreateTag InventoryService.Tagging.DeleteCategory InventoryService.Tagging.DeleteTag InventoryService.Tagging.EditCategory InventoryService.Tagging.EditTag Sessions.ValidateSession StorageProfile.Update StorageProfile.View vSphere vCenter Cluster If VMs will be created in the cluster root Host.Config.Storage Resource.AssignVMToPool VApp.AssignResourcePool VApp.Import VirtualMachine.Config.AddNewDisk vSphere vCenter Resource Pool If an existing resource pool is provided Host.Config.Storage Resource.AssignVMToPool VApp.AssignResourcePool VApp.Import VirtualMachine.Config.AddNewDisk vSphere Datastore Always Datastore.AllocateSpace Datastore.Browse Datastore.FileManagement InventoryService.Tagging.ObjectAttachable vSphere Port Group Always Network.Assign Virtual Machine Folder Always InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone VirtualMachine.Provisioning.MarkAsTemplate VirtualMachine.Provisioning.DeployTemplate vSphere vCenter Datacenter If the installation program creates the virtual machine folder InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone VirtualMachine.Provisioning.DeployTemplate VirtualMachine.Provisioning.MarkAsTemplate Folder.Create Folder.Delete Example 21.5. 
Roles and privileges required for installation in vCenter graphical user interface (GUI) vSphere object for role When required Required privileges in vCenter GUI vSphere vCenter Always Cns.Searchable "vSphere Tagging"."Assign or Unassign vSphere Tag" "vSphere Tagging"."Create vSphere Tag Category" "vSphere Tagging"."Create vSphere Tag" vSphere Tagging"."Delete vSphere Tag Category" "vSphere Tagging"."Delete vSphere Tag" "vSphere Tagging"."Edit vSphere Tag Category" "vSphere Tagging"."Edit vSphere Tag" Sessions."Validate session" "Profile-driven storage"."Profile-driven storage update" "Profile-driven storage"."Profile-driven storage view" vSphere vCenter Cluster If VMs will be created in the cluster root Host.Configuration."Storage partition configuration" Resource."Assign virtual machine to resource pool" VApp."Assign resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add new disk" vSphere vCenter Resource Pool If an existing resource pool is provided Host.Configuration."Storage partition configuration" Resource."Assign virtual machine to resource pool" VApp."Assign resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add new disk" vSphere Datastore Always Datastore."Allocate space" Datastore."Browse datastore" Datastore."Low level file operations" "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" vSphere Port Group Always Network."Assign network" Virtual Machine Folder Always "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" Resource."Assign virtual machine to resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add existing disk" "Virtual machine"."Change Configuration"."Add new disk" "Virtual machine"."Change Configuration"."Add or remove device" "Virtual machine"."Change Configuration"."Advanced configuration" "Virtual machine"."Change Configuration"."Set annotation" "Virtual machine"."Change Configuration"."Change CPU count" "Virtual machine"."Change Configuration"."Extend virtual disk" "Virtual machine"."Change Configuration"."Acquire disk lease" "Virtual machine"."Change Configuration"."Modify device settings" "Virtual machine"."Change Configuration"."Change Memory" "Virtual machine"."Change Configuration"."Remove disk" "Virtual machine"."Change Configuration".Rename "Virtual machine"."Change Configuration"."Reset guest information" "Virtual machine"."Change Configuration"."Change resource" "Virtual machine"."Change Configuration"."Change Settings" "Virtual machine"."Change Configuration"."Upgrade virtual machine compatibility" "Virtual machine".Interaction."Guest operating system management by VIX API" "Virtual machine".Interaction."Power off" "Virtual machine".Interaction."Power on" "Virtual machine".Interaction.Reset "Virtual machine"."Edit Inventory"."Create new" "Virtual machine"."Edit Inventory"."Create from existing" "Virtual machine"."Edit Inventory"."Remove" "Virtual machine".Provisioning."Clone virtual machine" "Virtual machine".Provisioning."Mark as template" "Virtual machine".Provisioning."Deploy template" vSphere vCenter Datacenter If the installation program creates the virtual machine folder "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" Resource."Assign virtual machine to resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add existing disk" "Virtual machine"."Change Configuration"."Add new disk" "Virtual machine"."Change Configuration"."Add or remove device" "Virtual machine"."Change Configuration"."Advanced configuration" "Virtual machine"."Change 
Configuration"."Set annotation" "Virtual machine"."Change Configuration"."Change CPU count" "Virtual machine"."Change Configuration"."Extend virtual disk" "Virtual machine"."Change Configuration"."Acquire disk lease" "Virtual machine"."Change Configuration"."Modify device settings" "Virtual machine"."Change Configuration"."Change Memory" "Virtual machine"."Change Configuration"."Remove disk" "Virtual machine"."Change Configuration".Rename "Virtual machine"."Change Configuration"."Reset guest information" "Virtual machine"."Change Configuration"."Change resource" "Virtual machine"."Change Configuration"."Change Settings" "Virtual machine"."Change Configuration"."Upgrade virtual machine compatibility" "Virtual machine".Interaction."Guest operating system management by VIX API" "Virtual machine".Interaction."Power off" "Virtual machine".Interaction."Power on" "Virtual machine".Interaction.Reset "Virtual machine"."Edit Inventory"."Create new" "Virtual machine"."Edit Inventory"."Create from existing" "Virtual machine"."Edit Inventory"."Remove" "Virtual machine".Provisioning."Clone virtual machine" "Virtual machine".Provisioning."Deploy template" "Virtual machine".Provisioning."Mark as template" Folder."Create folder" Folder."Delete folder" Additionally, the user requires some ReadOnly permissions, and some of the roles require permission to propogate the permissions to child objects. These settings vary depending on whether or not you install the cluster into an existing folder. Example 21.6. Required permissions and propagation settings vSphere object When required Propagate to children Permissions required vSphere vCenter Always False Listed required privileges vSphere vCenter Datacenter Existing folder False ReadOnly permission Installation program creates the folder True Listed required privileges vSphere vCenter Cluster Existing resource pool False ReadOnly permission VMs in cluster root True Listed required privileges vSphere vCenter Datastore Always False Listed required privileges vSphere Switch Always False ReadOnly permission vSphere Port Group Always False Listed required privileges vSphere vCenter Virtual Machine Folder Existing folder True Listed required privileges vSphere vCenter Resource Pool Existing resource pool True Listed required privileges For more information about creating an account with only the required privileges, see vSphere Permissions and User Management Tasks in the vSphere documentation. Using OpenShift Container Platform with vMotion If you intend on using vMotion in your vSphere environment, consider the following before installing a OpenShift Container Platform cluster. OpenShift Container Platform generally supports compute-only vMotion, where generally implies that you meet all VMware best practices for vMotion. To help ensure the uptime of your compute and control plane nodes, ensure that you follow the VMware best practices for vMotion, and use VMware anti-affinity rules to improve the availability of OpenShift Container Platform during maintenance or hardware issues. For more information about vMotion and anti-affinity rules, see the VMware vSphere documentation for vMotion networking requirements and VM anti-affinity rules . Using Storage vMotion can cause issues and is not supported. If you are using vSphere volumes in your pods, migrating a VM across datastores, either manually or through Storage vMotion, causes invalid references within OpenShift Container Platform persistent volume (PV) objects that can result in data loss. 
OpenShift Container Platform does not support selective migration of VMDKs across datastores, using datastore clusters for VM provisioning or for dynamic or static provisioning of PVs, or using a datastore that is part of a datastore cluster for dynamic or static provisioning of PVs. Cluster resources When you deploy an OpenShift Container Platform cluster that uses installer-provisioned infrastructure, the installation program must be able to create several resources in your vCenter instance. A standard OpenShift Container Platform installation creates the following vCenter resources: 1 Folder 1 Tag category 1 Tag Virtual machines: 1 template 1 temporary bootstrap node 3 control plane nodes 3 compute machines Although these resources use 856 GB of storage, the bootstrap node is destroyed during the cluster installation process. A minimum of 800 GB of storage is required to use a standard cluster. If you deploy more compute machines, the OpenShift Container Platform cluster will use more storage. Cluster limits Available resources vary between clusters. The number of possible clusters within a vCenter is limited primarily by available storage space and any limitations on the number of required resources. Be sure to consider both limitations to the vCenter resources that the cluster creates and the resources that you require to deploy a cluster, such as IP addresses and networks. Networking requirements You must use the Dynamic Host Configuration Protocol (DHCP) for the network and ensure that the DHCP server is configured to provide persistent IP addresses to the cluster machines. In the DHCP lease, you must configure the DHCP to use the default gateway. All nodes must be in the same VLAN. You cannot scale the cluster using a second VLAN as a Day 2 operation. Additionally, you must create the following networking resources before you install the OpenShift Container Platform cluster: Note It is recommended that each OpenShift Container Platform node in the cluster must have access to a Network Time Protocol (NTP) server that is discoverable via DHCP. Installation is possible without an NTP server. However, asynchronous server clocks will cause errors, which NTP server prevents. Required IP Addresses An installer-provisioned vSphere installation requires two static IP addresses: The API address is used to access the cluster API. The Ingress address is used for cluster ingress traffic. You must provide these IP addresses to the installation program when you install the OpenShift Container Platform cluster. DNS records You must create DNS records for two static IP addresses in the appropriate DNS server for the vCenter instance that hosts your OpenShift Container Platform cluster. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify when you install the cluster. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 21.14. Required DNS records Component Record Description API VIP api.<cluster_name>.<base_domain>. This DNS A/AAAA or CNAME record must point to the load balancer for the control plane machines. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. Ingress VIP *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that points to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. 
This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. 21.3.7. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 21.3.8. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. Prerequisites You have a machine that runs Linux, for example Red Hat Enterprise Linux 8, with 500 MB of local disk space. 
Important If you attempt to run the installation program on macOS, a known issue related to the golang compiler causes the installation of the OpenShift Container Platform cluster to fail. For more information about this issue, see the section named "Known Issues" in the OpenShift Container Platform 4.11 release notes document. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 21.3.9. Adding vCenter root CA certificates to your system trust Because the installation program requires access to your vCenter's API, you must add your vCenter's trusted root CA certificates to your system trust before you install an OpenShift Container Platform cluster. Procedure From the vCenter home page, download the vCenter's root CA certificates. Click Download trusted root CA certificates in the vSphere Web Services SDK section. The <vCenter>/certs/download.zip file downloads. Extract the compressed file that contains the vCenter root CA certificates. The contents of the compressed file resemble the following file structure: Add the files for your operating system to the system trust. For example, on a Fedora operating system, run the following command: # cp certs/lin/* /etc/pki/ca-trust/source/anchors Update your system trust. For example, on a Fedora operating system, run the following command: # update-ca-trust extract 21.3.10. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on VMware vSphere. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. 
This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select vsphere as the platform to target. Specify the name of your vCenter instance. Specify the user name and password for the vCenter account that has the required permissions to create the cluster. The installation program connects to your vCenter instance. Select the datacenter in your vCenter instance to connect to. Select the default vCenter datastore to use. Select the vCenter cluster to install the OpenShift Container Platform cluster in. The installation program uses the root resource pool of the vSphere cluster as the default resource pool. Select the network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. Enter the virtual IP address that you configured for control plane API access. Enter the virtual IP address that you configured for cluster ingress. Enter the base domain. This base domain must be the same one that you used in the DNS records that you configured. Enter a descriptive name for your cluster. The cluster name you enter must match the cluster name you specified when configuring the DNS records. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 21.3.10.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 21.3.10.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 21.15. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. 
The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters and hyphens ( - ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 21.3.10.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 21.16. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) cluster network provider to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. 
For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 21.3.10.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 21.17. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 and vCurrent . v4.11 enables the baremetal Operator, the marketplace Operator, and the openshift-samples content. vCurrent installs the recommended set of capabilities for the current version of OpenShift Container Platform. The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . Valid values are baremetal , marketplace and openshift-samples . You may specify multiple capabilities in this parameter. String array cgroupsV2 Enables Linux control groups version 2 (cgroups v2) on specific nodes in your cluster. The OpenShift Container Platform process for enabling cgroups v2 disables all cgroup version 1 controllers and hierarchies. The OpenShift Container Platform cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time. true compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). 
String controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms and IBM Cloud VPC. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . sshKey The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . 21.3.10.1.4. Additional VMware vSphere configuration parameters Additional VMware vSphere configuration parameters are described in the following table: Table 21.18. Additional VMware vSphere cluster parameters Parameter Description Values The fully-qualified hostname or IP address of the vCenter server. String The user name to use to connect to the vCenter instance with. 
This user must have at least the roles and privileges that are required for static or dynamic persistent volume provisioning in vSphere. String The password for the vCenter user name. String The name of the datacenter to use in the vCenter instance. String The name of the default datastore to use for provisioning volumes. String Optional. The absolute path of an existing folder where the installation program creates the virtual machines. If you do not provide this value, the installation program creates a folder that is named with the infrastructure ID in the datacenter virtual machine folder. String, for example, /<datacenter_name>/vm/<folder_name>/<subfolder_name> . Optional. The absolute path of an existing resource pool where the installer creates the virtual machines. If you do not specify a value, resources are installed in the root of the cluster /<datacenter_name>/host/<cluster_name>/Resources . String, for example, /<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>/<optional_nested_resource_pool_name> . The network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. String The vCenter cluster to install the OpenShift Container Platform cluster in. String The virtual IP (VIP) address that you configured for control plane API access. An IP address, for example 128.0.0.1 . The virtual IP (VIP) address that you configured for cluster ingress. An IP address, for example 128.0.0.1 . Optional. The disk provisioning method. This value defaults to the vSphere default storage policy if not set. Valid values are thin , thick , or eagerZeroedThick . 21.3.10.1.5. Optional VMware vSphere machine pool configuration parameters Optional VMware vSphere machine pool configuration parameters are described in the following table: Table 21.19. Optional VMware vSphere machine pool parameters Parameter Description Values The location from which the installer downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network. An HTTP or HTTPS URL, optionally with a SHA-256 checksum. For example, https://mirror.openshift.com/images/rhcos-<version>-vmware.<architecture>.ova . The size of the disk in gigabytes. Integer The total number of virtual processor cores to assign a virtual machine. The value of platform.vsphere.cpus must be a multiple of platform.vsphere.coresPerSocket value. Integer The number of cores per socket in a virtual machine. The number of virtual sockets on the virtual machine is platform.vsphere.cpus / platform.vsphere.coresPerSocket . The default value for control plane nodes and worker nodes is 4 and 2 , respectively. Integer The size of a virtual machine's memory in megabytes. Integer 21.3.10.2. Sample install-config.yaml file for an installer-provisioned VMware vSphere cluster You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. 
apiVersion: v1 baseDomain: example.com 1 compute: 2 name: worker replicas: 3 platform: vsphere: 3 cpus: 2 coresPerSocket: 2 memoryMB: 8192 osDisk: diskSizeGB: 120 controlPlane: 4 name: master replicas: 3 platform: vsphere: 5 cpus: 4 coresPerSocket: 2 memoryMB: 16384 osDisk: diskSizeGB: 120 metadata: name: cluster 6 platform: vsphere: vcenter: your.vcenter.server username: username password: password datacenter: datacenter defaultDatastore: datastore folder: folder resourcePool: resource_pool 7 diskType: thin 8 network: VM_Network cluster: vsphere_cluster_name 9 apiVIP: api_vip ingressVIP: ingress_vip fips: false pullSecret: '{"auths": ...}' sshKey: 'ssh-ed25519 AAAA...' 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 5 Optional: Provide additional configuration for the machine pool parameters for the compute and control plane machines. 6 The cluster name that you specified in your DNS records. 7 Optional: Provide an existing resource pool for machine creation. If you do not specify a value, the installation program uses the root resource pool of the vSphere cluster. 8 The vSphere disk provisioning method. 9 The vSphere cluster to install the OpenShift Container Platform cluster in. 21.3.10.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . 
Use * to bypass the proxy for all destinations. You must include vCenter's IP address and the IP range that you use for its machines. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 21.3.11. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. 
See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 21.3.12. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.11. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture in the Product Variant drop-down menu. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.11 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.11 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.11 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.11 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 21.3.13. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI.
Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify that you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 21.3.14. Creating registry storage After you install the cluster, you must create storage for the registry Operator. 21.3.14.1. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . 21.3.14.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 21.3.14.2.1. Configuring registry storage for VMware vSphere As a cluster administrator, you must configure your registry to use storage after installation. Prerequisites Cluster administrator permissions. A cluster on VMware vSphere. Persistent storage provisioned for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have "100Gi" capacity. Important Testing shows issues with using the NFS server on RHEL as a storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: 1 1 Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC).
The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when you replicate to more than one replica. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m 21.3.14.2.2. Configuring block registry storage for VMware vSphere To allow the image registry to use block storage types such as vSphere Virtual Machine Disk (VMDK) during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes are supported but not recommended for use with image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. Procedure Enter the following command to set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy, and runs with only 1 replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Enter the following command to create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Enter the following command to edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 By creating a custom PVC, you can leave the claim field blank for the default automatic creation of an image-registry-storage PVC. For instructions about configuring registry storage so that it references the correct PVC, see Configuring the registry for vSphere . 21.3.15. Backing up VMware vSphere volumes OpenShift Container Platform provisions new volumes as independent persistent disks to freely attach and detach the volume on any node in the cluster. As a consequence, it is not possible to back up volumes that use snapshots, or to restore volumes from snapshots. See Snapshot Limitations for more information. Procedure To create a backup of persistent volumes: Stop the application that is using the persistent volume. Clone the persistent volume. Restart the application. Create a backup of the cloned volume. Delete the cloned volume. 21.3.16. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.11, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. 
If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 21.3.17. Configuring an external load balancer You can configure an OpenShift Container Platform cluster to use an external load balancer in place of the default load balancer. Important Configuring an external load balancer depends on your vendor's load balancer. The information and examples in this section are for guideline purposes only. Consult the vendor documentation for more specific information about the vendor's load balancer. Red Hat supports the following services for an external load balancer: Ingress Controller OpenShift API OpenShift MachineConfig API You can choose whether you want to configure one or all of these services for an external load balancer. Configuring only the Ingress Controller service is a common configuration option. To better understand each service, view the following diagrams: Figure 21.4. Example network workflow that shows an Ingress Controller operating in an OpenShift Container Platform environment Figure 21.5. Example network workflow that shows an OpenShift API operating in an OpenShift Container Platform environment Figure 21.6. Example network workflow that shows an OpenShift MachineConfig API operating in an OpenShift Container Platform environment Considerations For a front-end IP address, you can use the same IP address for both the Ingress Controller's load balancer and the API load balancer. Check the vendor's documentation for this capability. For a back-end IP address, ensure that an IP address for an OpenShift Container Platform control plane node does not change during the lifetime of the external load balancer. You can achieve this by completing one of the following actions: Assign a static IP address to each control plane node. Configure each node to receive the same IP address from the DHCP every time the node requests a DHCP lease. Depending on the vendor, the DHCP lease might be in the form of an IP reservation or a static DHCP assignment. Manually define each node that runs the Ingress Controller in the external load balancer for the Ingress Controller back-end service. For example, if the Ingress Controller moves to an undefined node, a connection outage can occur. OpenShift API prerequisites You defined a front-end IP address. TCP ports 6443 and 22623 are exposed on the front-end IP address of your load balancer. Check the following items: Port 6443 provides access to the OpenShift API service. Port 22623 can provide ignition startup configurations to nodes. The front-end IP address and port 6443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster. The front-end IP address and port 22623 are reachable only by OpenShift Container Platform nodes. The load balancer backend can communicate with OpenShift Container Platform control plane nodes on ports 6443 and 22623. Ingress Controller prerequisites You defined a front-end IP address.
TCP ports 443 and 80 are exposed on the front-end IP address of your load balancer. The front-end IP address, port 80 and port 443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster. The front-end IP address, port 80 and port 443 are reachable by all nodes that operate in your OpenShift Container Platform cluster. The load balancer backend can communicate with OpenShift Container Platform nodes that run the Ingress Controller on ports 80, 443, and 1936. Prerequisite for health check URL specifications You can configure most load balancers by setting health check URLs that determine if a service is available or unavailable. OpenShift Container Platform provides these health checks for the OpenShift API, Machine Configuration API, and Ingress Controller backend services. The following examples demonstrate health check specifications for the previously listed backend services: Example of a Kubernetes API health check specification Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10 Example of a Machine Config API health check specification Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10 Example of an Ingress Controller health check specification Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10 Procedure Configure the HAProxy Ingress Controller, so that you can enable access to the cluster from your load balancer on ports 6443, 443, and 80: Example HAProxy configuration #... listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2 # ...
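After you save the HAProxy configuration, it can help to check the syntax and reload the service before you run the verification commands that follow. The commands below are a minimal sketch that assumes HAProxy is managed by systemd on the load balancer host and that the configuration file is /etc/haproxy/haproxy.cfg ; adjust the path and service name for your environment:
# haproxy -c -f /etc/haproxy/haproxy.cfg
# systemctl reload haproxy
The first command validates the configuration file and reports any syntax errors; the second command applies the new listeners without dropping established connections.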
Use the curl CLI command to verify that the external load balancer and its resources are operational: Verify that the Kubernetes API server resource is accessible by running the following command and observing the response: USD curl https://<loadbalancer_ip_address>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that the machine config server resource is accessible by running the following command and observing the output: USD curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK Content-Length: 0 Verify that the Ingress Controller resource is accessible on port 80 by running the following command and observing the output: USD curl -I -L -H "Host: console-openshift-console.apps.<cluster_name>.<base_domain>" http://<load_balancer_front_end_IP_address> If the configuration is correct, the output from the command shows the following response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache Verify that the Ingress Controller resource is accessible on port 443 by running the following command and observing the output: USD curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain> If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private Configure the DNS records for your cluster to target the front-end IP addresses of the external load balancer. You must update records to your DNS server for the cluster API and applications over the load balancer. Examples of modified DNS records <load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End <load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End Important DNS propagation might take some time for each DNS record to become available. Ensure that each DNS record propagates before validating each record.
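Before you run the curl checks in the next step, you can confirm that the records have propagated by querying them directly. The following commands are a sketch that assumes the dig utility is available and that your workstation resolves against a DNS server that already serves the new records; both queries should return the front-end IP address of the external load balancer:
USD dig +short api.<cluster_name>.<base_domain>
USD dig +short console-openshift-console.apps.<cluster_name>.<base_domain>
If either query returns no answer or a stale address, wait for propagation to complete before you continue.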
Use the curl CLI command to verify that the external load balancer and DNS record configuration are operational: Verify that you can access the cluster API, by running the following command and observing the output: USD curl https://api.<cluster_name>.<base_domain>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that you can access the cluster machine configuration, by running the following command and observing the output: USD curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK Content-Length: 0 Verify that you can access each cluster application on port 80, by running the following command and observing the output: USD curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster_name>.<base_domain>/ cache-control: no-cache HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private Verify that you can access each cluster application on port 443, by running the following command and observing the output: USD curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private 21.3.18. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . Set up your registry and configure registry storage . Optional: View the events from the vSphere Problem Detector Operator to determine if the cluster has permission or storage configuration issues. 21.4. Installing a cluster on vSphere with network customizations In OpenShift Container Platform version 4.11, you can install a cluster on your VMware vSphere instance by using installer-provisioned infrastructure with customized network configuration options. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations.
To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. You must set most of the network configuration parameters during installation, and you can modify only kubeProxy configuration parameters in a running cluster. Note OpenShift Container Platform supports deploying a cluster to a single VMware vCenter only. Deploying a cluster with machines/machine sets on multiple vCenters is not supported. 21.4.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You provisioned persistent storage for your cluster. To deploy a private image registry, your storage must provide ReadWriteMany access modes. The OpenShift Container Platform installer requires access to port 443 on the vCenter and ESXi hosts. You verified that port 443 is accessible. If you use a firewall, confirm with the administrator that port 443 is accessible. Control plane nodes must be able to reach vCenter and ESXi hosts on port 443 for the installation to succeed. If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 21.4.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.11, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 21.4.3. VMware vSphere infrastructure requirements You must install the OpenShift Container Platform cluster on a VMware vSphere version 7 instance that meets the requirements for the components that you use. Note OpenShift Container Platform version 4.11 does not support VMware vSphere version 8.0. You can host the VMware vSphere infrastructure on-premise or on a VMware Cloud Verified provider that meets the requirements outlined in the following table: Table 21.20. Version requirements for vSphere virtual environments Virtual environment product Required version VM hardware version 15 or later vSphere ESXi hosts 7 vCenter host 7 Important Installing a cluster on VMware vSphere version 7.0 Update 1 or earlier is now deprecated. These versions are still fully supported, but version 4.11 of OpenShift Container Platform requires vSphere virtual hardware version 15 or later. Hardware version 15 is now the default for vSphere virtual machines in OpenShift Container Platform. To update the hardware version for your vSphere nodes, see the "Updating hardware on nodes running in vSphere" article. 
If your vSphere nodes are below hardware version 15 or your VMware vSphere version is earlier than 6.7.3, upgrading from OpenShift Container Platform 4.10 to OpenShift Container Platform 4.11 is not available. Table 21.21. Minimum supported vSphere version for VMware components Component Minimum supported versions Description Hypervisor vSphere 7 with HW version 15 This version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. For more information about supported hardware on the latest version of Red Hat Enterprise Linux (RHEL) that is compatible with RHCOS, see Hardware on the Red Hat Customer Portal. Storage with in-tree drivers vSphere 7 This plugin creates vSphere storage by using the in-tree storage drivers for vSphere included in OpenShift Container Platform. Optional: Networking (NSX-T) vSphere 7 vSphere 7 is required for OpenShift Container Platform. For more information about the compatibility of NSX and OpenShift Container Platform, see the Release Notes section of VMware's NSX container plugin documentation . Important You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation. 21.4.4. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Review the following details about the required network ports. Table 21.22. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 virtual extensible LAN (VXLAN) 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 21.23. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 21.24. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports 21.4.5. VMware vSphere CSI Driver Operator requirements To install the vSphere CSI Driver Operator, the following requirements must be met: VMware vSphere version 7.0 Update 1 or later Virtual machines of hardware version 15 or later No third-party vSphere CSI driver already installed in the cluster Important If a third-party vSphere CSI driver is present in the cluster, OpenShift Container Platform does not overwrite it. If you continue with the third-party vSphere CSI driver when upgrading to the major version of OpenShift Container Platform, the oc CLI prompts you with the following message: VSphereCSIDriverOperatorCRUpgradeable: VMwareVSphereControllerUpgradeable: found existing unsupported csi.vsphere.vmware.com driver The message informs you that Red Hat does not support the third-party vSphere CSI driver during an OpenShift Container Platform upgrade operation. You can choose to ignore this message and continue with the upgrade operation. Additional resources To remove a third-party vSphere CSI driver, see Removing a third-party vSphere CSI Driver . 
To update the hardware version for your vSphere nodes, see Updating hardware on nodes running in vSphere . 21.4.6. vCenter requirements Before you install an OpenShift Container Platform cluster on your vCenter that uses infrastructure that the installer provisions, you must prepare your environment. Required vCenter account privileges To install an OpenShift Container Platform cluster in a vCenter, the installation program requires access to an account with privileges to read and create the required resources. Using an account that has global administrative privileges is the simplest way to access all of the necessary permissions. If you cannot use an account with global administrative privileges, you must create roles to grant the privileges necessary for OpenShift Container Platform cluster installation. While most of the privileges are always required, some are required only if you plan for the installation program to provision a folder to contain the OpenShift Container Platform cluster on your vCenter instance, which is the default behavior. You must create or amend vSphere roles for the specified objects to grant the required privileges. An additional role is required if the installation program is to create a vSphere virtual machine folder. Example 21.7. Roles and privileges required for installation in vSphere API vSphere object for role When required Required privileges in vSphere API vSphere vCenter Always Cns.Searchable InventoryService.Tagging.AttachTag InventoryService.Tagging.CreateCategory InventoryService.Tagging.CreateTag InventoryService.Tagging.DeleteCategory InventoryService.Tagging.DeleteTag InventoryService.Tagging.EditCategory InventoryService.Tagging.EditTag Sessions.ValidateSession StorageProfile.Update StorageProfile.View vSphere vCenter Cluster If VMs will be created in the cluster root Host.Config.Storage Resource.AssignVMToPool VApp.AssignResourcePool VApp.Import VirtualMachine.Config.AddNewDisk vSphere vCenter Resource Pool If an existing resource pool is provided Host.Config.Storage Resource.AssignVMToPool VApp.AssignResourcePool VApp.Import VirtualMachine.Config.AddNewDisk vSphere Datastore Always Datastore.AllocateSpace Datastore.Browse Datastore.FileManagement InventoryService.Tagging.ObjectAttachable vSphere Port Group Always Network.Assign Virtual Machine Folder Always InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone VirtualMachine.Provisioning.MarkAsTemplate VirtualMachine.Provisioning.DeployTemplate vSphere vCenter Datacenter If the installation program creates the virtual machine folder InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk 
VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone VirtualMachine.Provisioning.DeployTemplate VirtualMachine.Provisioning.MarkAsTemplate Folder.Create Folder.Delete Example 21.8. Roles and privileges required for installation in vCenter graphical user interface (GUI) vSphere object for role When required Required privileges in vCenter GUI vSphere vCenter Always Cns.Searchable "vSphere Tagging"."Assign or Unassign vSphere Tag" "vSphere Tagging"."Create vSphere Tag Category" "vSphere Tagging"."Create vSphere Tag" vSphere Tagging"."Delete vSphere Tag Category" "vSphere Tagging"."Delete vSphere Tag" "vSphere Tagging"."Edit vSphere Tag Category" "vSphere Tagging"."Edit vSphere Tag" Sessions."Validate session" "Profile-driven storage"."Profile-driven storage update" "Profile-driven storage"."Profile-driven storage view" vSphere vCenter Cluster If VMs will be created in the cluster root Host.Configuration."Storage partition configuration" Resource."Assign virtual machine to resource pool" VApp."Assign resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add new disk" vSphere vCenter Resource Pool If an existing resource pool is provided Host.Configuration."Storage partition configuration" Resource."Assign virtual machine to resource pool" VApp."Assign resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add new disk" vSphere Datastore Always Datastore."Allocate space" Datastore."Browse datastore" Datastore."Low level file operations" "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" vSphere Port Group Always Network."Assign network" Virtual Machine Folder Always "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" Resource."Assign virtual machine to resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add existing disk" "Virtual machine"."Change Configuration"."Add new disk" "Virtual machine"."Change Configuration"."Add or remove device" "Virtual machine"."Change Configuration"."Advanced configuration" "Virtual machine"."Change Configuration"."Set annotation" "Virtual machine"."Change Configuration"."Change CPU count" "Virtual machine"."Change Configuration"."Extend virtual disk" "Virtual machine"."Change Configuration"."Acquire disk lease" "Virtual machine"."Change Configuration"."Modify device settings" "Virtual machine"."Change Configuration"."Change Memory" "Virtual machine"."Change Configuration"."Remove disk" "Virtual machine"."Change Configuration".Rename "Virtual machine"."Change Configuration"."Reset guest information" "Virtual machine"."Change Configuration"."Change resource" "Virtual machine"."Change Configuration"."Change Settings" "Virtual machine"."Change Configuration"."Upgrade virtual machine compatibility" "Virtual machine".Interaction."Guest operating system management by VIX API" "Virtual 
machine".Interaction."Power off" "Virtual machine".Interaction."Power on" "Virtual machine".Interaction.Reset "Virtual machine"."Edit Inventory"."Create new" "Virtual machine"."Edit Inventory"."Create from existing" "Virtual machine"."Edit Inventory"."Remove" "Virtual machine".Provisioning."Clone virtual machine" "Virtual machine".Provisioning."Mark as template" "Virtual machine".Provisioning."Deploy template" vSphere vCenter Datacenter If the installation program creates the virtual machine folder "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" Resource."Assign virtual machine to resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add existing disk" "Virtual machine"."Change Configuration"."Add new disk" "Virtual machine"."Change Configuration"."Add or remove device" "Virtual machine"."Change Configuration"."Advanced configuration" "Virtual machine"."Change Configuration"."Set annotation" "Virtual machine"."Change Configuration"."Change CPU count" "Virtual machine"."Change Configuration"."Extend virtual disk" "Virtual machine"."Change Configuration"."Acquire disk lease" "Virtual machine"."Change Configuration"."Modify device settings" "Virtual machine"."Change Configuration"."Change Memory" "Virtual machine"."Change Configuration"."Remove disk" "Virtual machine"."Change Configuration".Rename "Virtual machine"."Change Configuration"."Reset guest information" "Virtual machine"."Change Configuration"."Change resource" "Virtual machine"."Change Configuration"."Change Settings" "Virtual machine"."Change Configuration"."Upgrade virtual machine compatibility" "Virtual machine".Interaction."Guest operating system management by VIX API" "Virtual machine".Interaction."Power off" "Virtual machine".Interaction."Power on" "Virtual machine".Interaction.Reset "Virtual machine"."Edit Inventory"."Create new" "Virtual machine"."Edit Inventory"."Create from existing" "Virtual machine"."Edit Inventory"."Remove" "Virtual machine".Provisioning."Clone virtual machine" "Virtual machine".Provisioning."Deploy template" "Virtual machine".Provisioning."Mark as template" Folder."Create folder" Folder."Delete folder" Additionally, the user requires some ReadOnly permissions, and some of the roles require permission to propagate the permissions to child objects. These settings vary depending on whether or not you install the cluster into an existing folder. Example 21.9. Required permissions and propagation settings vSphere object When required Propagate to children Permissions required vSphere vCenter Always False Listed required privileges vSphere vCenter Datacenter Existing folder False ReadOnly permission Installation program creates the folder True Listed required privileges vSphere vCenter Cluster Existing resource pool False ReadOnly permission VMs in cluster root True Listed required privileges vSphere vCenter Datastore Always False Listed required privileges vSphere Switch Always False ReadOnly permission vSphere Port Group Always False Listed required privileges vSphere vCenter Virtual Machine Folder Existing folder True Listed required privileges vSphere vCenter Resource Pool Existing resource pool True Listed required privileges For more information about creating an account with only the required privileges, see vSphere Permissions and User Management Tasks in the vSphere documentation. Using OpenShift Container Platform with vMotion If you intend to use vMotion in your vSphere environment, consider the following before installing an OpenShift Container Platform cluster.
OpenShift Container Platform generally supports compute-only vMotion; this support assumes that you meet all VMware best practices for vMotion. To help ensure the uptime of your compute and control plane nodes, ensure that you follow the VMware best practices for vMotion, and use VMware anti-affinity rules to improve the availability of OpenShift Container Platform during maintenance or hardware issues. For more information about vMotion and anti-affinity rules, see the VMware vSphere documentation for vMotion networking requirements and VM anti-affinity rules . Using Storage vMotion can cause issues and is not supported. If you are using vSphere volumes in your pods, migrating a VM across datastores, either manually or through Storage vMotion, causes invalid references within OpenShift Container Platform persistent volume (PV) objects that can result in data loss. OpenShift Container Platform does not support selective migration of VMDKs across datastores, using datastore clusters for VM provisioning or for dynamic or static provisioning of PVs, or using a datastore that is part of a datastore cluster for dynamic or static provisioning of PVs. Cluster resources When you deploy an OpenShift Container Platform cluster that uses installer-provisioned infrastructure, the installation program must be able to create several resources in your vCenter instance. A standard OpenShift Container Platform installation creates the following vCenter resources: 1 Folder 1 Tag category 1 Tag Virtual machines: 1 template 1 temporary bootstrap node 3 control plane nodes 3 compute machines Although these resources use 856 GB of storage, the bootstrap node is destroyed during the cluster installation process. A minimum of 800 GB of storage is required to use a standard cluster. If you deploy more compute machines, the OpenShift Container Platform cluster will use more storage. Cluster limits Available resources vary between clusters. The number of possible clusters within a vCenter is limited primarily by available storage space and any limitations on the number of required resources. Be sure to consider both limitations to the vCenter resources that the cluster creates and the resources that you require to deploy a cluster, such as IP addresses and networks. Networking requirements You must use the Dynamic Host Configuration Protocol (DHCP) for the network and ensure that the DHCP server is configured to provide persistent IP addresses to the cluster machines. You must also configure the DHCP server to provide the default gateway in the DHCP lease. All nodes must be in the same VLAN. You cannot scale the cluster using a second VLAN as a Day 2 operation. Additionally, you must create the following networking resources before you install the OpenShift Container Platform cluster: Note It is recommended that each OpenShift Container Platform node in the cluster have access to a Network Time Protocol (NTP) server that is discoverable via DHCP. Installation is possible without an NTP server. However, asynchronous server clocks cause errors, which an NTP server prevents. Required IP Addresses An installer-provisioned vSphere installation requires two static IP addresses: The API address is used to access the cluster API. The Ingress address is used for cluster ingress traffic. You must provide these IP addresses to the installation program when you install the OpenShift Container Platform cluster.
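For reference, these two addresses correspond to the apiVIP and ingressVIP fields under platform.vsphere in the install-config.yaml file, as shown in the sample file earlier in this chapter. A minimal sketch with placeholder values:
platform:
  vsphere:
    apiVIP: <api_vip_address>
    ingressVIP: <ingress_vip_address>
Replace the placeholders with two unused static IP addresses from the machine network; the same addresses must also back the DNS records described next.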
DNS records You must create DNS records for two static IP addresses in the appropriate DNS server for the vCenter instance that hosts your OpenShift Container Platform cluster. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify when you install the cluster. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 21.25. Required DNS records Component Record Description API VIP api.<cluster_name>.<base_domain>. This DNS A/AAAA or CNAME record must point to the load balancer for the control plane machines. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. Ingress VIP *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that points to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. 21.4.7. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.
If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 21.4.8. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. Prerequisites You have a machine that runs Linux, for example Red Hat Enterprise Linux 8, with 500 MB of local disk space. Important If you attempt to run the installation program on macOS, a known issue related to the golang compiler causes the installation of the OpenShift Container Platform cluster to fail. For more information about this issue, see the section named "Known Issues" in the OpenShift Container Platform 4.11 release notes document. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 21.4.9. Adding vCenter root CA certificates to your system trust Because the installation program requires access to your vCenter's API, you must add your vCenter's trusted root CA certificates to your system trust before you install an OpenShift Container Platform cluster. Procedure From the vCenter home page, download the vCenter's root CA certificates. Click Download trusted root CA certificates in the vSphere Web Services SDK section. The <vCenter>/certs/download.zip file downloads. Extract the compressed file that contains the vCenter root CA certificates. The contents of the compressed file resemble the following file structure: Add the files for your operating system to the system trust. 
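The listing of the extracted archive is not reproduced here. A representative layout, with illustrative directory names, resembles the following; only the lin directory is used by the Linux example that follows:
certs/lin (root CA certificates for Linux systems)
certs/mac (root CA certificates for macOS systems)
certs/win (root CA certificates for Windows systems)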
For example, on a Fedora operating system, run the following command: # cp certs/lin/* /etc/pki/ca-trust/source/anchors Update your system trust. For example, on a Fedora operating system, run the following command: # update-ca-trust extract 21.4.10. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on VMware vSphere. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select vsphere as the platform to target. Specify the name of your vCenter instance. Specify the user name and password for the vCenter account that has the required permissions to create the cluster. The installation program connects to your vCenter instance. Select the datacenter in your vCenter instance to connect to. Select the default vCenter datastore to use. Select the vCenter cluster to install the OpenShift Container Platform cluster in. The installation program uses the root resource pool of the vSphere cluster as the default resource pool. Select the network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. Enter the virtual IP address that you configured for control plane API access. Enter the virtual IP address that you configured for cluster ingress. Enter the base domain. This base domain must be the same one that you used in the DNS records that you configured. Enter a descriptive name for your cluster. The cluster name you enter must match the cluster name you specified when configuring the DNS records. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 21.4.10.1. 
Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 21.4.10.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 21.26. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters and hyphens ( - ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 21.4.10.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 21.27. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) cluster network provider to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. 
The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 21.4.10.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 21.28. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 and vCurrent . v4.11 enables the baremetal Operator, the marketplace Operator, and the openshift-samples content. vCurrent installs the recommended set of capabilities for the current version of OpenShift Container Platform. The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . Valid values are baremetal , marketplace and openshift-samples . You may specify multiple capabilities in this parameter. String array cgroupsV2 Enables Linux control groups version 2 (cgroups v2) on specific nodes in your cluster. The OpenShift Container Platform process for enabling cgroups v2 disables all cgroup version 1 controllers and hierarchies. The OpenShift Container Platform cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time. true compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. 
Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. 
Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms and IBM Cloud VPC. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . sshKey The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . 21.4.10.1.4. Additional VMware vSphere configuration parameters Additional VMware vSphere configuration parameters are described in the following table: Table 21.29. Additional VMware vSphere cluster parameters Parameter Description Values The fully-qualified hostname or IP address of the vCenter server. String The user name to use to connect to the vCenter instance with. This user must have at least the roles and privileges that are required for static or dynamic persistent volume provisioning in vSphere. String The password for the vCenter user name. String The name of the datacenter to use in the vCenter instance. String The name of the default datastore to use for provisioning volumes. String Optional. The absolute path of an existing folder where the installation program creates the virtual machines. If you do not provide this value, the installation program creates a folder that is named with the infrastructure ID in the datacenter virtual machine folder. String, for example, /<datacenter_name>/vm/<folder_name>/<subfolder_name> . Optional. The absolute path of an existing resource pool where the installer creates the virtual machines. If you do not specify a value, resources are installed in the root of the cluster /<datacenter_name>/host/<cluster_name>/Resources . String, for example, /<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>/<optional_nested_resource_pool_name> . The network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. String The vCenter cluster to install the OpenShift Container Platform cluster in. String The virtual IP (VIP) address that you configured for control plane API access. An IP address, for example 128.0.0.1 . The virtual IP (VIP) address that you configured for cluster ingress. An IP address, for example 128.0.0.1 . Optional. The disk provisioning method. This value defaults to the vSphere default storage policy if not set. Valid values are thin , thick , or eagerZeroedThick . 21.4.10.1.5. Optional VMware vSphere machine pool configuration parameters Optional VMware vSphere machine pool configuration parameters are described in the following table: Table 21.30. Optional VMware vSphere machine pool parameters Parameter Description Values The location from which the installer downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network. An HTTP or HTTPS URL, optionally with a SHA-256 checksum. For example, https://mirror.openshift.com/images/rhcos-<version>-vmware.<architecture>.ova . The size of the disk in gigabytes. Integer The total number of virtual processor cores to assign a virtual machine. The value of platform.vsphere.cpus must be a multiple of platform.vsphere.coresPerSocket value. 
Integer The number of cores per socket in a virtual machine. The number of virtual sockets on the virtual machine is platform.vsphere.cpus / platform.vsphere.coresPerSocket . The default value for control plane nodes and worker nodes is 4 and 2 , respectively. Integer The size of a virtual machine's memory in megabytes. Integer 21.4.10.2. Sample install-config.yaml file for an installer-provisioned VMware vSphere cluster You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 name: worker replicas: 3 platform: vsphere: 3 cpus: 2 coresPerSocket: 2 memoryMB: 8192 osDisk: diskSizeGB: 120 controlPlane: 4 name: master replicas: 3 platform: vsphere: 5 cpus: 4 coresPerSocket: 2 memoryMB: 16384 osDisk: diskSizeGB: 120 metadata: name: cluster 6 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: vsphere: vcenter: your.vcenter.server username: username password: password datacenter: datacenter defaultDatastore: datastore folder: folder resourcePool: resource_pool 7 diskType: thin 8 network: VM_Network cluster: vsphere_cluster_name 9 apiVIP: api_vip ingressVIP: ingress_vip fips: false pullSecret: '{"auths": ...}' sshKey: 'ssh-ed25519 AAAA...' 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 5 Optional: Provide additional configuration for the machine pool parameters for the compute and control plane machines. 6 The cluster name that you specified in your DNS records. 7 Optional: Provide an existing resource pool for machine creation. If you do not specify a value, the installation program uses the root resource pool of the vSphere cluster. 8 The vSphere disk provisioning method. 9 The vSphere cluster to install the OpenShift Container Platform cluster in. 21.4.10.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). 
Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. You must include vCenter's IP address and the IP range that you use for its machines. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 21.4.11. Network configuration phases There are two phases prior to OpenShift Container Platform installation where you can customize the network configuration. Phase 1 You can customize the following network-related fields in the install-config.yaml file before you create the manifest files: networking.networkType networking.clusterNetwork networking.serviceNetwork networking.machineNetwork For more information on these fields, refer to Installation configuration parameters . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. Important The CIDR range 172.17.0.0/16 is reserved by libVirt. You cannot use this range or any range that overlaps with this range for any networks in your cluster. Phase 2 After creating the manifest files by running openshift-install create manifests , you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You can use the manifest to specify advanced network configuration. You cannot override the values specified in phase 1 in the install-config.yaml file during phase 2. However, you can further customize the cluster network provider during phase 2. 21.4.12. 
Specifying advanced network configuration You can use advanced network configuration for your cluster network provider to integrate your cluster into your existing network environment. You can specify advanced network configuration only before you install the cluster. Important Customizing your network configuration by modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported. Prerequisites You have created the install-config.yaml file and completed any modifications to it. Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> 1 1 <installation_directory> specifies the name of the directory that contains the install-config.yaml file for your cluster. Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: Specify the advanced network configuration for your cluster in the cluster-network-03-config.yml file, such as in the following examples: Specify a different VXLAN port for the OpenShift SDN network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800 Enable IPsec for the OVN-Kubernetes network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {} Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program consumes the manifests/ directory when you create the Ignition config files. 21.4.13. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group and these fields cannot be changed: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network provider, such as OpenShift SDN or OVN-Kubernetes. You can specify the cluster network provider configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 21.4.13.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 21.31. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.serviceNetwork array A block of IP addresses for services. 
The OpenShift SDN and OVN-Kubernetes Container Network Interface (CNI) network providers support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the Container Network Interface (CNI) cluster network provider for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network provider, the kube-proxy configuration has no effect. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 21.32. defaultNetwork object Field Type Description type string Either OpenShiftSDN or OVNKubernetes . The cluster network provider is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OpenShift SDN Container Network Interface (CNI) cluster network provider by default. openshiftSDNConfig object This object is only valid for the OpenShift SDN cluster network provider. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes cluster network provider. Configuration for the OpenShift SDN CNI cluster network provider The following table describes the configuration fields for the OpenShift SDN Container Network Interface (CNI) cluster network provider. Table 21.33. openshiftSDNConfig object Field Type Description mode string Configures the network isolation mode for OpenShift SDN. The default value is NetworkPolicy . The values Multitenant and Subnet are available for backwards compatibility with OpenShift Container Platform 3.x but are not recommended. This value cannot be changed after cluster installation. mtu integer The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1450 . This value cannot be changed after cluster installation. vxlanPort integer The port to use for all VXLAN packets. The default value is 4789 . This value cannot be changed after cluster installation. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number. On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999 . 
Example OpenShift SDN configuration defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789 Configuration for the OVN-Kubernetes CNI cluster network provider The following table describes the configuration fields for the OVN-Kubernetes CNI cluster network provider. Table 21.34. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify an empty object to enable IPsec encryption. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. Table 21.35. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 21.36. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. 
Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {} kubeProxyConfig object configuration The values for the kubeProxyConfig object are defined in the following table: Table 21.37. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 21.4.14. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 21.4.15. 
Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.11. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture in the Product Variant drop-down menu. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.11 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.11 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.11 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.11 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 21.4.16. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 21.4.17. Creating registry storage After you install the cluster, you must create storage for the registry Operator. 21.4.17.1. 
Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . 21.4.17.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 21.4.17.2.1. Configuring registry storage for VMware vSphere As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites Cluster administrator permissions. A cluster on VMware vSphere. Persistent storage provisioned for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have "100Gi" capacity. Important Testing shows issues with using the NFS server on RHEL as storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: 1 1 Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when you replicate to more than one replica. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m 21.4.17.2.2.
Configuring block registry storage for VMware vSphere To allow the image registry to use block storage types such as vSphere Virtual Machine Disk (VMDK) during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes are supported but not recommended for use with image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. Procedure Enter the following command to set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy, and runs with only 1 replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Enter the following command to create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Enter the following command to edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 By creating a custom PVC, you can leave the claim field blank for the default automatic creation of an image-registry-storage PVC. For instructions about configuring registry storage so that it references the correct PVC, see Configuring the registry for vSphere . 21.4.18. Backing up VMware vSphere volumes OpenShift Container Platform provisions new volumes as independent persistent disks to freely attach and detach the volume on any node in the cluster. As a consequence, it is not possible to back up volumes that use snapshots, or to restore volumes from snapshots. See Snapshot Limitations for more information. Procedure To create a backup of persistent volumes: Stop the application that is using the persistent volume. Clone the persistent volume. Restart the application. Create a backup of the cloned volume. Delete the cloned volume. 21.4.19. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.11, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. 
Additional resources See About remote health monitoring for more information about the Telemetry service 21.4.20. Configuring an external load balancer You can configure an OpenShift Container Platform cluster to use an external load balancer in place of the default load balancer. Important Configuring an external load balancer depends on your vendor's load balancer. The information and examples in this section are for guideline purposes only. Consult the vendor documentation for more specific information about the vendor's load balancer. Red Hat supports the following services for an external load balancer: Ingress Controller OpenShift API OpenShift MachineConfig API You can choose whether you want to configure one or all of these services for an external load balancer. Configuring only the Ingress Controller service is a common configuration option. To better understand each service, view the following diagrams: Figure 21.7. Example network workflow that shows an Ingress Controller operating in an OpenShift Container Platform environment Figure 21.8. Example network workflow that shows an OpenShift API operating in an OpenShift Container Platform environment Figure 21.9. Example network workflow that shows an OpenShift MachineConfig API operating in an OpenShift Container Platform environment Considerations For a front-end IP address, you can use the same IP address for the front-end IP address, the Ingress Controller's load balancer, and API load balancer. Check the vendor's documentation for this capability. For a back-end IP address, ensure that an IP address for an OpenShift Container Platform control plane node does not change during the lifetime of the external load balancer. You can achieve this by completing one of the following actions: Assign a static IP address to each control plane node. Configure each node to receive the same IP address from the DHCP every time the node requests a DHCP lease. Depending on the vendor, the DHCP lease might be in the form of an IP reservation or a static DHCP assignment. Manually define each node that runs the Ingress Controller in the external load balancer for the Ingress Controller back-end service. For example, if the Ingress Controller moves to an undefined node, a connection outage can occur. OpenShift API prerequisites You defined a front-end IP address. TCP ports 6443 and 22623 are exposed on the front-end IP address of your load balancer. Check the following items: Port 6443 provides access to the OpenShift API service. Port 22623 can provide ignition startup configurations to nodes. The front-end IP address and port 6443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster. The front-end IP address and port 22623 are reachable only by OpenShift Container Platform nodes. The load balancer backend can communicate with OpenShift Container Platform control plane nodes on ports 6443 and 22623. Ingress Controller prerequisites You defined a front-end IP address. TCP ports 443 and 80 are exposed on the front-end IP address of your load balancer. The front-end IP address, port 80 and port 443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster. The front-end IP address, port 80 and port 443 are reachable to all nodes that operate in your OpenShift Container Platform cluster. The load balancer backend can communicate with OpenShift Container Platform nodes that run the Ingress Controller on ports 80, 443, and 1936.
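As an optional sanity check before you configure health checks, you can confirm from a host outside the cluster that the load balancer front end accepts TCP connections on the required ports. The following commands are a minimal sketch that assumes the nc (netcat) utility is installed and that <load_balancer_front_end_IP_address> is a placeholder for your front-end IP address: USD nc -vz <load_balancer_front_end_IP_address> 6443 USD nc -vz <load_balancer_front_end_IP_address> 22623 USD nc -vz <load_balancer_front_end_IP_address> 443 USD nc -vz <load_balancer_front_end_IP_address> 80 A successful connection on each port indicates that the front end is exposed as described in the prerequisites. A refused or timed-out connection usually points to a firewall rule or a missing listener on the load balancer.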
Prerequisite for health check URL specifications You can configure most load balancers by setting health check URLs that determine if a service is available or unavailable. OpenShift Container Platform provides these health checks for the OpenShift API, Machine Configuration API, and Ingress Controller backend services. The following examples demonstrate health check specifications for the previously listed backend services: Example of a Kubernetes API health check specification Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10 Example of a Machine Config API health check specification Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10 Example of an Ingress Controller health check specification Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10 Procedure Configure the HAProxy Ingress Controller, so that you can enable access to the cluster from your load balancer on ports 6443, 443, and 80: Example HAProxy configuration #... listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2 # ...
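Before you verify the endpoints with curl, it can help to confirm that the HAProxy configuration parses cleanly and that the service has picked up your changes. The following commands are a minimal sketch that assumes HAProxy runs as a systemd service on a RHEL-based load balancer host and that the configuration file is at /etc/haproxy/haproxy.cfg; adjust the path and service name for your environment: USD haproxy -c -f /etc/haproxy/haproxy.cfg USD sudo systemctl restart haproxy USD sudo systemctl status haproxy If SELinux is enforcing on the load balancer host, you might also need to permit HAProxy to bind to nonstandard ports, for example by running sudo setsebool -P haproxy_connect_any 1 .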
Use the curl CLI command to verify that the external load balancer and its resources are operational: Verify that the cluster machine configuration API is accessible to the Kubernetes API server resource, by running the following command and observing the response: USD curl https://<loadbalancer_ip_address>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that the cluster machine configuration API is accessible to the Machine config server resource, by running the following command and observing the output: USD curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK Content-Length: 0 Verify that the controller is accessible to the Ingress Controller resource on port 80, by running the following command and observing the output: USD curl -I -L -H "Host: console-openshift-console.apps.<cluster_name>.<base_domain>" http://<load_balancer_front_end_IP_address> If the configuration is correct, the output from the command shows the following response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache Verify that the controller is accessible to the Ingress Controller resource on port 443, by running the following command and observing the output: USD curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain> If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private Configure the DNS records for your cluster to target the front-end IP addresses of the external load balancer. You must update records to your DNS server for the cluster API and applications over the load balancer. Examples of modified DNS records <load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End <load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End Important DNS propagation might take some time for each DNS record to become available. Ensure that each DNS record propagates before validating each record. 
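Before you run the verification commands that follow, you can confirm that the new records have propagated to the resolver that your clients use. The following commands are a minimal sketch that assumes the dig utility is available and that api.<cluster_name>.<base_domain> and console-openshift-console.apps.<cluster_name>.<base_domain> are placeholders for your own records: USD dig +short api.<cluster_name>.<base_domain> USD dig +short console-openshift-console.apps.<cluster_name>.<base_domain> Each command should return the front-end IP address of the external load balancer. An empty response usually means that the record has not propagated yet.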
Use the curl CLI command to verify that the external load balancer and DNS record configuration are operational: Verify that you can access the cluster API, by running the following command and observing the output: USD curl https://api.<cluster_name>.<base_domain>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that you can access the cluster machine configuration, by running the following command and observing the output: USD curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK Content-Length: 0 Verify that you can access each cluster application on port, by running the following command and observing the output: USD curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private Verify that you can access each cluster application on port 443, by running the following command and observing the output: USD curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private 21.4.21. steps Customize your cluster . If necessary, you can opt out of remote health reporting . Set up your registry and configure registry storage . Optional: View the events from the vSphere Problem Detector Operator to determine if the cluster has permission or storage configuration issues. 21.5. Installing a cluster on vSphere with user-provisioned infrastructure In OpenShift Container Platform version 4.11, you can install a cluster on VMware vSphere infrastructure that you provision. Note OpenShift Container Platform supports deploying a cluster to a single VMware vCenter only. Deploying a cluster with machines/machine sets on multiple vCenters is not supported. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. 
Installing a cluster with infrastructure you provide requires knowledge of the vSphere platform and the installation process of OpenShift Container Platform. Use the user-provisioned infrastructure installation instructions as a guide; you are free to create the required resources through other methods. 21.5.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You provisioned persistent storage for your cluster. To deploy a private image registry, your storage must provide ReadWriteMany access modes. Completing the installation requires that you upload the Red Hat Enterprise Linux CoreOS (RHCOS) OVA on vSphere hosts. The machine from which you complete this process requires access to port 443 on the vCenter and ESXi hosts. You verified that port 443 is accessible. If you use a firewall, you confirmed with the administrator that port 443 is accessible. Control plane nodes must be able to reach vCenter and ESXi hosts on port 443 for the installation to succeed. If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 21.5.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.11, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 21.5.3. VMware vSphere infrastructure requirements You must install the OpenShift Container Platform cluster on a VMware vSphere version 7 instance that meets the requirements for the components that you use. Note OpenShift Container Platform version 4.11 does not support VMware vSphere version 8.0. You can host the VMware vSphere infrastructure on-premise or on a VMware Cloud Verified provider that meets the requirements outlined in the following table: Table 21.38. Version requirements for vSphere virtual environments Virtual environment product Required version VM hardware version 15 or later vSphere ESXi hosts 7 vCenter host 7 Important Installing a cluster on VMware vSphere version 7.0 Update 1 or earlier is now deprecated. These versions are still fully supported, but version 4.11 of OpenShift Container Platform requires vSphere virtual hardware version 15 or later. Hardware version 15 is now the default for vSphere virtual machines in OpenShift Container Platform. To update the hardware version for your vSphere nodes, see the "Updating hardware on nodes running in vSphere" article. 
If your vSphere nodes are below hardware version 15 or your VMware vSphere version is earlier than 6.7.3, upgrading from OpenShift Container Platform 4.10 to OpenShift Container Platform 4.11 is not available. Table 21.39. Minimum supported vSphere version for VMware components Component Minimum supported versions Description Hypervisor vSphere 7 with HW version 15 This version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. For more information about supported hardware on the latest version of Red Hat Enterprise Linux (RHEL) that is compatible with RHCOS, see Hardware on the Red Hat Customer Portal. Storage with in-tree drivers vSphere 7 This plugin creates vSphere storage by using the in-tree storage drivers for vSphere included in OpenShift Container Platform. Optional: Networking (NSX-T) vSphere 7 vSphere 7 is required for OpenShift Container Platform. For more information about the compatibility of NSX and OpenShift Container Platform, see the Release Notes section of VMware's NSX container plugin documentation . Important You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation. 21.5.4. VMware vSphere CSI Driver Operator requirements To install the vSphere CSI Driver Operator, the following requirements must be met: VMware vSphere version 7.0 Update 1 or later Virtual machines of hardware version 15 or later No third-party vSphere CSI driver already installed in the cluster Important If a third-party vSphere CSI driver is present in the cluster, OpenShift Container Platform does not overwrite it. If you continue with the third-party vSphere CSI driver when upgrading to the major version of OpenShift Container Platform, the oc CLI prompts you with the following message: VSphereCSIDriverOperatorCRUpgradeable: VMwareVSphereControllerUpgradeable: found existing unsupported csi.vsphere.vmware.com driver The message informs you that Red Hat does not support the third-party vSphere CSI driver during an OpenShift Container Platform upgrade operation. You can choose to ignore this message and continue with the upgrade operation. Additional resources To remove a third-party vSphere CSI driver, see Removing a third-party vSphere CSI Driver . To update the hardware version for your vSphere nodes, see Updating hardware on nodes running in vSphere . 21.5.5. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 21.5.5.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 21.40. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. 
Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 8.6 and later. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 8 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 21.5.5.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 21.41. Minimum resource requirements Machine Operating System vCPU Virtual RAM Storage Input/Output Per Second (IOPS) [1] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [2] 2 8 GB 100 GB 300 OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 21.5.5.3. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 21.5.5.4. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines. 
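For example, one way to meet this recommendation with ISC dhcpd is a host reservation for each cluster machine. This is a minimal sketch only; the MAC address is a placeholder from the VMware OUI range, and the IP address and hostname reuse the example values shown later in this section:
host master0 {
  hardware ethernet 00:50:56:ab:cd:01;
  fixed-address 192.168.1.97;
  option host-name "master0.ocp4.example.com";
}
The hardware ethernet value must match the MAC address of the VM network interface, fixed-address pins the persistent IP address, and option host-name supplies the hostname to the node. Declare the DNS server with option domain-name-servers 192.168.1.5; at the subnet or global scope, and repeat the host declaration for the bootstrap machine and each control plane and compute machine.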
Note If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 21.5.5.4.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 21.5.5.4.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 21.42. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 21.43. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 21.44. 
Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports Ethernet adaptor hardware address requirements When provisioning VMs for the cluster, the ethernet interfaces configured for each VM must use a MAC address from the VMware Organizationally Unique Identifier (OUI) allocation ranges: 00:05:69:00:00:00 to 00:05:69:FF:FF:FF 00:0c:29:00:00:00 to 00:0c:29:FF:FF:FF 00:1c:14:00:00:00 to 00:1c:14:FF:FF:FF 00:50:56:00:00:00 to 00:50:56:3F:FF:FF If a MAC address outside the VMware OUI is used, the cluster installation will not succeed. NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines read the information and can sync the clock with the NTP servers. Additional resources Configuring chrony time service 21.5.5.5. User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. Note It is recommended to use a DHCP server to provide the hostnames to each cluster node. See the DHCP recommendations for user-provisioned infrastructure section for more information. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 21.45. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. 
A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Control plane machines <master><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <worker><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 21.5.5.5.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 21.10. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; master0.ocp4.example.com. IN A 192.168.1.97 5 master1.ocp4.example.com. IN A 192.168.1.98 6 master2.ocp4.example.com. IN A 192.168.1.99 7 ; worker0.ocp4.example.com. IN A 192.168.1.11 8 worker1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 
Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 21.11. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR master0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR master1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR master2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com. 8 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. 21.5.5.6. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application ingress load balancing infrastructure. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the API routes. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 21.46. 
API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the ingress routes. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 21.47. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 21.5.5.6.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 21.12. 
Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. The health check probes the /readyz endpoint of the API server, as described in the API load balancer requirements. 2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 3 Port 22623 handles the machine config server traffic and points to the control plane machines. 5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 21.5.6. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. This includes configuring IP networking and network connectivity for your cluster nodes, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section.
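For example, if you run the HAProxy load balancer from the preceding sample on a RHEL host with firewalld enabled, you can open the load balancer front-end ports and set the SELinux boolean mentioned earlier with commands similar to the following. This is a sketch only; adapt it to the firewall solution and security policy that you actually use: USD sudo firewall-cmd --permanent --add-port=6443/tcp --add-port=22623/tcp --add-port=443/tcp --add-port=80/tcp USD sudo firewall-cmd --reload USD sudo setsebool -P haproxy_connect_any=1 The node-to-node ports listed in the Networking requirements for user-provisioned infrastructure section are typically allowed on the network firewall between the machines rather than configured on the RHCOS nodes themselves.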
Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure If you are using DHCP to provide the IP networking configuration to your cluster nodes, configure your DHCP service. Add persistent IP addresses for the nodes to your DHCP server configuration. In your configuration, match the MAC address of the relevant network interface to the intended IP address for each node. When you use DHCP to configure IP addressing for the cluster machines, the machines also obtain the DNS server information through DHCP. Define the persistent DNS server address that is used by the cluster nodes through your DHCP server configuration. Note If you are not using a DHCP service, you must provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. Define the hostnames of your cluster nodes in your DHCP server configuration. See the Setting the cluster node hostnames through DHCP section for details about hostname considerations. Note If you are not using a DHCP service, the cluster nodes obtain their hostname through a reverse DNS lookup. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Important By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers. Setup the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. 
See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. 21.5.7. Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure. Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 604800 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com. 604800 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: USD dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console: USD dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name. Check that the result points to the IP address of the bootstrap node: USD dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain> Example output bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 
1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output 96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. 21.5.8. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. 
If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide the key to the installation program. 21.5.9. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 21.5.10. Manually creating the installation configuration file Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. 
Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Note For some platform types, you can alternatively run ./openshift-install create install-config --dir <installation_directory> to generate an install-config.yaml file. You can provide details about your cluster configuration at the prompts. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. 21.5.10.1. Sample install-config.yaml file for VMware vSphere You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 name: worker replicas: 0 3 controlPlane: 4 name: master replicas: 3 5 metadata: name: test 6 platform: vsphere: vcenter: your.vcenter.server 7 username: username 8 password: password 9 datacenter: datacenter 10 defaultDatastore: datastore 11 folder: "/<datacenter_name>/vm/<folder_name>/<subfolder_name>" 12 resourcePool: "/<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>" 13 diskType: thin 14 fips: false 15 pullSecret: '{"auths": ...}' 16 sshKey: 'ssh-ed25519 AAAA...' 17 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, ( - ), and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used. 3 You must set the value of the replicas parameter to 0 . This parameter controls the number of workers that the cluster creates and manages for you, which are functions that the cluster does not perform when you use user-provisioned infrastructure. You must manually deploy worker machines for the cluster to use before you finish installing OpenShift Container Platform. 5 The number of control plane machines that you add to the cluster. Because the cluster uses this values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 6 The cluster name that you specified in your DNS records. 7 The fully-qualified hostname or IP address of the vCenter server. Important The Cluster Cloud Controller Manager Operator performs a connectivity check on a provided hostname or IP address. Ensure that you specify a hostname or an IP address to a reachable vCenter server. If you provide metadata to a non-existent vCenter server, installation of the cluster fails at the bootstrap stage. 8 The name of the user for accessing the server. 9 The password associated with the vSphere user. 10 The vSphere datacenter. 11 The default vSphere datastore to use. 12 Optional parameter: For installer-provisioned infrastructure, the absolute path of an existing folder where the installation program creates the virtual machines, for example, /<datacenter_name>/vm/<folder_name>/<subfolder_name> . 
If you do not provide this value, the installation program creates a top-level folder in the datacenter virtual machine folder that is named with the infrastructure ID. If you are providing the infrastructure for the cluster and you do not want to use the default StorageClass object, named thin , you can omit the folder parameter from the install-config.yaml file. 13 Optional parameter: For installer-provisioned infrastructure, the absolute path of an existing folder where the installation program creates the virtual machines, for example, /<datacenter_name>/vm/<folder_name>/<subfolder_name> . If you do not provide this value, the installation program creates a top-level folder in the datacenter virtual machine folder that is named with the infrastructure ID. If you are providing the infrastructure for the cluster, omit this parameter. 14 The vSphere disk provisioning method. 15 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. 16 The pull secret that you obtained from OpenShift Cluster Manager Hybrid Cloud Console . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 17 The public portion of the default SSH key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). 21.5.10.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. 
For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. You must include vCenter's IP address and the IP range that you use for its machines. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 21.5.11. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. 
By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines and compute machine sets: USD rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage these resources yourself, you do not have to initialize them. You can preserve the machine set files to create compute machines by using the machine API, but you must update references to them to match your environment. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 21.5.12. Extracting the infrastructure name The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in VMware vSphere. If you plan to use the cluster identifier as the name of your virtual machine folder, you must extract it. Prerequisites You obtained the OpenShift Container Platform installation program and the pull secret for your cluster. You generated the Ignition config files for your cluster. You installed the jq package. Procedure To extract and view the infrastructure name from the Ignition config file metadata, run the following command: USD jq -r .infraID <installation_directory>/metadata.json 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output openshift-vw9j6 1 1 The output of this command is your cluster name and a random string. 21.5.13. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on user-provisioned infrastructure on VMware vSphere, you must install Red Hat Enterprise Linux CoreOS (RHCOS) on vSphere hosts. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. 
If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS machines have rebooted. Prerequisites You have obtained the Ignition config files for your cluster. You have access to an HTTP server that you can access from your computer and that the machines that you create can access. You have created a vSphere cluster . Procedure Upload the bootstrap Ignition config file, which is named <installation_directory>/bootstrap.ign , that the installation program created to your HTTP server. Note the URL of this file. Save the following secondary Ignition config file for your bootstrap node to your computer as <installation_directory>/merge-bootstrap.ign : { "ignition": { "config": { "merge": [ { "source": "<bootstrap_ignition_config_url>", 1 "verification": {} } ] }, "timeouts": {}, "version": "3.2.0" }, "networkd": {}, "passwd": {}, "storage": {}, "systemd": {} } 1 Specify the URL of the bootstrap Ignition config file that you hosted. When you create the virtual machine (VM) for the bootstrap machine, you use this Ignition config file. Locate the following Ignition config files that the installation program created: <installation_directory>/master.ign <installation_directory>/worker.ign <installation_directory>/merge-bootstrap.ign Convert the Ignition config files to Base64 encoding. Later in this procedure, you must add these files to the extra configuration parameter guestinfo.ignition.config.data in your VM. For example, if you use a Linux operating system, you can use the base64 command to encode the files. USD base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64 USD base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64 USD base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64 Important If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. Obtain the RHCOS OVA image. Images are available from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download an image with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image version that matches your OpenShift Container Platform version if it is available. The filename contains the OpenShift Container Platform version number in the format rhcos-vmware.<architecture>.ova . In the vSphere Client, create a folder in your datacenter to store your VMs. Click the VMs and Templates view. Right-click the name of your datacenter. Click New Folder New VM and Template Folder . In the window that is displayed, enter the folder name. If you did not specify an existing folder in the install-config.yaml file, then create a folder with the same name as the infrastructure ID. You use this folder name so vCenter dynamically provisions storage in the appropriate location for its Workspace configuration. In the vSphere Client, create a template for the OVA image and then clone the template as needed. Note In the following steps, you create a template and then clone the template for all of your cluster machines. You then provide the location for the Ignition config file for that cloned machine type when you provision the VMs. From the Hosts and Clusters tab, right-click your cluster name and select Deploy OVF Template . 
On the Select an OVF tab, specify the name of the RHCOS OVA file that you downloaded. On the Select a name and folder tab, set a Virtual machine name for your template, such as Template-RHCOS . Click the name of your vSphere cluster and select the folder you created in the step. On the Select a compute resource tab, click the name of your vSphere cluster. On the Select storage tab, configure the storage options for your VM. Select Thin Provision or Thick Provision , based on your storage preferences. Select the datastore that you specified in your install-config.yaml file. On the Select network tab, specify the network that you configured for the cluster, if available. When creating the OVF template, do not specify values on the Customize template tab or configure the template any further. Important Do not start the original VM template. The VM template must remain off and must be cloned for new RHCOS machines. Starting the VM template configures the VM template as a VM on the platform, which prevents it from being used as a template that machine sets can apply configurations to. Optional: Update the configured virtual hardware version in the VM template, if necessary. Follow Upgrading a virtual machine to the latest hardware version in the VMware documentation for more information. Important It is recommended that you update the hardware version of the VM template to version 15 before creating VMs from it, if necessary. Using hardware version 13 for your cluster nodes running on vSphere is now deprecated. If your imported template defaults to hardware version 13, you must ensure that your ESXi host is on 6.7U3 or later before upgrading the VM template to hardware version 15. If your vSphere version is less than 6.7U3, you can skip this upgrade step; however, a future version of OpenShift Container Platform is scheduled to remove support for hardware version 13 and vSphere versions less than 6.7U3. After the template deploys, deploy a VM for a machine in the cluster. Right-click the template name and click Clone Clone to Virtual Machine . On the Select a name and folder tab, specify a name for the VM. You might include the machine type in the name, such as control-plane-0 or compute-1 . Note Ensure that all virtual machine names across a vSphere installation are unique. On the Select a name and folder tab, select the name of the folder that you created for the cluster. On the Select a compute resource tab, select the name of a host in your datacenter. On the Select clone options , select Customize this virtual machine's hardware . Optional: On the Customize hardware tab, click VM Options Advanced . Important The following configuration suggestions are for example purposes only. As a cluster administrator, you must configure resources according to the resource demands placed on your cluster. To best manage cluster resources, consider creating a resource pool from the cluster's root resource pool. Override default DHCP networking in vSphere. To enable static IP networking: Set your static IP configuration: USD export IPCFG="ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]" Example command USD export IPCFG="ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8" Click Edit Configuration , and on the Configuration Parameters window, search the list of available parameters for steal clock accounting ( stealclock.enable ). Set the parameter to the value of TRUE . 
Enabling steal clock accounting can help with troubleshooting cluster issues. Click Add Configuration Params . Define the following parameter names and values: disk.EnableUUID : Specify TRUE . stealclock.enable : If this parameter was not defined, add it and specify TRUE . Create a child resource pool from the cluster's root resource pool. Perform resource allocation in this child resource pool. In the Virtual Hardware panel of the Customize hardware tab, modify the specified values as required. Ensure that the amount of RAM, CPU, and disk storage meets the minimum requirements for the machine type. Complete the configuration and power on the VM. Check the console output to verify that Ignition ran. Example command Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied Create the rest of the machines for your cluster by following the preceding steps for each machine. Important You must create the bootstrap and control plane machines at this time. Because some pods are deployed on compute machines by default, also create at least two compute machines before you install the cluster. 21.5.14. Adding more compute machines to a cluster in vSphere You can add more compute machines to a user-provisioned OpenShift Container Platform cluster on VMware vSphere. Prerequisites Obtain the base64-encoded Ignition file for your compute machines. You have access to the vSphere template that you created for your cluster. Procedure After the template deploys, deploy a VM for a machine in the cluster. Right-click the template's name and click Clone Clone to Virtual Machine . On the Select a name and folder tab, specify a name for the VM. You might include the machine type in the name, such as compute-1 . Note Ensure that all virtual machine names across a vSphere installation are unique. On the Select a name and folder tab, select the name of the folder that you created for the cluster. On the Select a compute resource tab, select the name of a host in your datacenter. On the Select clone options , select Customize this virtual machine's hardware . On the Customize hardware tab, click VM Options Advanced . Click Edit Configuration , and on the Configuration Parameters window, click Add Configuration Params . Define the following parameter names and values: guestinfo.ignition.config.data : Paste the contents of the base64-encoded compute Ignition config file for this machine type. guestinfo.ignition.config.data.encoding : Specify base64 . disk.EnableUUID : Specify TRUE . In the Virtual Hardware panel of the Customize hardware tab, modify the specified values as required. Ensure that the amount of RAM, CPU, and disk storage meets the minimum requirements for the machine type. Also, make sure to select the correct network under Add network adapter if there are multiple networks available. Complete the configuration and power on the VM. Continue to create more compute machines for your cluster. 21.5.15. Disk partitioning In most cases, data partitions are originally created by installing RHCOS, rather than by installing another operating system. In such cases, the OpenShift Container Platform installer should be allowed to configure your disk partitions. However, there are two cases where you might want to intervene to override the default partitioning when installing an OpenShift Container Platform node: Create separate partitions: For greenfield installations on an empty disk, you might want to add separate storage to a partition. 
This is officially supported for making /var or a subdirectory of /var , such as /var/lib/etcd , a separate partition, but not both. Important For disk sizes larger than 100GB, and especially disk sizes larger than 1TB, create a separate /var partition. See "Creating a separate /var partition" and this Red Hat Knowledgebase article for more information. Important Kubernetes supports only two file system partitions. If you add more than one partition to the original configuration, Kubernetes cannot monitor all of them. Retain existing partitions: For a brownfield installation where you are reinstalling OpenShift Container Platform on an existing node and want to retain data partitions installed from your operating system, there are both boot arguments and options to coreos-installer that allow you to retain existing data partitions. Creating a separate /var partition In general, disk partitioning for OpenShift Container Platform should be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Important For disk sizes larger than 100GB, and especially larger than 1TB, create a separate /var partition. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config manifest that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation. Procedure Create a directory to hold the OpenShift Container Platform installation files: USD mkdir USDHOME/clusterconfig Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted: USD openshift-install create manifests --dir USDHOME/clusterconfig ? SSH Public Key ... USD ls USDHOME/clusterconfig/openshift/ 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ... Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. 
This example places the /var directory on a separate partition: variant: openshift version: 4.11.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum value of 25000 mebibytes is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for worker nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories: USD openshift-install create ignition-configs --dir USDHOME/clusterconfig USD ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign Now you can use the Ignition config files as input to the vSphere installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems. 21.5.16. Updating the bootloader using bootupd To update the bootloader by using bootupd , you must either install bootupd on RHCOS machines manually or provide a machine config with the enabled systemd unit. Unlike grubby or other bootloader tools, bootupd does not manage kernel space configuration such as passing kernel arguments. After you have installed bootupd , you can manage it remotely from the OpenShift Container Platform cluster. Note It is recommended that you use bootupd only on bare metal or virtualized hypervisor installations, such as for protection against the BootHole vulnerability. Manual install method You can manually install bootupd by using the bootupctl command-line tool. Inspect the system status: # bootupctl status Example output for x86_64 Component EFI Installed: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 Update: At latest version Example output for aarch64 Component EFI Installed: grub2-efi-aa64-1:2.02-99.el8_4.1.aarch64,shim-aa64-15.4-2.el8_1.aarch64 Update: At latest version RHCOS images created without bootupd installed on them require an explicit adoption phase. If the system status is Adoptable , perform the adoption: # bootupctl adopt-and-update Example output Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 If an update is available, apply the update so that the changes take effect on the next reboot: # bootupctl update Example output Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 Machine config method Another way to enable bootupd is by providing a machine config.
Provide a machine config file with the enabled systemd unit, as shown in the following example: Example output variant: rhcos version: 1.1.0 systemd: units: - name: custom-bootupd-auto.service enabled: true contents: | [Unit] Description=Bootupd automatic update [Service] ExecStart=/usr/bin/bootupctl update RemainAfterExit=yes [Install] WantedBy=multi-user.target 21.5.17. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.11. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture in the Product Variant drop-down menu. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.11 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.11 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.11 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.11 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 21.5.18. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. 
You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Your machines have direct internet access or have an HTTP or HTTPS proxy available. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.24.0 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. 21.5.19. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 21.5.20. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.24.0 master-1 Ready master 63m v1.24.0 master-2 Ready master 64m v1.24.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. 
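If the list of CSRs is long, you can narrow the output to pending requests only; for example:
USD oc get csr | grep -w Pending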
If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.24.0 master-1 Ready master 73m v1.24.0 master-2 Ready master 74m v1.24.0 worker-0 Ready worker 11m v1.24.0 worker-1 Ready worker 11m v1.24.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 21.5.21. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. 
Prerequisites Your control plane has initialized. Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.11.0 True False False 19m baremetal 4.11.0 True False False 37m cloud-credential 4.11.0 True False False 40m cluster-autoscaler 4.11.0 True False False 37m config-operator 4.11.0 True False False 38m console 4.11.0 True False False 26m csi-snapshot-controller 4.11.0 True False False 37m dns 4.11.0 True False False 37m etcd 4.11.0 True False False 36m image-registry 4.11.0 True False False 31m ingress 4.11.0 True False False 30m insights 4.11.0 True False False 31m kube-apiserver 4.11.0 True False False 26m kube-controller-manager 4.11.0 True False False 36m kube-scheduler 4.11.0 True False False 36m kube-storage-version-migrator 4.11.0 True False False 37m machine-api 4.11.0 True False False 29m machine-approver 4.11.0 True False False 37m machine-config 4.11.0 True False False 36m marketplace 4.11.0 True False False 37m monitoring 4.11.0 True False False 29m network 4.11.0 True False False 38m node-tuning 4.11.0 True False False 37m openshift-apiserver 4.11.0 True False False 32m openshift-controller-manager 4.11.0 True False False 30m openshift-samples 4.11.0 True False False 32m operator-lifecycle-manager 4.11.0 True False False 37m operator-lifecycle-manager-catalog 4.11.0 True False False 37m operator-lifecycle-manager-packageserver 4.11.0 True False False 32m service-ca 4.11.0 True False False 38m storage 4.11.0 True False False 37m Configure the Operators that are not available. 21.5.21.1. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . 21.5.21.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 21.5.21.2.1. Configuring registry storage for VMware vSphere As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites Cluster administrator permissions. A cluster on VMware vSphere. Persistent storage provisioned for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have "100Gi" capacity. Important Testing shows issues with using the NFS server on RHEL as storage backend for core services. 
This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: 1 1 Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when you replicate to more than one replica. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m 21.5.21.2.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 21.5.21.2.3. Configuring block registry storage for VMware vSphere To allow the image registry to use block storage types such as vSphere Virtual Machine Disk (VMDK) during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes are supported but not recommended for use with image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. Procedure Enter the following command to set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy, and runs with only 1 replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode.
Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Enter the following command to create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Enter the following command to edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 By creating a custom PVC, you can leave the claim field blank for the default automatic creation of an image-registry-storage PVC. For instructions about configuring registry storage so that it references the correct PVC, see Configuring the registry for vSphere . 21.5.22. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration. Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.11.0 True False False 19m baremetal 4.11.0 True False False 37m cloud-credential 4.11.0 True False False 40m cluster-autoscaler 4.11.0 True False False 37m config-operator 4.11.0 True False False 38m console 4.11.0 True False False 26m csi-snapshot-controller 4.11.0 True False False 37m dns 4.11.0 True False False 37m etcd 4.11.0 True False False 36m image-registry 4.11.0 True False False 31m ingress 4.11.0 True False False 30m insights 4.11.0 True False False 31m kube-apiserver 4.11.0 True False False 26m kube-controller-manager 4.11.0 True False False 36m kube-scheduler 4.11.0 True False False 36m kube-storage-version-migrator 4.11.0 True False False 37m machine-api 4.11.0 True False False 29m machine-approver 4.11.0 True False False 37m machine-config 4.11.0 True False False 36m marketplace 4.11.0 True False False 37m monitoring 4.11.0 True False False 29m network 4.11.0 True False False 38m node-tuning 4.11.0 True False False 37m openshift-apiserver 4.11.0 True False False 32m openshift-controller-manager 4.11.0 True False False 30m openshift-samples 4.11.0 True False False 32m operator-lifecycle-manager 4.11.0 True False False 37m operator-lifecycle-manager-catalog 4.11.0 True False False 37m operator-lifecycle-manager-packageserver 4.11.0 True False False 32m service-ca 4.11.0 True False False 38m storage 4.11.0 True False False 37m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... 
The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from the Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the previous command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the previous command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Post-installation machine configuration tasks documentation for more information. You can add extra compute machines after the cluster installation is completed by following Adding compute machines to vSphere . 21.5.23. Configuring vSphere DRS anti-affinity rules for control plane nodes vSphere Distributed Resource Scheduler (DRS) anti-affinity rules can be configured to support higher availability of OpenShift Container Platform control plane nodes. Anti-affinity rules ensure that the vSphere Virtual Machines for the OpenShift Container Platform control plane nodes are not scheduled to the same vSphere Host. Important The following information applies to compute DRS only and does not apply to storage DRS. The govc command is an open-source command available from VMware; it is not available from Red Hat. The govc command is not supported by Red Hat support. Instructions for downloading and installing govc are found on the VMware documentation website. Create an anti-affinity rule by running the following command: Example command USD govc cluster.rule.create \ -name openshift4-control-plane-group \ -dc MyDatacenter -cluster MyCluster \ -enable \ -anti-affinity master-0 master-1 master-2 After creating the rule, your control plane nodes are automatically migrated by vSphere so they are not running on the same hosts.
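If you want to confirm that the rule was created, you can list the rules that are defined for the cluster. The following command is a minimal sketch that reuses the example datacenter and cluster names from the preceding command and assumes that govc cluster.rule.ls accepts the same -dc and -cluster flags:
USD govc cluster.rule.ls -dc MyDatacenter -cluster MyCluster
The openshift4-control-plane-group rule should appear in the output while vSphere migrates the control plane virtual machines to separate hosts.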
This might take some time while vSphere reconciles the new rule. Successful command completion is shown in the following procedure. Note The migration occurs automatically and might cause brief OpenShift API outage or latency until the migration finishes. The vSphere DRS anti-affinity rules need to be updated manually in the event of a control plane VM name change or migration to a new vSphere Cluster. Procedure Remove any existing DRS anti-affinity rule by running the following command: USD govc cluster.rule.remove \ -name openshift4-control-plane-group \ -dc MyDatacenter -cluster MyCluster Example Output [13-10-22 09:33:24] Reconfigure /MyDatacenter/host/MyCluster...OK Create the rule again with updated names by running the following command: USD govc cluster.rule.create \ -name openshift4-control-plane-group \ -dc MyDatacenter -cluster MyOtherCluster \ -enable \ -anti-affinity master-0 master-1 master-2 21.5.24. Backing up VMware vSphere volumes OpenShift Container Platform provisions new volumes as independent persistent disks to freely attach and detach the volume on any node in the cluster. As a consequence, it is not possible to back up volumes that use snapshots, or to restore volumes from snapshots. See Snapshot Limitations for more information. Procedure To create a backup of persistent volumes: Stop the application that is using the persistent volume. Clone the persistent volume. Restart the application. Create a backup of the cloned volume. Delete the cloned volume. 21.5.25. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.11, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 21.5.26. steps Customize your cluster . If necessary, you can opt out of remote health reporting . Set up your registry and configure registry storage . Optional: View the events from the vSphere Problem Detector Operator to determine if the cluster has permission or storage configuration issues. 21.6. Installing a cluster on vSphere with network customizations In OpenShift Container Platform version 4.11, you can install a cluster on VMware vSphere infrastructure that you provision with customized network configuration options. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. Note OpenShift Container Platform supports deploying a cluster to a single VMware vCenter only. Deploying a cluster with machines/machine sets on multiple vCenters is not supported. You must set most of the network configuration parameters during installation, and you can modify only kubeProxy configuration parameters in a running cluster. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. 
Installing a cluster with infrastructure you provide requires knowledge of the vSphere platform and the installation process of OpenShift Container Platform. Use the user-provisioned infrastructure installation instructions as a guide; you are free to create the required resources through other methods. 21.6.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . Completing the installation requires that you upload the Red Hat Enterprise Linux CoreOS (RHCOS) OVA on vSphere hosts. The machine from which you complete this process requires access to port 443 on the vCenter and ESXi hosts. Verify that port 443 is accessible. If you use a firewall, you confirmed with the administrator that port 443 is accessible. Control plane nodes must be able to reach vCenter and ESXi hosts on port 443 for the installation to succeed. If you use a firewall, you configured it to allow the sites that your cluster requires access to. 21.6.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.11, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 21.6.3. VMware vSphere infrastructure requirements You must install the OpenShift Container Platform cluster on a VMware vSphere version 7 instance that meets the requirements for the components that you use. Note OpenShift Container Platform version 4.11 does not support VMware vSphere version 8.0. You can host the VMware vSphere infrastructure on-premise or on a VMware Cloud Verified provider that meets the requirements outlined in the following table: Table 21.48. Version requirements for vSphere virtual environments Virtual environment product Required version VM hardware version 15 or later vSphere ESXi hosts 7 vCenter host 7 Important Installing a cluster on VMware vSphere version 7.0 Update 1 or earlier is now deprecated. These versions are still fully supported, but version 4.11 of OpenShift Container Platform requires vSphere virtual hardware version 15 or later. Hardware version 15 is now the default for vSphere virtual machines in OpenShift Container Platform. To update the hardware version for your vSphere nodes, see the "Updating hardware on nodes running in vSphere" article. If your vSphere nodes are below hardware version 15 or your VMware vSphere version is earlier than 6.7.3, upgrading from OpenShift Container Platform 4.10 to OpenShift Container Platform 4.11 is not available. Table 21.49. 
Minimum supported vSphere version for VMware components Component Minimum supported versions Description Hypervisor vSphere 7 with HW version 15 This version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. For more information about supported hardware on the latest version of Red Hat Enterprise Linux (RHEL) that is compatible with RHCOS, see Hardware on the Red Hat Customer Portal. Storage with in-tree drivers vSphere 7 This plugin creates vSphere storage by using the in-tree storage drivers for vSphere included in OpenShift Container Platform. Optional: Networking (NSX-T) vSphere 7 vSphere 7 is required for OpenShift Container Platform. For more information about the compatibility of NSX and OpenShift Container Platform, see the Release Notes section of VMware's NSX container plugin documentation . Important You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation. 21.6.4. VMware vSphere CSI Driver Operator requirements To install the vSphere CSI Driver Operator, the following requirements must be met: VMware vSphere version 7.0 Update 1 or later Virtual machines of hardware version 15 or later No third-party vSphere CSI driver already installed in the cluster Important If a third-party vSphere CSI driver is present in the cluster, OpenShift Container Platform does not overwrite it. If you continue with the third-party vSphere CSI driver when upgrading to the major version of OpenShift Container Platform, the oc CLI prompts you with the following message: VSphereCSIDriverOperatorCRUpgradeable: VMwareVSphereControllerUpgradeable: found existing unsupported csi.vsphere.vmware.com driver The message informs you that Red Hat does not support the third-party vSphere CSI driver during an OpenShift Container Platform upgrade operation. You can choose to ignore this message and continue with the upgrade operation. Additional resources To remove a third-party vSphere CSI driver, see Removing a third-party vSphere CSI Driver . To update the hardware version for your vSphere nodes, see Updating hardware on nodes running in vSphere . 21.6.5. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 21.6.5.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 21.50. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. 
However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 8.6 and later. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 8 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 21.6.5.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 21.51. Minimum resource requirements Machine Operating System vCPU Virtual RAM Storage Input/Output Per Second (IOPS) [1] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [2] 2 8 GB 100 GB 300 OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 21.6.5.3. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 21.6.5.4. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines. Note If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. 
These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 21.6.5.4.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 21.6.5.4.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 21.52. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 21.53. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 21.54. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports Ethernet adaptor hardware address requirements When provisioning VMs for the cluster, the ethernet interfaces configured for each VM must use a MAC address from the VMware Organizationally Unique Identifier (OUI) allocation ranges: 00:05:69:00:00:00 to 00:05:69:FF:FF:FF 00:0c:29:00:00:00 to 00:0c:29:FF:FF:FF 00:1c:14:00:00:00 to 00:1c:14:FF:FF:FF 00:50:56:00:00:00 to 00:50:56:3F:FF:FF If a MAC address outside the VMware OUI is used, the cluster installation will not succeed. NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. 
If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines read the information and can sync the clock with the NTP servers. Additional resources Configuring chrony time service 21.6.5.5. User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. Note It is recommended to use a DHCP server to provide the hostnames to each cluster node. See the DHCP recommendations for user-provisioned infrastructure section for more information. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 21.55. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Control plane machines <master><n>.<cluster_name>.<base_domain>. 
DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <worker><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 21.6.5.5.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 21.13. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; master0.ocp4.example.com. IN A 192.168.1.97 5 master1.ocp4.example.com. IN A 192.168.1.98 6 master2.ocp4.example.com. IN A 192.168.1.99 7 ; worker0.ocp4.example.com. IN A 192.168.1.11 8 worker1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 21.14. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. 
root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR master0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR master1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR master2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com. 8 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. 21.6.5.6. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application ingress load balancing infrastructure. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the API routes. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 21.56. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. 
Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the ingress routes. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 21.57. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 21.6.5.6.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 21.15. 
Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. The /readyz health check matches the API server health check probe requirement described earlier. 2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 3 Port 22623 handles the machine config server traffic and points to the control plane machines on port 22623. 5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 21.6.6. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. This includes configuring IP networking and network connectivity for your cluster nodes, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section.
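The procedure that follows recommends DHCP reservations that provide each cluster node with a persistent IP address, a hostname, and DNS server information. As a rough sketch only, such reservations might look like the following ISC dhcpd excerpt. The subnet, gateway, lease times, and MAC addresses are illustrative assumptions, the node and DNS server addresses reuse the values from the example DNS zone files in this document, and any equivalent DHCP server configuration is acceptable. Remember that the MAC addresses must fall within the VMware OUI ranges listed earlier.

# Illustrative ISC dhcpd excerpt: persistent reservations for the cluster nodes.
subnet 192.168.1.0 netmask 255.255.255.0 {
  option routers 192.168.1.1;               # assumed gateway
  option domain-name-servers 192.168.1.5;   # DNS server from the example zone files
  option ntp-servers 192.168.1.5;           # optional: lets chrony learn the NTP server from DHCP
  default-lease-time 86400;
  max-lease-time 86400;
}
host bootstrap {
  hardware ethernet 00:50:56:3a:00:96;      # assumed MAC, within the VMware OUI range
  fixed-address 192.168.1.96;
  option host-name "bootstrap.ocp4.example.com";
}
host master0 {
  hardware ethernet 00:50:56:3a:00:97;
  fixed-address 192.168.1.97;
  option host-name "master0.ocp4.example.com";
}
host worker0 {
  hardware ethernet 00:50:56:3a:00:11;
  fixed-address 192.168.1.11;
  option host-name "worker0.ocp4.example.com";
}
# Repeat host blocks for the remaining control plane and compute machines.

The important property is that each node's MAC address maps to a fixed IP address and hostname that match the DNS records you create for the cluster.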
Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure If you are using DHCP to provide the IP networking configuration to your cluster nodes, configure your DHCP service. Add persistent IP addresses for the nodes to your DHCP server configuration. In your configuration, match the MAC address of the relevant network interface to the intended IP address for each node. When you use DHCP to configure IP addressing for the cluster machines, the machines also obtain the DNS server information through DHCP. Define the persistent DNS server address that is used by the cluster nodes through your DHCP server configuration. Note If you are not using a DHCP service, you must provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. Define the hostnames of your cluster nodes in your DHCP server configuration. See the Setting the cluster node hostnames through DHCP section for details about hostname considerations. Note If you are not using a DHCP service, the cluster nodes obtain their hostname through a reverse DNS lookup. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Important By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers. Setup the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. 
See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. 21.6.7. Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure. Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 604800 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com. 604800 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: USD dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console: USD dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name. Check that the result points to the IP address of the bootstrap node: USD dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain> Example output bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 
1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output 96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. 21.6.8. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. 
If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 21.6.9. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 21.6.10. Manually creating the installation configuration file For user-provisioned installations of OpenShift Container Platform, you manually generate your installation configuration file. Important The Cluster Cloud Controller Manager Operator performs a connectivity check on a provided hostname or IP address. Ensure that you specify a hostname or an IP address to a reachable vCenter server. If you provide metadata to a non-existent vCenter server, installation of the cluster fails at the bootstrap stage. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. 
If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Note For some platform types, you can alternatively run ./openshift-install create install-config --dir <installation_directory> to generate an install-config.yaml file. You can provide details about your cluster configuration at the prompts. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. 21.6.10.1. Sample install-config.yaml file for VMware vSphere You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 name: worker replicas: 0 3 controlPlane: 4 name: master replicas: 3 5 metadata: name: test 6 platform: vsphere: vcenter: your.vcenter.server 7 username: username 8 password: password 9 datacenter: datacenter 10 defaultDatastore: datastore 11 folder: "/<datacenter_name>/vm/<folder_name>/<subfolder_name>" 12 resourcePool: "/<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>" 13 diskType: thin 14 fips: false 15 pullSecret: '{"auths": ...}' 16 sshKey: 'ssh-ed25519 AAAA...' 17 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, ( - ), and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used. 3 You must set the value of the replicas parameter to 0 . This parameter controls the number of workers that the cluster creates and manages for you, which are functions that the cluster does not perform when you use user-provisioned infrastructure. You must manually deploy worker machines for the cluster to use before you finish installing OpenShift Container Platform. 5 The number of control plane machines that you add to the cluster. Because the cluster uses this values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 6 The cluster name that you specified in your DNS records. 7 The fully-qualified hostname or IP address of the vCenter server. Important The Cluster Cloud Controller Manager Operator performs a connectivity check on a provided hostname or IP address. Ensure that you specify a hostname or an IP address to a reachable vCenter server. If you provide metadata to a non-existent vCenter server, installation of the cluster fails at the bootstrap stage. 8 The name of the user for accessing the server. 9 The password associated with the vSphere user. 10 The vSphere datacenter. 
11 The default vSphere datastore to use. 12 Optional parameter: For installer-provisioned infrastructure, the absolute path of an existing folder where the installation program creates the virtual machines, for example, /<datacenter_name>/vm/<folder_name>/<subfolder_name> . If you do not provide this value, the installation program creates a top-level folder in the datacenter virtual machine folder that is named with the infrastructure ID. If you are providing the infrastructure for the cluster, omit this parameter. 13 Optional parameter: For installer-provisioned infrastructure, the absolute path of an existing resource pool where the installation program creates the virtual machines, for example, /<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name> . If you do not specify a value, resources are installed in the root resource pool of the cluster. If you are providing the infrastructure for the cluster, omit this parameter. 14 The vSphere disk provisioning method. 15 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. 16 The pull secret that you obtained from OpenShift Cluster Manager Hybrid Cloud Console . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 17 The public portion of the default SSH key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). 21.6.10.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ).
Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. You must include vCenter's IP address and the IP range that you use for its machines. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 21.6.11. Network configuration phases There are two phases prior to OpenShift Container Platform installation where you can customize the network configuration. Phase 1 You can customize the following network-related fields in the install-config.yaml file before you create the manifest files: networking.networkType networking.clusterNetwork networking.serviceNetwork networking.machineNetwork For more information on these fields, refer to Installation configuration parameters . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. Important The CIDR range 172.17.0.0/16 is reserved by libVirt. You cannot use this range or any range that overlaps with this range for any networks in your cluster. Phase 2 After creating the manifest files by running openshift-install create manifests , you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You can use the manifest to specify advanced network configuration. You cannot override the values specified in phase 1 in the install-config.yaml file during phase 2. However, you can further customize the cluster network provider during phase 2. 21.6.12. 
Specifying advanced network configuration You can use advanced network configuration for your cluster network provider to integrate your cluster into your existing network environment. You can specify advanced network configuration only before you install the cluster. Important Customizing your network configuration by modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported. Prerequisites You have created the install-config.yaml file and completed any modifications to it. Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> 1 1 <installation_directory> specifies the name of the directory that contains the install-config.yaml file for your cluster. Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: Specify the advanced network configuration for your cluster in the cluster-network-03-config.yml file, such as in the following examples: Specify a different VXLAN port for the OpenShift SDN network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800 Enable IPsec for the OVN-Kubernetes network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {} Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program consumes the manifests/ directory when you create the Ignition config files. Remove the Kubernetes manifest files that define the control plane machines and compute machineSets: USD rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage these resources yourself, you do not have to initialize them. You can preserve the MachineSet files to create compute machines by using the machine API, but you must update references to them to match your environment. 21.6.13. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group and these fields cannot be changed: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network provider, such as OpenShift SDN or OVN-Kubernetes. You can specify the cluster network provider configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 21.6.13.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 21.58. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . 
spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes Container Network Interface (CNI) network providers support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the Container Network Interface (CNI) cluster network provider for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network provider, the kube-proxy configuration has no effect. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 21.59. defaultNetwork object Field Type Description type string Either OpenShiftSDN or OVNKubernetes . The cluster network provider is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OpenShift SDN Container Network Interface (CNI) cluster network provider by default. openshiftSDNConfig object This object is only valid for the OpenShift SDN cluster network provider. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes cluster network provider. Configuration for the OpenShift SDN CNI cluster network provider The following table describes the configuration fields for the OpenShift SDN Container Network Interface (CNI) cluster network provider. Table 21.60. openshiftSDNConfig object Field Type Description mode string Configures the network isolation mode for OpenShift SDN. The default value is NetworkPolicy . The values Multitenant and Subnet are available for backwards compatibility with OpenShift Container Platform 3.x but are not recommended. This value cannot be changed after cluster installation. mtu integer The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1450 . This value cannot be changed after cluster installation. vxlanPort integer The port to use for all VXLAN packets. The default value is 4789 . This value cannot be changed after cluster installation. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. 
For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number. On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999 . Example OpenShift SDN configuration defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789 Configuration for the OVN-Kubernetes CNI cluster network provider The following table describes the configuration fields for the OVN-Kubernetes CNI cluster network provider. Table 21.61. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify an empty object to enable IPsec encryption. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. Table 21.62. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 21.63. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. 
If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {} kubeProxyConfig object configuration The values for the kubeProxyConfig object are defined in the following table: Table 21.64. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 21.6.14. Creating the Ignition config files Because you must manually start the cluster machines, you must generate the Ignition config files that the cluster needs to make its machines. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Obtain the Ignition config files: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important If you created an install-config.yaml file, specify the directory that contains it. Otherwise, specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. The following files are generated in the directory: 21.6.15. Extracting the infrastructure name The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in VMware vSphere. If you plan to use the cluster identifier as the name of your virtual machine folder, you must extract it. 
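Because the infrastructure ID is typically reused later, for example as the name of the virtual machine folder, it can be convenient to capture the value that the following procedure extracts in a shell variable. The sketch below is optional and makes two assumptions that are not part of this procedure: the jq package is installed, and the govc CLI is available with its GOVC_URL, GOVC_USERNAME, and GOVC_PASSWORD environment variables already set if you choose to pre-create the folder with it.

# Capture the infrastructure ID from the installation metadata (requires jq).
INFRA_ID="$(jq -r .infraID <installation_directory>/metadata.json)"
echo "${INFRA_ID}"

# Optional illustration only: pre-create a VM folder named after the infrastructure ID
# (assumes the govc CLI is installed and configured; not required by this procedure).
govc folder.create "/<datacenter_name>/vm/${INFRA_ID}"

If you prefer to create the folder in the vSphere Client, as described later in this document, the variable alone is still a convenient way to copy the exact folder name.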
Prerequisites You obtained the OpenShift Container Platform installation program and the pull secret for your cluster. You generated the Ignition config files for your cluster. You installed the jq package. Procedure To extract and view the infrastructure name from the Ignition config file metadata, run the following command: USD jq -r .infraID <installation_directory>/metadata.json 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output openshift-vw9j6 1 1 The output of this command is your cluster name and a random string. 21.6.16. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on user-provisioned infrastructure on VMware vSphere, you must install Red Hat Enterprise Linux CoreOS (RHCOS) on vSphere hosts. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS machines have rebooted. Prerequisites You have obtained the Ignition config files for your cluster. You have access to an HTTP server that you can access from your computer and that the machines that you create can access. You have created a vSphere cluster . Procedure Upload the bootstrap Ignition config file, which is named <installation_directory>/bootstrap.ign , that the installation program created to your HTTP server. Note the URL of this file. Save the following secondary Ignition config file for your bootstrap node to your computer as <installation_directory>/merge-bootstrap.ign : { "ignition": { "config": { "merge": [ { "source": "<bootstrap_ignition_config_url>", 1 "verification": {} } ] }, "timeouts": {}, "version": "3.2.0" }, "networkd": {}, "passwd": {}, "storage": {}, "systemd": {} } 1 Specify the URL of the bootstrap Ignition config file that you hosted. When you create the virtual machine (VM) for the bootstrap machine, you use this Ignition config file. Locate the following Ignition config files that the installation program created: <installation_directory>/master.ign <installation_directory>/worker.ign <installation_directory>/merge-bootstrap.ign Convert the Ignition config files to Base64 encoding. Later in this procedure, you must add these files to the extra configuration parameter guestinfo.ignition.config.data in your VM. For example, if you use a Linux operating system, you can use the base64 command to encode the files. USD base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64 USD base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64 USD base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64 Important If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. Obtain the RHCOS OVA image. Images are available from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download an image with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image version that matches your OpenShift Container Platform version if it is available. 
The filename contains the OpenShift Container Platform version number in the format rhcos-vmware.<architecture>.ova . In the vSphere Client, create a folder in your datacenter to store your VMs. Click the VMs and Templates view. Right-click the name of your datacenter. Click New Folder New VM and Template Folder . In the window that is displayed, enter the folder name. If you did not specify an existing folder in the install-config.yaml file, then create a folder with the same name as the infrastructure ID. You use this folder name so vCenter dynamically provisions storage in the appropriate location for its Workspace configuration. In the vSphere Client, create a template for the OVA image and then clone the template as needed. Note In the following steps, you create a template and then clone the template for all of your cluster machines. You then provide the location for the Ignition config file for that cloned machine type when you provision the VMs. From the Hosts and Clusters tab, right-click your cluster name and select Deploy OVF Template . On the Select an OVF tab, specify the name of the RHCOS OVA file that you downloaded. On the Select a name and folder tab, set a Virtual machine name for your template, such as Template-RHCOS . Click the name of your vSphere cluster and select the folder you created in the step. On the Select a compute resource tab, click the name of your vSphere cluster. On the Select storage tab, configure the storage options for your VM. Select Thin Provision or Thick Provision , based on your storage preferences. Select the datastore that you specified in your install-config.yaml file. On the Select network tab, specify the network that you configured for the cluster, if available. When creating the OVF template, do not specify values on the Customize template tab or configure the template any further. Important Do not start the original VM template. The VM template must remain off and must be cloned for new RHCOS machines. Starting the VM template configures the VM template as a VM on the platform, which prevents it from being used as a template that machine sets can apply configurations to. Optional: Update the configured virtual hardware version in the VM template, if necessary. Follow Upgrading a virtual machine to the latest hardware version in the VMware documentation for more information. Important It is recommended that you update the hardware version of the VM template to version 15 before creating VMs from it, if necessary. Using hardware version 13 for your cluster nodes running on vSphere is now deprecated. If your imported template defaults to hardware version 13, you must ensure that your ESXi host is on 6.7U3 or later before upgrading the VM template to hardware version 15. If your vSphere version is less than 6.7U3, you can skip this upgrade step; however, a future version of OpenShift Container Platform is scheduled to remove support for hardware version 13 and vSphere versions less than 6.7U3. After the template deploys, deploy a VM for a machine in the cluster. Right-click the template name and click Clone Clone to Virtual Machine . On the Select a name and folder tab, specify a name for the VM. You might include the machine type in the name, such as control-plane-0 or compute-1 . Note Ensure that all virtual machine names across a vSphere installation are unique. On the Select a name and folder tab, select the name of the folder that you created for the cluster. 
On the Select a compute resource tab, select the name of a host in your datacenter. On the Select clone options tab, select Customize this virtual machine's hardware . Optional: On the Customize hardware tab, click VM Options Advanced . Important The following configuration suggestions are for example purposes only. As a cluster administrator, you must configure resources according to the resource demands placed on your cluster. To best manage cluster resources, consider creating a resource pool from the cluster's root resource pool. Override default DHCP networking in vSphere. To enable static IP networking: Set your static IP configuration: USD export IPCFG="ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]" Example command USD export IPCFG="ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8" Click Edit Configuration , and on the Configuration Parameters window, search the list of available parameters for steal clock accounting ( stealclock.enable ). Set the parameter to the value of TRUE . Enabling steal clock accounting can help with troubleshooting cluster issues. Click Add Configuration Params . Define the following parameter names and values: guestinfo.ignition.config.data : Paste the contents of the base64-encoded Ignition config file that you created earlier in this procedure for this machine type. guestinfo.ignition.config.data.encoding : Specify base64 . disk.EnableUUID : Specify TRUE . stealclock.enable : If this parameter was not defined, add it and specify TRUE . If you enabled static IP networking, also define guestinfo.afterburn.initrd.network-kargs : Specify the static IP configuration that you set in the IPCFG variable, for example ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8 . Create a child resource pool from the cluster's root resource pool. Perform resource allocation in this child resource pool. In the Virtual Hardware panel of the Customize hardware tab, modify the specified values as required. Ensure that the amount of RAM, CPU, and disk storage meets the minimum requirements for the machine type. Complete the configuration and power on the VM. Check the console output to verify that Ignition ran. Example output Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied Create the rest of the machines for your cluster by following the preceding steps for each machine. Important You must create the bootstrap and control plane machines at this time. Because some pods are deployed on compute machines by default, also create at least two compute machines before you install the cluster. 21.6.17. Adding more compute machines to a cluster in vSphere You can add more compute machines to a user-provisioned OpenShift Container Platform cluster on VMware vSphere. Prerequisites Obtain the base64-encoded Ignition file for your compute machines. You have access to the vSphere template that you created for your cluster. Procedure After the template deploys, deploy a VM for a machine in the cluster. Right-click the template's name and click Clone Clone to Virtual Machine . On the Select a name and folder tab, specify a name for the VM. You might include the machine type in the name, such as compute-1 . Note Ensure that all virtual machine names across a vSphere installation are unique. On the Select a name and folder tab, select the name of the folder that you created for the cluster. On the Select a compute resource tab, select the name of a host in your datacenter. On the Select clone options tab, select Customize this virtual machine's hardware . On the Customize hardware tab, click VM Options Advanced . Click Edit Configuration , and on the Configuration Parameters window, click Add Configuration Params . Define the following parameter names and values: guestinfo.ignition.config.data : Paste the contents of the base64-encoded compute Ignition config file for this machine type.
guestinfo.ignition.config.data.encoding : Specify base64 . disk.EnableUUID : Specify TRUE . In the Virtual Hardware panel of the Customize hardware tab, modify the specified values as required. Ensure that the amount of RAM, CPU, and disk storage meets the minimum requirements for the machine type. Also, make sure to select the correct network under Add network adapter if there are multiple networks available. Complete the configuration and power on the VM. Continue to create more compute machines for your cluster. 21.6.18. Disk partitioning In most cases, data partitions are originally created by installing RHCOS, rather than by installing another operating system. In such cases, the OpenShift Container Platform installer should be allowed to configure your disk partitions. However, there are two cases where you might want to intervene to override the default partitioning when installing an OpenShift Container Platform node: Create separate partitions: For greenfield installations on an empty disk, you might want to add separate storage to a partition. This is officially supported for making /var or a subdirectory of /var , such as /var/lib/etcd , a separate partition, but not both. Important For disk sizes larger than 100GB, and especially disk sizes larger than 1TB, create a separate /var partition. See "Creating a separate /var partition" and this Red Hat Knowledgebase article for more information. Important Kubernetes supports only two file system partitions. If you add more than one partition to the original configuration, Kubernetes cannot monitor all of them. Retain existing partitions: For a brownfield installation where you are reinstalling OpenShift Container Platform on an existing node and want to retain data partitions installed from your operating system, there are both boot arguments and options to coreos-installer that allow you to retain existing data partitions. Creating a separate /var partition In general, disk partitioning for OpenShift Container Platform should be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Important For disk sizes larger than 100GB, and especially larger than 1TB, create a separate /var partition. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config manifest that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation. 
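After you complete the following procedure and install the cluster, you can confirm that the separate partition exists on a node. The following is a minimal verification sketch; it assumes a node named <node_name> and uses a debug pod to inspect the host mounts:
$ oc debug node/<node_name> -- chroot /host findmnt /var
The output is expected to show /var mounted from the partition labeled var on the device that you specify in the Butane config.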
Procedure Create a directory to hold the OpenShift Container Platform installation files: USD mkdir USDHOME/clusterconfig Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted: USD openshift-install create manifests --dir USDHOME/clusterconfig ? SSH Public Key ... USD ls USDHOME/clusterconfig/openshift/ 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ... Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.11.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum value of 25000 mebibytes is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for worker nodes if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories: USD openshift-install create ignition-configs --dir USDHOME/clusterconfig USD ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign Now you can use the Ignition config files as input to the vSphere installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems. 21.6.19. Updating the bootloader using bootupd To update the bootloader by using bootupd , you must either install bootupd on RHCOS machines manually or provide a machine config with the enabled systemd unit. Unlike grubby or other bootloader tools, bootupd does not manage kernel space configuration such as passing kernel arguments. After you have installed bootupd , you can manage it remotely from the OpenShift Container Platform cluster. Note It is recommended that you use bootupd only on bare metal or virtualized hypervisor installations, such as for protection against the BootHole vulnerability. Manual install method You can manually install bootupd by using the bootupctl command-line tool.
Inspect the system status: # bootupctl status Example output for x86_64 Component EFI Installed: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 Update: At latest version Example output for aarch64 Component EFI Installed: grub2-efi-aa64-1:2.02-99.el8_4.1.aarch64,shim-aa64-15.4-2.el8_1.aarch64 Update: At latest version RHCOS images created without bootupd installed on them require an explicit adoption phase. If the system status is Adoptable , perform the adoption: # bootupctl adopt-and-update Example output Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 If an update is available, apply the update so that the changes take effect on the reboot: # bootupctl update Example output Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 Machine config method Another way to enable bootupd is by providing a machine config. Provide a machine config file with the enabled systemd unit, as shown in the following example: Example output variant: rhcos version: 1.1.0 systemd: units: - name: custom-bootupd-auto.service enabled: true contents: | [Unit] Description=Bootupd automatic update [Service] ExecStart=/usr/bin/bootupctl update RemainAfterExit=yes [Install] WantedBy=multi-user.target 21.6.20. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Your machines have direct internet access or have an HTTP or HTTPS proxy available. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.24.0 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. 21.6.21. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. 
Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 21.6.22. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.24.0 master-1 Ready master 63m v1.24.0 master-2 Ready master 64m v1.24.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. 
To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.24.0 master-1 Ready master 73m v1.24.0 master-2 Ready master 74m v1.24.0 worker-0 Ready worker 11m v1.24.0 worker-1 Ready worker 11m v1.24.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 21.6.22.1. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.11.0 True False False 19m baremetal 4.11.0 True False False 37m cloud-credential 4.11.0 True False False 40m cluster-autoscaler 4.11.0 True False False 37m config-operator 4.11.0 True False False 38m console 4.11.0 True False False 26m csi-snapshot-controller 4.11.0 True False False 37m dns 4.11.0 True False False 37m etcd 4.11.0 True False False 36m image-registry 4.11.0 True False False 31m ingress 4.11.0 True False False 30m insights 4.11.0 True False False 31m kube-apiserver 4.11.0 True False False 26m kube-controller-manager 4.11.0 True False False 36m kube-scheduler 4.11.0 True False False 36m kube-storage-version-migrator 4.11.0 True False False 37m machine-api 4.11.0 True False False 29m machine-approver 4.11.0 True False False 37m machine-config 4.11.0 True False False 36m marketplace 4.11.0 True False False 37m monitoring 4.11.0 True False False 29m network 4.11.0 True False False 38m node-tuning 4.11.0 True False False 37m openshift-apiserver 4.11.0 True False False 32m openshift-controller-manager 4.11.0 True False False 30m openshift-samples 4.11.0 True False False 32m operator-lifecycle-manager 4.11.0 True False False 37m operator-lifecycle-manager-catalog 4.11.0 True False False 37m operator-lifecycle-manager-packageserver 4.11.0 True False False 32m service-ca 4.11.0 True False False 38m storage 4.11.0 True False False 37m Configure the Operators that are not available. 21.6.22.2. 
Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . 21.6.22.3. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 21.6.22.3.1. Configuring block registry storage for VMware vSphere To allow the image registry to use block storage types such as vSphere Virtual Machine Disk (VMDK) during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes are supported but not recommended for use with image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. Procedure Enter the following command to set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy, and runs with only 1 replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Enter the following command to create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Enter the following command to edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 By creating a custom PVC, you can leave the claim field blank for the default automatic creation of an image-registry-storage PVC. For instructions about configuring registry storage so that it references the correct PVC, see Configuring the registry for vSphere . 21.6.23. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. 
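Note If the Image Registry Operator bootstrapped itself as Removed , as described in the preceding image registry sections, remember to switch the managementState back to Managed after you configure registry storage so that the registry becomes available. The following is a minimal sketch of that change, assuming the default cluster configuration resource:
$ oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"managementState":"Managed"}}'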
Prerequisites Your control plane has initialized. You have completed the initial Operator configuration. Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.11.0 True False False 19m baremetal 4.11.0 True False False 37m cloud-credential 4.11.0 True False False 40m cluster-autoscaler 4.11.0 True False False 37m config-operator 4.11.0 True False False 38m console 4.11.0 True False False 26m csi-snapshot-controller 4.11.0 True False False 37m dns 4.11.0 True False False 37m etcd 4.11.0 True False False 36m image-registry 4.11.0 True False False 31m ingress 4.11.0 True False False 30m insights 4.11.0 True False False 31m kube-apiserver 4.11.0 True False False 26m kube-controller-manager 4.11.0 True False False 36m kube-scheduler 4.11.0 True False False 36m kube-storage-version-migrator 4.11.0 True False False 37m machine-api 4.11.0 True False False 29m machine-approver 4.11.0 True False False 37m machine-config 4.11.0 True False False 36m marketplace 4.11.0 True False False 37m monitoring 4.11.0 True False False 29m network 4.11.0 True False False 38m node-tuning 4.11.0 True False False 37m openshift-apiserver 4.11.0 True False False 32m openshift-controller-manager 4.11.0 True False False 30m openshift-samples 4.11.0 True False False 32m operator-lifecycle-manager 4.11.0 True False False 37m operator-lifecycle-manager-catalog 4.11.0 True False False 37m operator-lifecycle-manager-packageserver 4.11.0 True False False 32m service-ca 4.11.0 True False False 38m storage 4.11.0 True False False 37m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. 
To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Post-installation machine configuration tasks documentation for more information. You can add extra compute machines after the cluster installation is completed by following Adding compute machines to vSphere . 21.6.24. Configuring vSphere DRS anti-affinity rules for control plane nodes vSphere Distributed Resource Scheduler (DRS) anti-affinity rules can be configured to support higher availability of OpenShift Container Platform control plane nodes. Anti-affinity rules ensure that the vSphere Virtual Machines for the OpenShift Container Platform control plane nodes are not scheduled to the same vSphere Host. Important The following information applies to compute DRS only and does not apply to storage DRS. The govc` command is an open-source command available from VMware; it is not available from Red Hat. The govc command is not supported by the Red Hat support. Instructions for downloading and installing govc are found on the VMware documentation website. Create an anti-affinity rule by running the following command: Example command USD govc cluster.rule.create \ -name openshift4-control-plane-group \ -dc MyDatacenter -cluster MyCluster \ -enable \ -anti-affinity master-0 master-1 master-2 After creating the rule, your control plane nodes are automatically migrated by vSphere so they are not running on the same hosts. This might take some time while vSphere reconciles the new rule. Successful command completion is shown in the following procedure. Note The migration occurs automatically and might cause brief OpenShift API outage or latency until the migration finishes. The vSphere DRS anti-affinity rules need to be updated manually in the event of a control plane VM name change or migration to a new vSphere Cluster. Procedure Remove any existing DRS anti-affinity rule by running the following command: USD govc cluster.rule.remove \ -name openshift4-control-plane-group \ -dc MyDatacenter -cluster MyCluster Example Output [13-10-22 09:33:24] Reconfigure /MyDatacenter/host/MyCluster...OK Create the rule again with updated names by running the following command: USD govc cluster.rule.create \ -name openshift4-control-plane-group \ -dc MyDatacenter -cluster MyOtherCluster \ -enable \ -anti-affinity master-0 master-1 master-2 21.6.25. Backing up VMware vSphere volumes OpenShift Container Platform provisions new volumes as independent persistent disks to freely attach and detach the volume on any node in the cluster. 
As a consequence, it is not possible to back up volumes that use snapshots, or to restore volumes from snapshots. See Snapshot Limitations for more information. Procedure To create a backup of persistent volumes: Stop the application that is using the persistent volume. Clone the persistent volume. Restart the application. Create a backup of the cloned volume. Delete the cloned volume. 21.6.26. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.11, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 21.6.27. steps Customize your cluster . If necessary, you can opt out of remote health reporting . Set up your registry and configure registry storage . Optional: View the events from the vSphere Problem Detector Operator to determine if the cluster has permission or storage configuration issues. 21.7. Installing a cluster on vSphere in a restricted network In OpenShift Container Platform 4.11, you can install a cluster on VMware vSphere infrastructure in a restricted network by creating an internal mirror of the installation release content. Note OpenShift Container Platform supports deploying a cluster to a single VMware vCenter only. Deploying a cluster with machines/machine sets on multiple vCenters is not supported. 21.7.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You created a registry on your mirror host and obtained the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. You provisioned persistent storage for your cluster. To deploy a private image registry, your storage must provide the ReadWriteMany access mode. The OpenShift Container Platform installer requires access to port 443 on the vCenter and ESXi hosts. You verified that port 443 is accessible. If you use a firewall, you confirmed with the administrator that port 443 is accessible. Control plane nodes must be able to reach vCenter and ESXi hosts on port 443 for the installation to succeed. If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. Note If you are configuring a proxy, be sure to also review this site list. 21.7.2. About installations in restricted networks In OpenShift Container Platform 4.11, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. 
If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. 21.7.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 21.7.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.11, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 21.7.4. VMware vSphere infrastructure requirements You must install the OpenShift Container Platform cluster on a VMware vSphere version 7 instance that meets the requirements for the components that you use. Note OpenShift Container Platform version 4.11 does not support VMware vSphere version 8.0. You can host the VMware vSphere infrastructure on-premise or on a VMware Cloud Verified provider that meets the requirements outlined in the following table: Table 21.65. Version requirements for vSphere virtual environments Virtual environment product Required version VM hardware version 15 or later vSphere ESXi hosts 7 vCenter host 7 Important Installing a cluster on VMware vSphere version 7.0 Update 1 or earlier is now deprecated. These versions are still fully supported, but version 4.11 of OpenShift Container Platform requires vSphere virtual hardware version 15 or later. Hardware version 15 is now the default for vSphere virtual machines in OpenShift Container Platform. To update the hardware version for your vSphere nodes, see the "Updating hardware on nodes running in vSphere" article. If your vSphere nodes are below hardware version 15 or your VMware vSphere version is earlier than 6.7.3, upgrading from OpenShift Container Platform 4.10 to OpenShift Container Platform 4.11 is not available. Table 21.66. 
Minimum supported vSphere version for VMware components Component Minimum supported versions Description Hypervisor vSphere 7 with HW version 15 This version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. For more information about supported hardware on the latest version of Red Hat Enterprise Linux (RHEL) that is compatible with RHCOS, see Hardware on the Red Hat Customer Portal. Storage with in-tree drivers vSphere 7 This plugin creates vSphere storage by using the in-tree storage drivers for vSphere included in OpenShift Container Platform. Optional: Networking (NSX-T) vSphere 7 vSphere 7 is required for OpenShift Container Platform. For more information about the compatibility of NSX and OpenShift Container Platform, see the Release Notes section of VMware's NSX container plugin documentation . Important You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation. 21.7.5. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Review the following details about the required network ports. Table 21.67. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 virtual extensible LAN (VXLAN) 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 21.68. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 21.69. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports 21.7.6. VMware vSphere CSI Driver Operator requirements To install the vSphere CSI Driver Operator, the following requirements must be met: VMware vSphere version 7.0 Update 1 or later Virtual machines of hardware version 15 or later No third-party vSphere CSI driver already installed in the cluster Important If a third-party vSphere CSI driver is present in the cluster, OpenShift Container Platform does not overwrite it. If you continue with the third-party vSphere CSI driver when upgrading to the major version of OpenShift Container Platform, the oc CLI prompts you with the following message: VSphereCSIDriverOperatorCRUpgradeable: VMwareVSphereControllerUpgradeable: found existing unsupported csi.vsphere.vmware.com driver The message informs you that Red Hat does not support the third-party vSphere CSI driver during an OpenShift Container Platform upgrade operation. You can choose to ignore this message and continue with the upgrade operation. Additional resources To remove a third-party vSphere CSI driver, see Removing a third-party vSphere CSI Driver . To update the hardware version for your vSphere nodes, see Updating hardware on nodes running in vSphere . 21.7.7. 
vCenter requirements Before you install an OpenShift Container Platform cluster on your vCenter that uses infrastructure that the installer provisions, you must prepare your environment. Required vCenter account privileges To install an OpenShift Container Platform cluster in a vCenter, the installation program requires access to an account with privileges to read and create the required resources. Using an account that has global administrative privileges is the simplest way to access all of the necessary permissions. If you cannot use an account with global administrative privileges, you must create roles to grant the privileges necessary for OpenShift Container Platform cluster installation. While most of the privileges are always required, some are required only if you plan for the installation program to provision a folder to contain the OpenShift Container Platform cluster on your vCenter instance, which is the default behavior. You must create or amend vSphere roles for the specified objects to grant the required privileges. An additional role is required if the installation program is to create a vSphere virtual machine folder. Example 21.16. Roles and privileges required for installation in vSphere API vSphere object for role When required Required privileges in vSphere API vSphere vCenter Always Cns.Searchable InventoryService.Tagging.AttachTag InventoryService.Tagging.CreateCategory InventoryService.Tagging.CreateTag InventoryService.Tagging.DeleteCategory InventoryService.Tagging.DeleteTag InventoryService.Tagging.EditCategory InventoryService.Tagging.EditTag Sessions.ValidateSession StorageProfile.Update StorageProfile.View vSphere vCenter Cluster If VMs will be created in the cluster root Host.Config.Storage Resource.AssignVMToPool VApp.AssignResourcePool VApp.Import VirtualMachine.Config.AddNewDisk vSphere vCenter Resource Pool If an existing resource pool is provided Host.Config.Storage Resource.AssignVMToPool VApp.AssignResourcePool VApp.Import VirtualMachine.Config.AddNewDisk vSphere Datastore Always Datastore.AllocateSpace Datastore.Browse Datastore.FileManagement InventoryService.Tagging.ObjectAttachable vSphere Port Group Always Network.Assign Virtual Machine Folder Always InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone VirtualMachine.Provisioning.MarkAsTemplate VirtualMachine.Provisioning.DeployTemplate vSphere vCenter Datacenter If the installation program creates the virtual machine folder InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig 
VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone VirtualMachine.Provisioning.DeployTemplate VirtualMachine.Provisioning.MarkAsTemplate Folder.Create Folder.Delete Example 21.17. Roles and privileges required for installation in vCenter graphical user interface (GUI) vSphere object for role When required Required privileges in vCenter GUI vSphere vCenter Always Cns.Searchable "vSphere Tagging"."Assign or Unassign vSphere Tag" "vSphere Tagging"."Create vSphere Tag Category" "vSphere Tagging"."Create vSphere Tag" vSphere Tagging"."Delete vSphere Tag Category" "vSphere Tagging"."Delete vSphere Tag" "vSphere Tagging"."Edit vSphere Tag Category" "vSphere Tagging"."Edit vSphere Tag" Sessions."Validate session" "Profile-driven storage"."Profile-driven storage update" "Profile-driven storage"."Profile-driven storage view" vSphere vCenter Cluster If VMs will be created in the cluster root Host.Configuration."Storage partition configuration" Resource."Assign virtual machine to resource pool" VApp."Assign resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add new disk" vSphere vCenter Resource Pool If an existing resource pool is provided Host.Configuration."Storage partition configuration" Resource."Assign virtual machine to resource pool" VApp."Assign resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add new disk" vSphere Datastore Always Datastore."Allocate space" Datastore."Browse datastore" Datastore."Low level file operations" "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" vSphere Port Group Always Network."Assign network" Virtual Machine Folder Always "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" Resource."Assign virtual machine to resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add existing disk" "Virtual machine"."Change Configuration"."Add new disk" "Virtual machine"."Change Configuration"."Add or remove device" "Virtual machine"."Change Configuration"."Advanced configuration" "Virtual machine"."Change Configuration"."Set annotation" "Virtual machine"."Change Configuration"."Change CPU count" "Virtual machine"."Change Configuration"."Extend virtual disk" "Virtual machine"."Change Configuration"."Acquire disk lease" "Virtual machine"."Change Configuration"."Modify device settings" "Virtual machine"."Change Configuration"."Change Memory" "Virtual machine"."Change Configuration"."Remove disk" "Virtual machine"."Change Configuration".Rename "Virtual machine"."Change Configuration"."Reset guest information" "Virtual machine"."Change Configuration"."Change resource" "Virtual machine"."Change Configuration"."Change Settings" "Virtual machine"."Change Configuration"."Upgrade virtual machine compatibility" "Virtual machine".Interaction."Guest operating system management by VIX API" "Virtual machine".Interaction."Power off" "Virtual machine".Interaction."Power on" "Virtual machine".Interaction.Reset 
"Virtual machine"."Edit Inventory"."Create new" "Virtual machine"."Edit Inventory"."Create from existing" "Virtual machine"."Edit Inventory"."Remove" "Virtual machine".Provisioning."Clone virtual machine" "Virtual machine".Provisioning."Mark as template" "Virtual machine".Provisioning."Deploy template" vSphere vCenter Datacenter If the installation program creates the virtual machine folder "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" Resource."Assign virtual machine to resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add existing disk" "Virtual machine"."Change Configuration"."Add new disk" "Virtual machine"."Change Configuration"."Add or remove device" "Virtual machine"."Change Configuration"."Advanced configuration" "Virtual machine"."Change Configuration"."Set annotation" "Virtual machine"."Change Configuration"."Change CPU count" "Virtual machine"."Change Configuration"."Extend virtual disk" "Virtual machine"."Change Configuration"."Acquire disk lease" "Virtual machine"."Change Configuration"."Modify device settings" "Virtual machine"."Change Configuration"."Change Memory" "Virtual machine"."Change Configuration"."Remove disk" "Virtual machine"."Change Configuration".Rename "Virtual machine"."Change Configuration"."Reset guest information" "Virtual machine"."Change Configuration"."Change resource" "Virtual machine"."Change Configuration"."Change Settings" "Virtual machine"."Change Configuration"."Upgrade virtual machine compatibility" "Virtual machine".Interaction."Guest operating system management by VIX API" "Virtual machine".Interaction."Power off" "Virtual machine".Interaction."Power on" "Virtual machine".Interaction.Reset "Virtual machine"."Edit Inventory"."Create new" "Virtual machine"."Edit Inventory"."Create from existing" "Virtual machine"."Edit Inventory"."Remove" "Virtual machine".Provisioning."Clone virtual machine" "Virtual machine".Provisioning."Deploy template" "Virtual machine".Provisioning."Mark as template" Folder."Create folder" Folder."Delete folder" Additionally, the user requires some ReadOnly permissions, and some of the roles require permission to propogate the permissions to child objects. These settings vary depending on whether or not you install the cluster into an existing folder. Example 21.18. Required permissions and propagation settings vSphere object When required Propagate to children Permissions required vSphere vCenter Always False Listed required privileges vSphere vCenter Datacenter Existing folder False ReadOnly permission Installation program creates the folder True Listed required privileges vSphere vCenter Cluster Existing resource pool False ReadOnly permission VMs in cluster root True Listed required privileges vSphere vCenter Datastore Always False Listed required privileges vSphere Switch Always False ReadOnly permission vSphere Port Group Always False Listed required privileges vSphere vCenter Virtual Machine Folder Existing folder True Listed required privileges vSphere vCenter Resource Pool Existing resource pool True Listed required privileges For more information about creating an account with only the required privileges, see vSphere Permissions and User Management Tasks in the vSphere documentation. Using OpenShift Container Platform with vMotion If you intend on using vMotion in your vSphere environment, consider the following before installing a OpenShift Container Platform cluster. 
OpenShift Container Platform generally supports compute-only vMotion, where generally implies that you meet all VMware best practices for vMotion. To help ensure the uptime of your compute and control plane nodes, ensure that you follow the VMware best practices for vMotion, and use VMware anti-affinity rules to improve the availability of OpenShift Container Platform during maintenance or hardware issues. For more information about vMotion and anti-affinity rules, see the VMware vSphere documentation for vMotion networking requirements and VM anti-affinity rules . Using Storage vMotion can cause issues and is not supported. If you are using vSphere volumes in your pods, migrating a VM across datastores, either manually or through Storage vMotion, causes invalid references within OpenShift Container Platform persistent volume (PV) objects that can result in data loss. OpenShift Container Platform does not support selective migration of VMDKs across datastores, using datastore clusters for VM provisioning or for dynamic or static provisioning of PVs, or using a datastore that is part of a datastore cluster for dynamic or static provisioning of PVs. Cluster resources When you deploy an OpenShift Container Platform cluster that uses installer-provisioned infrastructure, the installation program must be able to create several resources in your vCenter instance. A standard OpenShift Container Platform installation creates the following vCenter resources: 1 Folder 1 Tag category 1 Tag Virtual machines: 1 template 1 temporary bootstrap node 3 control plane nodes 3 compute machines Although these resources use 856 GB of storage, the bootstrap node is destroyed during the cluster installation process. A minimum of 800 GB of storage is required to use a standard cluster. If you deploy more compute machines, the OpenShift Container Platform cluster will use more storage. Cluster limits Available resources vary between clusters. The number of possible clusters within a vCenter is limited primarily by available storage space and any limitations on the number of required resources. Be sure to consider both limitations to the vCenter resources that the cluster creates and the resources that you require to deploy a cluster, such as IP addresses and networks. Networking requirements You must use the Dynamic Host Configuration Protocol (DHCP) for the network and ensure that the DHCP server is configured to provide persistent IP addresses to the cluster machines. In the DHCP lease, you must configure the DHCP to use the default gateway. All nodes must be in the same VLAN. You cannot scale the cluster using a second VLAN as a Day 2 operation. The VM in your restricted network must have access to vCenter so that it can provision and manage nodes, persistent volume claims (PVCs), and other resources. Additionally, you must create the following networking resources before you install the OpenShift Container Platform cluster: Note It is recommended that each OpenShift Container Platform node in the cluster must have access to a Network Time Protocol (NTP) server that is discoverable via DHCP. Installation is possible without an NTP server. However, asynchronous server clocks will cause errors, which NTP server prevents. Required IP Addresses An installer-provisioned vSphere installation requires two static IP addresses: The API address is used to access the cluster API. The Ingress address is used for cluster ingress traffic. 
You must provide these IP addresses to the installation program when you install the OpenShift Container Platform cluster. DNS records You must create DNS records for two static IP addresses in the appropriate DNS server for the vCenter instance that hosts your OpenShift Container Platform cluster. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify when you install the cluster. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 21.70. Required DNS records Component Record Description API VIP api.<cluster_name>.<base_domain>. This DNS A/AAAA or CNAME record must point to the load balancer for the control plane machines. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. Ingress VIP *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that points to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. 21.7.8. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure that your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.
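If an agent is already running for your local user, you can check whether your identity is already loaded by listing the keys that the agent currently holds. This is only a quick check; the output shown is illustrative:
USD ssh-add -l
Example output
256 SHA256:<fingerprint> /home/<you>/.ssh/id_ed25519 (ED25519)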
Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 21.7.9. Adding vCenter root CA certificates to your system trust Because the installation program requires access to your vCenter's API, you must add your vCenter's trusted root CA certificates to your system trust before you install an OpenShift Container Platform cluster. Procedure From the vCenter home page, download the vCenter's root CA certificates. Click Download trusted root CA certificates in the vSphere Web Services SDK section. The <vCenter>/certs/download.zip file downloads. Extract the compressed file that contains the vCenter root CA certificates. The contents of the compressed file resemble the following file structure: Add the files for your operating system to the system trust. For example, on a Fedora operating system, run the following command: # cp certs/lin/* /etc/pki/ca-trust/source/anchors Update your system trust. For example, on a Fedora operating system, run the following command: # update-ca-trust extract 21.7.10. Creating the RHCOS image for restricted network installations Download the Red Hat Enterprise Linux CoreOS (RHCOS) image to install OpenShift Container Platform on a restricted network VMware vSphere environment. Prerequisites Obtain the OpenShift Container Platform installation program. For a restricted network installation, the program is on your mirror registry host. Procedure Log in to the Red Hat Customer Portal's Product Downloads page . Under Version , select the most recent release of OpenShift Container Platform 4.11 for RHEL 8. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Download the Red Hat Enterprise Linux CoreOS (RHCOS) - vSphere image. Upload the image you downloaded to a location that is accessible from the bastion server. The image is now available for a restricted installation. Note the image name or location for use in OpenShift Container Platform deployment. 21.7.11. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on VMware vSphere. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. Have the imageContentSources values that were generated during mirror registry creation. Obtain the contents of the certificate for your mirror registry. Retrieve a Red Hat Enterprise Linux CoreOS (RHCOS) image and upload it to an accessible location. 
Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select vsphere as the platform to target. Specify the name of your vCenter instance. Specify the user name and password for the vCenter account that has the required permissions to create the cluster. The installation program connects to your vCenter instance. Select the datacenter in your vCenter instance to connect to. Select the default vCenter datastore to use. Select the vCenter cluster to install the OpenShift Container Platform cluster in. The installation program uses the root resource pool of the vSphere cluster as the default resource pool. Select the network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. Enter the virtual IP address that you configured for control plane API access. Enter the virtual IP address that you configured for cluster ingress. Enter the base domain. This base domain must be the same one that you used in the DNS records that you configured. Enter a descriptive name for your cluster. The cluster name you enter must match the cluster name you specified when configuring the DNS records. Paste the pull secret from the Red Hat OpenShift Cluster Manager . In the install-config.yaml file, set the value of platform.vsphere.clusterOSImage to the image location or name. For example: platform: vsphere: clusterOSImage: http://mirror.example.com/images/rhcos-43.81.201912131630.0-vmware.x86_64.ova?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d Edit the install-config.yaml file to give the additional information that is required for an installation in a restricted network. Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}' For <mirror_host_name> , specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. 
additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry. Add the image content resources, which resemble the following YAML excerpt: imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release For these values, use the imageContentSources that you recorded during mirror registry creation. Make any other modifications to the install-config.yaml file that you require. You can find more information about the available parameters in the Installation configuration parameters section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 21.7.11.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 21.7.11.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 21.71. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters and hyphens ( - ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 21.7.11.1.2. 
Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 21.72. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) cluster network provider to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 21.7.11.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 21.73. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. 
String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 and vCurrent . v4.11 enables the baremetal Operator, the marketplace Operator, and the openshift-samples content. vCurrent installs the recommended set of capabilities for the current version of OpenShift Container Platform. The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . Valid values are baremetal , marketplace and openshift-samples . You may specify multiple capabilities in this parameter. String array cgroupsV2 Enables Linux control groups version 2 (cgroups v2) on specific nodes in your cluster. The OpenShift Container Platform process for enabling cgroups v2 disables all cgroup version 1 controllers and hierarchies. The OpenShift Container Platform cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time. true compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). 
If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms and IBM Cloud VPC. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . sshKey The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . 21.7.11.1.4. Additional VMware vSphere configuration parameters Additional VMware vSphere configuration parameters are described in the following table: Table 21.74. Additional VMware vSphere cluster parameters Parameter Description Values The fully-qualified hostname or IP address of the vCenter server. String The user name to use to connect to the vCenter instance with. This user must have at least the roles and privileges that are required for static or dynamic persistent volume provisioning in vSphere. String The password for the vCenter user name. String The name of the datacenter to use in the vCenter instance. String The name of the default datastore to use for provisioning volumes. String Optional. The absolute path of an existing folder where the installation program creates the virtual machines. If you do not provide this value, the installation program creates a folder that is named with the infrastructure ID in the datacenter virtual machine folder. String, for example, /<datacenter_name>/vm/<folder_name>/<subfolder_name> . Optional. The absolute path of an existing resource pool where the installer creates the virtual machines. If you do not specify a value, resources are installed in the root of the cluster /<datacenter_name>/host/<cluster_name>/Resources . String, for example, /<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>/<optional_nested_resource_pool_name> . The network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. String The vCenter cluster to install the OpenShift Container Platform cluster in. 
String The virtual IP (VIP) address that you configured for control plane API access. An IP address, for example 128.0.0.1 . The virtual IP (VIP) address that you configured for cluster ingress. An IP address, for example 128.0.0.1 . Optional. The disk provisioning method. This value defaults to the vSphere default storage policy if not set. Valid values are thin , thick , or eagerZeroedThick . 21.7.11.1.5. Optional VMware vSphere machine pool configuration parameters Optional VMware vSphere machine pool configuration parameters are described in the following table: Table 21.75. Optional VMware vSphere machine pool parameters Parameter Description Values The location from which the installer downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network. An HTTP or HTTPS URL, optionally with a SHA-256 checksum. For example, https://mirror.openshift.com/images/rhcos-<version>-vmware.<architecture>.ova . The size of the disk in gigabytes. Integer The total number of virtual processor cores to assign a virtual machine. The value of platform.vsphere.cpus must be a multiple of platform.vsphere.coresPerSocket value. Integer The number of cores per socket in a virtual machine. The number of virtual sockets on the virtual machine is platform.vsphere.cpus / platform.vsphere.coresPerSocket . The default value for control plane nodes and worker nodes is 4 and 2 , respectively. Integer The size of a virtual machine's memory in megabytes. Integer 21.7.11.2. Sample install-config.yaml file for an installer-provisioned VMware vSphere cluster You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 name: worker replicas: 3 platform: vsphere: 3 cpus: 2 coresPerSocket: 2 memoryMB: 8192 osDisk: diskSizeGB: 120 controlPlane: 4 name: master replicas: 3 platform: vsphere: 5 cpus: 4 coresPerSocket: 2 memoryMB: 16384 osDisk: diskSizeGB: 120 metadata: name: cluster 6 platform: vsphere: vcenter: your.vcenter.server username: username password: password datacenter: datacenter defaultDatastore: datastore folder: folder resourcePool: resource_pool 7 diskType: thin 8 network: VM_Network cluster: vsphere_cluster_name 9 apiVIP: api_vip ingressVIP: ingress_vip clusterOSImage: http://mirror.example.com/images/rhcos-47.83.202103221318-0-vmware.x86_64.ova 10 fips: false pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 11 sshKey: 'ssh-ed25519 AAAA...' additionalTrustBundle: | 12 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 13 - mirrors: - <mirror_host_name>:<mirror_port>/<repo_name>/release source: <source_image_1> - mirrors: - <mirror_host_name>:<mirror_port>/<repo_name>/release-images source: <source_image_2> 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 5 Optional: Provide additional configuration for the machine pool parameters for the compute and control plane machines. 
6 The cluster name that you specified in your DNS records. 7 Optional: Provide an existing resource pool for machine creation. If you do not specify a value, the installation program uses the root resource pool of the vSphere cluster. 8 The vSphere disk provisioning method. 9 The vSphere cluster to install the OpenShift Container Platform cluster in. 10 The location of the Red Hat Enterprise Linux CoreOS (RHCOS) image that is accessible from the bastion server. 11 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. 12 Provide the contents of the certificate file that you used for your mirror registry. 13 Provide the imageContentSources section from the output of the command to mirror the repository. 21.7.11.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. You must include vCenter's IP address and the IP range that you use for its machines. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. 
The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 21.7.12. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 21.7.13. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. 
You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.11. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture in the Product Variant drop-down menu. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.11 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.11 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.11 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.11 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 21.7.14. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 21.7.15. Disabling the default OperatorHub sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator.
Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 21.7.16. Creating registry storage After you install the cluster, you must create storage for the Registry Operator. 21.7.16.1. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . 21.7.16.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 21.7.16.2.1. Configuring registry storage for VMware vSphere As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites Cluster administrator permissions. A cluster on VMware vSphere. Persistent storage provisioned for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have "100Gi" capacity. Important Testing shows issues with using the NFS server on RHEL as a storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure.
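Optionally, before you edit the registry configuration, you can confirm the Image Registry Operator's current management state. The following query is only a quick check and prints Removed or Managed :
USD oc get configs.imageregistry.operator.openshift.io cluster -o jsonpath='{.spec.managementState}'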
Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: 1 1 Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when you replicate to more than one replica. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m 21.7.17. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.11, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 21.7.18. Configuring an external load balancer You can configure an OpenShift Container Platform cluster to use an external load balancer in place of the default load balancer. Important Configuring an external load balancer depends on your vendor's load balancer. The information and examples in this section are for guideline purposes only. Consult the vendor documentation for more specific information about the vendor's load balancer. Red Hat supports the following services for an external load balancer: Ingress Controller OpenShift API OpenShift MachineConfig API You can choose whether you want to configure one or all of these services for an external load balancer. Configuring only the Ingress Controller service is a common configuration option. To better understand each service, view the following diagrams: Figure 21.10. Example network workflow that shows an Ingress Controller operating in an OpenShift Container Platform environment Figure 21.11. Example network workflow that shows an OpenShift API operating in an OpenShift Container Platform environment Figure 21.12. Example network workflow that shows an OpenShift MachineConfig API operating in an OpenShift Container Platform environment Considerations For a front-end IP address, you can use the same IP address for the front-end IP address, the Ingress Controller's load balancer, and API load balancer. Check the vendor's documentation for this capability. For a back-end IP address, ensure that an IP address for an OpenShift Container Platform control plane node does not change during the lifetime of the external load balancer. You can achieve this by completing one of the following actions: Assign a static IP address to each control plane node. Configure each node to receive the same IP address from the DHCP every time the node requests a DHCP lease. Depending on the vendor, the DHCP lease might be in the form of an IP reservation or a static DHCP assignment. 
Manually define each node that runs the Ingress Controller in the external load balancer for the Ingress Controller back-end service. For example, if the Ingress Controller moves to an undefined node, a connection outage can occur. OpenShift API prerequisites You defined a front-end IP address. TCP ports 6443 and 22623 are exposed on the front-end IP address of your load balancer. Check the following items: Port 6443 provides access to the OpenShift API service. Port 22623 can provide ignition startup configurations to nodes. The front-end IP address and port 6443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster. The front-end IP address and port 22623 are reachable only by OpenShift Container Platform nodes. The load balancer backend can communicate with OpenShift Container Platform control plane nodes on ports 6443 and 22623. Ingress Controller prerequisites You defined a front-end IP address. TCP ports 443 and 80 are exposed on the front-end IP address of your load balancer. The front-end IP address, port 80 and port 443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster. The front-end IP address, port 80 and port 443 are reachable by all nodes that operate in your OpenShift Container Platform cluster. The load balancer backend can communicate with OpenShift Container Platform nodes that run the Ingress Controller on ports 80, 443, and 1936. Prerequisite for health check URL specifications You can configure most load balancers by setting health check URLs that determine if a service is available or unavailable. OpenShift Container Platform provides these health checks for the OpenShift API, Machine Configuration API, and Ingress Controller backend services. The following examples demonstrate health check specifications for the previously listed backend services: Example of a Kubernetes API health check specification Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10 Example of a Machine Config API health check specification Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10 Example of an Ingress Controller health check specification Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10 Procedure Configure the HAProxy Ingress Controller, so that you can enable access to the cluster from your load balancer on ports 6443, 443, and 80: Example HAProxy configuration #...
listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2 # ... Use the curl CLI command to verify that the external load balancer and its resources are operational: Verify that the Kubernetes API server resource is accessible through the load balancer, by running the following command and observing the response: USD curl https://<loadbalancer_ip_address>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that the machine config server resource is accessible through the load balancer, by running the following command and observing the output: USD curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK Content-Length: 0 Verify that the Ingress Controller resource is accessible on port 80, by running the following command and observing the output: USD curl -I -L -H "Host: console-openshift-console.apps.<cluster_name>.<base_domain>" http://<load_balancer_front_end_IP_address> If the configuration is correct, the output from the command shows the following response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache Verify that the Ingress Controller resource is accessible on port 443, by running the following command and observing the output: USD curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address>
https://console-openshift-console.apps.<cluster_name>.<base_domain> If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private Configure the DNS records for your cluster to target the front-end IP addresses of the external load balancer. You must update records to your DNS server for the cluster API and applications over the load balancer. Examples of modified DNS records <load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End <load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End Important DNS propagation might take some time for each DNS record to become available. Ensure that each DNS record propagates before validating each record. Use the curl CLI command to verify that the external load balancer and DNS record configuration are operational: Verify that you can access the cluster API, by running the following command and observing the output: USD curl https://api.<cluster_name>.<base_domain>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that you can access the cluster machine configuration, by running the following command and observing the output: USD curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK Content-Length: 0 Verify that you can access each cluster application on port, by running the following command and observing the output: USD curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private Verify that you can access each cluster application on port 443, by running the following command and observing the output: USD curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK referrer-policy: 
strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private 21.7.19. steps Customize your cluster . If necessary, you can opt out of remote health reporting . If necessary, see Registering your disconnected cluster Set up your registry and configure registry storage . 21.8. Installing a cluster on vSphere in a restricted network with user-provisioned infrastructure In OpenShift Container Platform version 4.11, you can install a cluster on VMware vSphere infrastructure that you provision in a restricted network. Note OpenShift Container Platform supports deploying a cluster to a single VMware vCenter only. Deploying a cluster with machines/machine sets on multiple vCenters is not supported. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the vSphere platform and the installation process of OpenShift Container Platform. Use the user-provisioned infrastructure installation instructions as a guide; you are free to create the required resources through other methods. 21.8.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You created a registry on your mirror host and obtained the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. You provisioned persistent storage for your cluster. To deploy a private image registry, your storage must provide ReadWriteMany access modes. Completing the installation requires that you upload the Red Hat Enterprise Linux CoreOS (RHCOS) OVA on vSphere hosts. The machine from which you complete this process requires access to port 443 on the vCenter and ESXi hosts. You verified that port 443 is accessible. If you use a firewall, you confirmed with the administrator that port 443 is accessible. Control plane nodes must be able to reach vCenter and ESXi hosts on port 443 for the installation to succeed. If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 21.8.2. About installations in restricted networks In OpenShift Container Platform 4.11, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. 
Depending on your network, you might require less internet access for an installation on bare metal hardware or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. Important Because of the complexity of the configuration for user-provisioned installations, consider completing a standard user-provisioned infrastructure installation before you attempt a restricted network installation using user-provisioned infrastructure. Completing this test installation might make it easier to isolate and troubleshoot any issues that might arise during your installation in a restricted network. 21.8.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 21.8.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.11, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 21.8.4. VMware vSphere infrastructure requirements You must install the OpenShift Container Platform cluster on a VMware vSphere version 7 instance that meets the requirements for the components that you use. Note OpenShift Container Platform version 4.11 does not support VMware vSphere version 8.0. You can host the VMware vSphere infrastructure on-premise or on a VMware Cloud Verified provider that meets the requirements outlined in the following table: Table 21.76. Version requirements for vSphere virtual environments Virtual environment product Required version VM hardware version 15 or later vSphere ESXi hosts 7 vCenter host 7 Important Installing a cluster on VMware vSphere version 7.0 Update 1 or earlier is now deprecated. These versions are still fully supported, but version 4.11 of OpenShift Container Platform requires vSphere virtual hardware version 15 or later. Hardware version 15 is now the default for vSphere virtual machines in OpenShift Container Platform. To update the hardware version for your vSphere nodes, see the "Updating hardware on nodes running in vSphere" article. 
If your vSphere nodes are below hardware version 15 or your VMware vSphere version is earlier than 6.7.3, upgrading from OpenShift Container Platform 4.10 to OpenShift Container Platform 4.11 is not available. Table 21.77. Minimum supported vSphere version for VMware components Component Minimum supported versions Description Hypervisor vSphere 7 with HW version 15 This version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. For more information about supported hardware on the latest version of Red Hat Enterprise Linux (RHEL) that is compatible with RHCOS, see Hardware on the Red Hat Customer Portal. Storage with in-tree drivers vSphere 7 This plugin creates vSphere storage by using the in-tree storage drivers for vSphere included in OpenShift Container Platform. Optional: Networking (NSX-T) vSphere 7 vSphere 7 is required for OpenShift Container Platform. For more information about the compatibility of NSX and OpenShift Container Platform, see the Release Notes section of VMware's NSX container plugin documentation . Important You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation. 21.8.5. VMware vSphere CSI Driver Operator requirements To install the vSphere CSI Driver Operator, the following requirements must be met: VMware vSphere version 7.0 Update 1 or later Virtual machines of hardware version 15 or later No third-party vSphere CSI driver already installed in the cluster Important If a third-party vSphere CSI driver is present in the cluster, OpenShift Container Platform does not overwrite it. If you continue with the third-party vSphere CSI driver when upgrading to the major version of OpenShift Container Platform, the oc CLI prompts you with the following message: VSphereCSIDriverOperatorCRUpgradeable: VMwareVSphereControllerUpgradeable: found existing unsupported csi.vsphere.vmware.com driver The message informs you that Red Hat does not support the third-party vSphere CSI driver during an OpenShift Container Platform upgrade operation. You can choose to ignore this message and continue with the upgrade operation. Additional resources To remove a third-party vSphere CSI driver, see Removing a third-party vSphere CSI Driver . To update the hardware version for your vSphere nodes, see Updating hardware on nodes running in vSphere . 21.8.6. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 21.8.6.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 21.78. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. 
Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 8.6 and later. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 8 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 21.8.6.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 21.79. Minimum resource requirements Machine Operating System vCPU Virtual RAM Storage Input/Output Per Second (IOPS) [1] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [2] 2 8 GB 100 GB 300 OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 21.8.6.3. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 21.8.6.4. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines. 
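For example, with an ISC DHCP server ( dhcpd ), a per-node reservation can pin the MAC address of a VM's network interface to a persistent IP address, DNS server, and hostname. The following sketch is an illustration only: the MAC address, IP address, and DNS server are placeholder values drawn from the examples later in this section, and the syntax differs for other DHCP implementations.
host master0.ocp4.example.com {
  hardware ethernet 00:50:56:12:34:56;
  fixed-address 192.168.1.97;
  option domain-name-servers 192.168.1.5;
  option host-name "master0.ocp4.example.com";
}
Repeat a similar reservation for the bootstrap machine and for each control plane and compute machine so that addresses and hostnames remain stable across reboots.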
Note If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 21.8.6.4.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 21.8.6.4.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 21.80. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 21.81. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 21.82. 
Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports Ethernet adaptor hardware address requirements When provisioning VMs for the cluster, the ethernet interfaces configured for each VM must use a MAC address from the VMware Organizationally Unique Identifier (OUI) allocation ranges: 00:05:69:00:00:00 to 00:05:69:FF:FF:FF 00:0c:29:00:00:00 to 00:0c:29:FF:FF:FF 00:1c:14:00:00:00 to 00:1c:14:FF:FF:FF 00:50:56:00:00:00 to 00:50:56:3F:FF:FF If a MAC address outside the VMware OUI is used, the cluster installation will not succeed. NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines read the information and can sync the clock with the NTP servers. Additional resources Configuring chrony time service 21.8.6.5. User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. Note It is recommended to use a DHCP server to provide the hostnames to each cluster node. See the DHCP recommendations for user-provisioned infrastructure section for more information. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 21.83. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. 
A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Control plane machines <master><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <worker><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 21.8.6.5.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 21.19. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; master0.ocp4.example.com. IN A 192.168.1.97 5 master1.ocp4.example.com. IN A 192.168.1.98 6 master2.ocp4.example.com. IN A 192.168.1.99 7 ; worker0.ocp4.example.com. IN A 192.168.1.11 8 worker1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 
Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 21.20. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR master0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR master1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR master2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com. 8 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. 21.8.6.6. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application ingress load balancing infrastructure. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the API routes. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 21.84. 
API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the ingress routes. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 21.85. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 21.8.6.6.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 21.21. 
Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 3 Port 22623 handles the machine config server traffic and points to the control plane machines. 5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 21.8.7. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. This includes configuring IP networking and network connectivity for your cluster nodes, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section.
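As a quick sanity check of the load balancing part of this preparation, you can validate the HAProxy configuration file and confirm that the expected frontend ports are bound. The following commands are a hedged illustration that assumes the sample HAProxy configuration shown earlier and the default /etc/haproxy/haproxy.cfg path:
haproxy -c -f /etc/haproxy/haproxy.cfg
ss -nlt | grep -E ':(6443|22623|443|80)\b'
The first command reports configuration syntax errors without restarting the service, and the second lists listening TCP sockets so that you can confirm that ports 6443 , 22623 , 443 , and 80 are bound before you continue.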
Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure If you are using DHCP to provide the IP networking configuration to your cluster nodes, configure your DHCP service. Add persistent IP addresses for the nodes to your DHCP server configuration. In your configuration, match the MAC address of the relevant network interface to the intended IP address for each node. When you use DHCP to configure IP addressing for the cluster machines, the machines also obtain the DNS server information through DHCP. Define the persistent DNS server address that is used by the cluster nodes through your DHCP server configuration. Note If you are not using a DHCP service, you must provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. Define the hostnames of your cluster nodes in your DHCP server configuration. See the Setting the cluster node hostnames through DHCP section for details about hostname considerations. Note If you are not using a DHCP service, the cluster nodes obtain their hostname through a reverse DNS lookup. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Important By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers. Setup the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. 
See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. 21.8.8. Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure. Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 604800 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com. 604800 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: USD dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console: USD dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name. Check that the result points to the IP address of the bootstrap node: USD dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain> Example output bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 
1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output 96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. 21.8.9. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. 
If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide the key to the installation program. 21.8.10. Manually creating the installation configuration file For user-provisioned installations of OpenShift Container Platform, you manually generate your installation configuration file. Important The Cluster Cloud Controller Manager Operator performs a connectivity check on a provided hostname or IP address. Ensure that you specify a hostname or an IP address to a reachable vCenter server. If you provide metadata to a non-existent vCenter server, installation of the cluster fails at the bootstrap stage. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain the imageContentSources section from the output of the command to mirror the repository. Obtain the contents of the certificate for your mirror registry. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Unless you use a registry that RHCOS trusts by default, such as docker.io , you must provide the contents of the certificate for your mirror repository in the additionalTrustBundle section. In most cases, you must provide the certificate for your mirror. You must include the imageContentSources section from the output of the command to mirror the repository. Note For some platform types, you can alternatively run ./openshift-install create install-config --dir <installation_directory> to generate an install-config.yaml file. You can provide details about your cluster configuration at the prompts. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now. 21.8.10.1.
Sample install-config.yaml file for VMware vSphere You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 name: worker replicas: 0 3 controlPlane: 4 name: master replicas: 3 5 metadata: name: test 6 platform: vsphere: vcenter: your.vcenter.server 7 username: username 8 password: password 9 datacenter: datacenter 10 defaultDatastore: datastore 11 folder: "/<datacenter_name>/vm/<folder_name>/<subfolder_name>" 12 resourcePool: "/<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>" 13 diskType: thin 14 fips: false 15 pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 16 sshKey: 'ssh-ed25519 AAAA...' 17 additionalTrustBundle: | 18 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 19 - mirrors: - <mirror_host_name>:<mirror_port>/<repo_name>/release source: <source_image_1> - mirrors: - <mirror_host_name>:<mirror_port>/<repo_name>/release-images source: <source_image_2> 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, ( - ), and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used. 3 You must set the value of the replicas parameter to 0 . This parameter controls the number of workers that the cluster creates and manages for you, which are functions that the cluster does not perform when you use user-provisioned infrastructure. You must manually deploy worker machines for the cluster to use before you finish installing OpenShift Container Platform. 5 The number of control plane machines that you add to the cluster. Because the cluster uses this values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 6 The cluster name that you specified in your DNS records. 7 The fully-qualified hostname or IP address of the vCenter server. Important The Cluster Cloud Controller Manager Operator performs a connectivity check on a provided hostname or IP address. Ensure that you specify a hostname or an IP address to a reachable vCenter server. If you provide metadata to a non-existent vCenter server, installation of the cluster fails at the bootstrap stage. 8 The name of the user for accessing the server. 9 The password associated with the vSphere user. 10 The vSphere datacenter. 11 The default vSphere datastore to use. 12 Optional parameter: For installer-provisioned infrastructure, the absolute path of an existing folder where the installation program creates the virtual machines, for example, /<datacenter_name>/vm/<folder_name>/<subfolder_name> . If you do not provide this value, the installation program creates a top-level folder in the datacenter virtual machine folder that is named with the infrastructure ID. 
If you are providing the infrastructure for the cluster and you do not want to use the default StorageClass object, named thin , you can omit the folder parameter from the install-config.yaml file. 13 Optional parameter: For installer-provisioned infrastructure, the absolute path of an existing resource pool where the installation program creates the virtual machines, for example, /<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name> . If you do not specify a value, resources are installed in the root resource pool of the cluster. If you are providing the infrastructure for the cluster, omit this parameter. 14 The vSphere disk provisioning method. 15 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. 16 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. 17 The public portion of the default SSH key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 18 Provide the contents of the certificate file that you used for your mirror registry. 19 Provide the imageContentSources section from the output of the command to mirror the repository. 21.8.10.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings.
For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. You must include vCenter's IP address and the IP range that you use for its machines. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 21.8.11. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. 
By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. For a restricted network installation, these files are on your mirror host. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines and compute machine sets: USD rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage these resources yourself, you do not have to initialize them. You can preserve the machine set files to create compute machines by using the machine API, but you must update references to them to match your environment. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 21.8.12. Configuring chrony time service You must set the time server and related settings used by the chrony time service ( chronyd ) by modifying the contents of the chrony.conf file and passing those contents to your nodes as a machine config. Procedure Create a Butane config including the contents of the chrony.conf file. For example, to configure chrony on worker nodes, create a 99-worker-chrony.bu file. Note See "Creating machine configs with Butane" for information about Butane. variant: openshift version: 4.11.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 storage: files: - path: /etc/chrony.conf mode: 0644 3 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst 4 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony 1 2 On control plane nodes, substitute master for worker in both of these locations. 3 Specify an octal value mode for the mode field in the machine config file. After creating the file and applying the changes, the mode is converted to a decimal value. You can check the YAML file with the command oc get mc <mc-name> -o yaml . 4 Specify any valid, reachable time source, such as the one provided by your DHCP server. 
Use Butane to generate a MachineConfig object file, 99-worker-chrony.yaml , containing the configuration to be delivered to the nodes: USD butane 99-worker-chrony.bu -o 99-worker-chrony.yaml Apply the configurations in one of two ways: If the cluster is not running yet, after you generate manifest files, add the MachineConfig object file to the <installation_directory>/openshift directory, and then continue to create the cluster. If the cluster is already running, apply the file: USD oc apply -f ./99-worker-chrony.yaml 21.8.13. Extracting the infrastructure name The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in VMware vSphere. If you plan to use the cluster identifier as the name of your virtual machine folder, you must extract it. Prerequisites You obtained the OpenShift Container Platform installation program and the pull secret for your cluster. You generated the Ignition config files for your cluster. You installed the jq package. Procedure To extract and view the infrastructure name from the Ignition config file metadata, run the following command: USD jq -r .infraID <installation_directory>/metadata.json 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output openshift-vw9j6 1 1 The output of this command is your cluster name and a random string. 21.8.14. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on user-provisioned infrastructure on VMware vSphere, you must install Red Hat Enterprise Linux CoreOS (RHCOS) on vSphere hosts. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS machines have rebooted. Prerequisites You have obtained the Ignition config files for your cluster. You have access to an HTTP server that you can access from your computer and that the machines that you create can access. You have created a vSphere cluster . Procedure Upload the bootstrap Ignition config file, which is named <installation_directory>/bootstrap.ign , that the installation program created to your HTTP server. Note the URL of this file. Save the following secondary Ignition config file for your bootstrap node to your computer as <installation_directory>/merge-bootstrap.ign : { "ignition": { "config": { "merge": [ { "source": "<bootstrap_ignition_config_url>", 1 "verification": {} } ] }, "timeouts": {}, "version": "3.2.0" }, "networkd": {}, "passwd": {}, "storage": {}, "systemd": {} } 1 Specify the URL of the bootstrap Ignition config file that you hosted. When you create the virtual machine (VM) for the bootstrap machine, you use this Ignition config file. Locate the following Ignition config files that the installation program created: <installation_directory>/master.ign <installation_directory>/worker.ign <installation_directory>/merge-bootstrap.ign Convert the Ignition config files to Base64 encoding. Later in this procedure, you must add these files to the extra configuration parameter guestinfo.ignition.config.data in your VM. For example, if you use a Linux operating system, you can use the base64 command to encode the files. 
USD base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64 USD base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64 USD base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64 Important If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. Obtain the RHCOS OVA image. Images are available from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download an image with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image version that matches your OpenShift Container Platform version if it is available. The filename contains the OpenShift Container Platform version number in the format rhcos-vmware.<architecture>.ova . In the vSphere Client, create a folder in your datacenter to store your VMs. Click the VMs and Templates view. Right-click the name of your datacenter. Click New Folder New VM and Template Folder . In the window that is displayed, enter the folder name. If you did not specify an existing folder in the install-config.yaml file, then create a folder with the same name as the infrastructure ID. You use this folder name so vCenter dynamically provisions storage in the appropriate location for its Workspace configuration. In the vSphere Client, create a template for the OVA image and then clone the template as needed. Note In the following steps, you create a template and then clone the template for all of your cluster machines. You then provide the location for the Ignition config file for that cloned machine type when you provision the VMs. From the Hosts and Clusters tab, right-click your cluster name and select Deploy OVF Template . On the Select an OVF tab, specify the name of the RHCOS OVA file that you downloaded. On the Select a name and folder tab, set a Virtual machine name for your template, such as Template-RHCOS . Click the name of your vSphere cluster and select the folder you created in the step. On the Select a compute resource tab, click the name of your vSphere cluster. On the Select storage tab, configure the storage options for your VM. Select Thin Provision or Thick Provision , based on your storage preferences. Select the datastore that you specified in your install-config.yaml file. On the Select network tab, specify the network that you configured for the cluster, if available. When creating the OVF template, do not specify values on the Customize template tab or configure the template any further. Important Do not start the original VM template. The VM template must remain off and must be cloned for new RHCOS machines. Starting the VM template configures the VM template as a VM on the platform, which prevents it from being used as a template that machine sets can apply configurations to. Optional: Update the configured virtual hardware version in the VM template, if necessary. Follow Upgrading a virtual machine to the latest hardware version in the VMware documentation for more information. Important It is recommended that you update the hardware version of the VM template to version 15 before creating VMs from it, if necessary. Using hardware version 13 for your cluster nodes running on vSphere is now deprecated. 
If your imported template defaults to hardware version 13, you must ensure that your ESXi host is on 6.7U3 or later before upgrading the VM template to hardware version 15. If your vSphere version is less than 6.7U3, you can skip this upgrade step; however, a future version of OpenShift Container Platform is scheduled to remove support for hardware version 13 and vSphere versions less than 6.7U3. After the template deploys, deploy a VM for a machine in the cluster. Right-click the template name and click Clone Clone to Virtual Machine . On the Select a name and folder tab, specify a name for the VM. You might include the machine type in the name, such as control-plane-0 or compute-1 . Note Ensure that all virtual machine names across a vSphere installation are unique. On the Select a name and folder tab, select the name of the folder that you created for the cluster. On the Select a compute resource tab, select the name of a host in your datacenter. On the Select clone options , select Customize this virtual machine's hardware . Optional: On the Customize hardware tab, click VM Options Advanced . Important The following configuration suggestions are for example purposes only. As a cluster administrator, you must configure resources according to the resource demands placed on your cluster. To best manage cluster resources, consider creating a resource pool from the cluster's root resource pool. Override default DHCP networking in vSphere. To enable static IP networking: Set your static IP configuration: USD export IPCFG="ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]" Example command USD export IPCFG="ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8" Click Edit Configuration , and on the Configuration Parameters window, search the list of available parameters for steal clock accounting ( stealclock.enable ). Set the parameter to the value of TRUE . Enabling steal clock accounting can help with troubleshooting cluster issues. Click Add Configuration Params . Define the following parameter names and values: disk.EnableUUID : Specify TRUE . stealclock.enable : If this parameter was not defined, add it and specify TRUE . Create a child resource pool from the cluster's root resource pool. Perform resource allocation in this child resource pool. In the Virtual Hardware panel of the Customize hardware tab, modify the specified values as required. Ensure that the amount of RAM, CPU, and disk storage meets the minimum requirements for the machine type. Complete the configuration and power on the VM. Check the console output to verify that Ignition ran. Example command Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied Create the rest of the machines for your cluster by following the preceding steps for each machine. Important You must create the bootstrap and control plane machines at this time. Because some pods are deployed on compute machines by default, also create at least two compute machines before you install the cluster. 21.8.15. Adding more compute machines to a cluster in vSphere You can add more compute machines to a user-provisioned OpenShift Container Platform cluster on VMware vSphere. Prerequisites Obtain the base64-encoded Ignition file for your compute machines. You have access to the vSphere template that you created for your cluster. Procedure After the template deploys, deploy a VM for a machine in the cluster. 
Right-click the template's name and click Clone Clone to Virtual Machine . On the Select a name and folder tab, specify a name for the VM. You might include the machine type in the name, such as compute-1 . Note Ensure that all virtual machine names across a vSphere installation are unique. On the Select a name and folder tab, select the name of the folder that you created for the cluster. On the Select a compute resource tab, select the name of a host in your datacenter. On the Select clone options , select Customize this virtual machine's hardware . On the Customize hardware tab, click VM Options Advanced . Click Edit Configuration , and on the Configuration Parameters window, click Add Configuration Params . Define the following parameter names and values: guestinfo.ignition.config.data : Paste the contents of the base64-encoded compute Ignition config file for this machine type. guestinfo.ignition.config.data.encoding : Specify base64 . disk.EnableUUID : Specify TRUE . In the Virtual Hardware panel of the Customize hardware tab, modify the specified values as required. Ensure that the amount of RAM, CPU, and disk storage meets the minimum requirements for the machine type. Also, make sure to select the correct network under Add network adapter if there are multiple networks available. Complete the configuration and power on the VM. Continue to create more compute machines for your cluster. 21.8.16. Disk partitioning In most cases, data partitions are originally created by installing RHCOS, rather than by installing another operating system. In such cases, the OpenShift Container Platform installer should be allowed to configure your disk partitions. However, there are two cases where you might want to intervene to override the default partitioning when installing an OpenShift Container Platform node: Create separate partitions: For greenfield installations on an empty disk, you might want to add separate storage to a partition. This is officially supported for making /var or a subdirectory of /var , such as /var/lib/etcd , a separate partition, but not both. Important For disk sizes larger than 100GB, and especially disk sizes larger than 1TB, create a separate /var partition. See "Creating a separate /var partition" and this Red Hat Knowledgebase article for more information. Important Kubernetes supports only two file system partitions. If you add more than one partition to the original configuration, Kubernetes cannot monitor all of them. Retain existing partitions: For a brownfield installation where you are reinstalling OpenShift Container Platform on an existing node and want to retain data partitions installed from your operating system, there are both boot arguments and options to coreos-installer that allow you to retain existing data partitions. Creating a separate /var partition In general, disk partitioning for OpenShift Container Platform should be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. 
/var : Holds data that you might want to keep separate for purposes such as auditing. Important For disk sizes larger than 100GB, and especially larger than 1TB, create a separate /var partition. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config manifest that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation. Procedure Create a directory to hold the OpenShift Container Platform installation files: USD mkdir USDHOME/clusterconfig Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted: USD openshift-install create manifests --dir USDHOME/clusterconfig ? SSH Public Key ... USD ls USDHOME/clusterconfig/openshift/ 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ... Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.11.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum value of 25000 mebibytes is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for worker nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. 
For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories: USD openshift-install create ignition-configs --dir USDHOME/clusterconfig USD ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign Now you can use the Ignition config files as input to the vSphere installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems. 21.8.17. Updating the bootloader using bootupd To update the bootloader by using bootupd , you must either install bootupd on RHCOS machines manually or provide a machine config with the enabled systemd unit. Unlike grubby or other bootloader tools, bootupd does not manage kernel space configuration such as passing kernel arguments. After you have installed bootupd , you can manage it remotely from the OpenShift Container Platform cluster. Note It is recommended that you use bootupd only on bare metal or virtualized hypervisor installations, such as for protection against the BootHole vulnerability. Manual install method You can manually install bootupd by using the bootupctl command-line tool. Inspect the system status: # bootupctl status Example output for x86_64 Component EFI Installed: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 Update: At latest version Example output for aarch64 Component EFI Installed: grub2-efi-aa64-1:2.02-99.el8_4.1.aarch64,shim-aa64-15.4-2.el8_1.aarch64 Update: At latest version RHCOS images created without bootupd installed on them require an explicit adoption phase. If the system status is Adoptable , perform the adoption: # bootupctl adopt-and-update Example output Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 If an update is available, apply the update so that the changes take effect on the next reboot: # bootupctl update Example output Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 Machine config method Another way to enable bootupd is by providing a machine config. Provide a machine config file with the enabled systemd unit, as shown in the following example: Example output variant: rhcos version: 1.1.0 systemd: units: - name: custom-bootupd-auto.service enabled: true contents: | [Unit] Description=Bootupd automatic update [Service] ExecStart=/usr/bin/bootupctl update RemainAfterExit=yes [Install] WantedBy=multi-user.target 21.8.18. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS, and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated.
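Note Before you start monitoring the bootstrap process, you can optionally confirm that the API record you created in DNS resolves and that the load balancer answers on the Kubernetes API port. The following commands are a minimal sketch and not part of the official procedure; api.<cluster_name>.<base_domain> is a placeholder for the API DNS record in your environment, and the /readyz path matches the health check path used in the load balancer examples later in this document:
USD dig +short api.<cluster_name>.<base_domain>
USD curl -k https://api.<cluster_name>.<base_domain>:6443/readyz
If DNS does not resolve, fix the record before continuing. The /readyz check succeeds only after the temporary bootstrap control plane is serving the API, so a connection error early in the process is expected.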
Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.24.0 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. 21.8.19. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 21.8.20. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.24.0 master-1 Ready master 63m v1.24.0 master-2 Ready master 64m v1.24.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. 
After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.24.0 master-1 Ready master 73m v1.24.0 master-2 Ready master 74m v1.24.0 worker-0 Ready worker 11m v1.24.0 worker-1 Ready worker 11m v1.24.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 21.8.21. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. 
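Note The watch command in the following procedure prints every cluster Operator. If you want to see only the Operators that are not yet Available or that are Degraded, you can filter the output. This is a minimal sketch that assumes the default column order shown in the example output (NAME, VERSION, AVAILABLE, PROGRESSING, DEGRADED, SINCE); adjust the field numbers if the columns differ in your version:
USD oc get clusteroperators --no-headers | awk '$3 != "True" || $5 == "True" {print $1}'
An empty result means that every Operator reports Available and none report Degraded.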
Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.11.0 True False False 19m baremetal 4.11.0 True False False 37m cloud-credential 4.11.0 True False False 40m cluster-autoscaler 4.11.0 True False False 37m config-operator 4.11.0 True False False 38m console 4.11.0 True False False 26m csi-snapshot-controller 4.11.0 True False False 37m dns 4.11.0 True False False 37m etcd 4.11.0 True False False 36m image-registry 4.11.0 True False False 31m ingress 4.11.0 True False False 30m insights 4.11.0 True False False 31m kube-apiserver 4.11.0 True False False 26m kube-controller-manager 4.11.0 True False False 36m kube-scheduler 4.11.0 True False False 36m kube-storage-version-migrator 4.11.0 True False False 37m machine-api 4.11.0 True False False 29m machine-approver 4.11.0 True False False 37m machine-config 4.11.0 True False False 36m marketplace 4.11.0 True False False 37m monitoring 4.11.0 True False False 29m network 4.11.0 True False False 38m node-tuning 4.11.0 True False False 37m openshift-apiserver 4.11.0 True False False 32m openshift-controller-manager 4.11.0 True False False 30m openshift-samples 4.11.0 True False False 32m operator-lifecycle-manager 4.11.0 True False False 37m operator-lifecycle-manager-catalog 4.11.0 True False False 37m operator-lifecycle-manager-packageserver 4.11.0 True False False 32m service-ca 4.11.0 True False False 38m storage 4.11.0 True False False 37m Configure the Operators that are not available. 21.8.21.1. Disabling the default OperatorHub sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 21.8.21.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 21.8.21.2.1. Configuring registry storage for VMware vSphere As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites Cluster administrator permissions. A cluster on VMware vSphere. Persistent storage provisioned for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. 
ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have "100Gi" capacity. Important Testing shows issues with using the NFS server on RHEL as storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: 1 1 Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when you replicate to more than one replica. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m 21.8.21.2.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 21.8.21.2.3. Configuring block registry storage for VMware vSphere To allow the image registry to use block storage types such as vSphere Virtual Machine Disk (VMDK) during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes are supported but not recommended for use with image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica.
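Note After you finish the following procedure, you can confirm that the persistent volume claim for the registry is bound before you edit the registry configuration. This is a minimal sketch that assumes the claim name and namespace used in the example pvc.yaml file below:
USD oc get pvc image-registry-storage -n openshift-image-registry
The claim should report a STATUS of Bound once a matching persistent volume is available.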
Procedure Enter the following command to set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy, and runs with only 1 replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Enter the following command to create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Enter the following command to edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 By creating a custom PVC, you can leave the claim field blank for the default automatic creation of an image-registry-storage PVC. For instructions about configuring registry storage so that it references the correct PVC, see Configuring the registry for vSphere . 21.8.22. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration. 
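Note The wait-for install-complete command in the following procedure prints the web console URL and the kubeadmin password when the cluster finishes initializing. If you need the credentials separately, they remain in the auth subdirectory of your installation directory, as described earlier. This is a minimal sketch; the console route is assumed to use the default name console in the openshift-console namespace:
USD cat <installation_directory>/auth/kubeadmin-password
USD oc get route console -n openshift-console -o jsonpath='{.spec.host}'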
Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.11.0 True False False 19m baremetal 4.11.0 True False False 37m cloud-credential 4.11.0 True False False 40m cluster-autoscaler 4.11.0 True False False 37m config-operator 4.11.0 True False False 38m console 4.11.0 True False False 26m csi-snapshot-controller 4.11.0 True False False 37m dns 4.11.0 True False False 37m etcd 4.11.0 True False False 36m image-registry 4.11.0 True False False 31m ingress 4.11.0 True False False 30m insights 4.11.0 True False False 31m kube-apiserver 4.11.0 True False False 26m kube-controller-manager 4.11.0 True False False 36m kube-scheduler 4.11.0 True False False 36m kube-storage-version-migrator 4.11.0 True False False 37m machine-api 4.11.0 True False False 29m machine-approver 4.11.0 True False False 37m machine-config 4.11.0 True False False 36m marketplace 4.11.0 True False False 37m monitoring 4.11.0 True False False 29m network 4.11.0 True False False 38m node-tuning 4.11.0 True False False 37m openshift-apiserver 4.11.0 True False False 32m openshift-controller-manager 4.11.0 True False False 30m openshift-samples 4.11.0 True False False 32m operator-lifecycle-manager 4.11.0 True False False 37m operator-lifecycle-manager-catalog 4.11.0 True False False 37m operator-lifecycle-manager-packageserver 4.11.0 True False False 32m service-ca 4.11.0 True False False 38m storage 4.11.0 True False False 37m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. 
To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Post-installation machine configuration tasks documentation for more information. Register your cluster on the Cluster registration page. You can add extra compute machines after the cluster installation is completed by following Adding compute machines to vSphere . 21.8.23. Configuring vSphere DRS anti-affinity rules for control plane nodes vSphere Distributed Resource Scheduler (DRS) anti-affinity rules can be configured to support higher availability of OpenShift Container Platform control plane nodes. Anti-affinity rules ensure that the vSphere Virtual Machines for the OpenShift Container Platform control plane nodes are not scheduled to the same vSphere Host. Important The following information applies to compute DRS only and does not apply to storage DRS. The govc command is an open-source command available from VMware; it is not available from Red Hat. The govc command is not supported by Red Hat support. Instructions for downloading and installing govc are found on the VMware documentation website. Create an anti-affinity rule by running the following command: Example command USD govc cluster.rule.create \ -name openshift4-control-plane-group \ -dc MyDatacenter -cluster MyCluster \ -enable \ -anti-affinity master-0 master-1 master-2 After creating the rule, your control plane nodes are automatically migrated by vSphere so they are not running on the same hosts. This might take some time while vSphere reconciles the new rule. Successful command completion is shown in the following procedure. Note The migration occurs automatically and might cause a brief OpenShift API outage or latency until the migration finishes. The vSphere DRS anti-affinity rules need to be updated manually in the event of a control plane VM name change or migration to a new vSphere Cluster. Procedure Remove any existing DRS anti-affinity rule by running the following command: USD govc cluster.rule.remove \ -name openshift4-control-plane-group \ -dc MyDatacenter -cluster MyCluster Example output [13-10-22 09:33:24] Reconfigure /MyDatacenter/host/MyCluster...OK Create the rule again with updated names by running the following command: USD govc cluster.rule.create \ -name openshift4-control-plane-group \ -dc MyDatacenter -cluster MyOtherCluster \ -enable \ -anti-affinity master-0 master-1 master-2 21.8.24. Backing up VMware vSphere volumes OpenShift Container Platform provisions new volumes as independent persistent disks to freely attach and detach the volume on any node in the cluster.
As a consequence, it is not possible to back up volumes that use snapshots, or to restore volumes from snapshots. See Snapshot Limitations for more information. Procedure To create a backup of persistent volumes: Stop the application that is using the persistent volume. Clone the persistent volume. Restart the application. Create a backup of the cloned volume. Delete the cloned volume. 21.8.25. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.11, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 21.8.26. steps Customize your cluster . If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores . If necessary, you can opt out of remote health reporting . Optional: View the events from the vSphere Problem Detector Operator to determine if the cluster has permission or storage configuration issues. 21.9. Uninstalling a cluster on vSphere that uses installer-provisioned infrastructure You can remove a cluster that you deployed in your VMware vSphere instance by using installer-provisioned infrastructure. Note When you run the openshift-install destroy cluster command to uninstall OpenShift Container Platform, vSphere volumes are not automatically deleted. The cluster administrator must manually find the vSphere volumes and delete them. 21.9.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. Prerequisites You have a copy of the installation program that you used to deploy the cluster. You have the files that the installation program generated when you created your cluster. Procedure On the computer that you used to install the cluster, go to the directory that contains the installation program, and run the following command: USD ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn , debug , or error instead of info . Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program. 21.10. Using the vSphere Problem Detector Operator 21.10.1. 
About the vSphere Problem Detector Operator The vSphere Problem Detector Operator checks clusters that are deployed on vSphere for common installation and misconfiguration issues that are related to storage. The Operator runs in the openshift-cluster-storage-operator namespace and is started by the Cluster Storage Operator when the Cluster Storage Operator detects that the cluster is deployed on vSphere. The vSphere Problem Detector Operator communicates with the vSphere vCenter Server to determine the virtual machines in the cluster, the default datastore, and other information about the vSphere vCenter Server configuration. The Operator uses the credentials from the Cloud Credential Operator to connect to vSphere. The Operator runs the checks according to the following schedule: The checks run every 8 hours. If any check fails, the Operator runs the checks again in intervals of 1 minute, 2 minutes, 4, 8, and so on. The Operator doubles the interval up to a maximum interval of 8 hours. When all checks pass, the schedule returns to an 8 hour interval. The Operator increases the frequency of the checks after a failure so that the Operator can report success quickly after the failure condition is remedied. You can run the Operator manually for immediate troubleshooting information. 21.10.2. Running the vSphere Problem Detector Operator checks You can override the schedule for running the vSphere Problem Detector Operator checks and run the checks immediately. The vSphere Problem Detector Operator automatically runs the checks every 8 hours. However, when the Operator starts, it runs the checks immediately. The Operator is started by the Cluster Storage Operator when the Cluster Storage Operator starts and determines that the cluster is running on vSphere. To run the checks immediately, you can scale the vSphere Problem Detector Operator to 0 and back to 1 so that it restarts the vSphere Problem Detector Operator. Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Scale the Operator to 0 : USD oc scale deployment/vsphere-problem-detector-operator --replicas=0 \ -n openshift-cluster-storage-operator If the deployment does not scale to zero immediately, you can run the following command to wait for the pods to exit: USD oc wait pods -l name=vsphere-problem-detector-operator \ --for=delete --timeout=5m -n openshift-cluster-storage-operator Scale the Operator back to 1 : USD oc scale deployment/vsphere-problem-detector-operator --replicas=1 \ -n openshift-cluster-storage-operator Delete the old leader lock to speed up the new leader election for the Cluster Storage Operator: USD oc delete -n openshift-cluster-storage-operator \ cm vsphere-problem-detector-lock Verification View the events or logs that are generated by the vSphere Problem Detector Operator. Confirm that the events or logs have recent timestamps. 21.10.3. Viewing the events from the vSphere Problem Detector Operator After the vSphere Problem Detector Operator runs and performs the configuration checks, it creates events that can be viewed from the command line or from the OpenShift Container Platform web console. 
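Note If you restarted the Operator by scaling it down and back up as described in the preceding section, you can confirm that the new Operator pod is running before you review its events. This is a minimal sketch that reuses the label selector and namespace shown above:
USD oc get pods -n openshift-cluster-storage-operator -l name=vsphere-problem-detector-operator
The pod should report a STATUS of Running with a recent AGE value, which indicates that the checks have run again.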
Procedure To view the events by using the command line, run the following command: USD oc get event -n openshift-cluster-storage-operator \ --sort-by={.metadata.creationTimestamp} Example output 16m Normal Started pod/vsphere-problem-detector-operator-xxxxx Started container vsphere-problem-detector 16m Normal Created pod/vsphere-problem-detector-operator-xxxxx Created container vsphere-problem-detector 16m Normal LeaderElection configmap/vsphere-problem-detector-lock vsphere-problem-detector-operator-xxxxx became leader To view the events by using the OpenShift Container Platform web console, navigate to Home Events and select openshift-cluster-storage-operator from the Project menu. 21.10.4. Viewing the logs from the vSphere Problem Detector Operator After the vSphere Problem Detector Operator runs and performs the configuration checks, it creates log records that can be viewed from the command line or from the OpenShift Container Platform web console. Procedure To view the logs by using the command line, run the following command: USD oc logs deployment/vsphere-problem-detector-operator \ -n openshift-cluster-storage-operator Example output I0108 08:32:28.445696 1 operator.go:209] ClusterInfo passed I0108 08:32:28.451029 1 datastore.go:57] CheckStorageClasses checked 1 storage classes, 0 problems found I0108 08:32:28.451047 1 operator.go:209] CheckStorageClasses passed I0108 08:32:28.452160 1 operator.go:209] CheckDefaultDatastore passed I0108 08:32:28.480648 1 operator.go:271] CheckNodeDiskUUID:<host_name> passed I0108 08:32:28.480685 1 operator.go:271] CheckNodeProviderID:<host_name> passed To view the Operator logs with the OpenShift Container Platform web console, perform the following steps: Navigate to Workloads Pods . Select openshift-cluster-storage-operator from the Projects menu. Click the link for the vsphere-problem-detector-operator pod. Click the Logs tab on the Pod details page to view the logs. 21.10.5. Configuration checks run by the vSphere Problem Detector Operator The following tables identify the configuration checks that the vSphere Problem Detector Operator runs. Some checks verify the configuration of the cluster. Other checks verify the configuration of each node in the cluster. Table 21.86. Cluster configuration checks Name Description CheckDefaultDatastore Verifies that the default datastore name in the vSphere configuration is short enough for use with dynamic provisioning. If this check fails, you can expect the following: systemd logs errors to the journal such as Failed to set up mount unit: Invalid argument . systemd does not unmount volumes if the virtual machine is shut down or rebooted without draining all the pods from the node. If this check fails, reconfigure vSphere with a shorter name for the default datastore. CheckFolderPermissions Verifies the permission to list volumes in the default datastore. This permission is required to create volumes. The Operator verifies the permission by listing the / and /kubevols directories. The root directory must exist. It is acceptable if the /kubevols directory does not exist when the check runs. The /kubevols directory is created when the datastore is used with dynamic provisioning if the directory does not already exist. If this check fails, review the required permissions for the vCenter account that was specified during the OpenShift Container Platform installation. 
CheckStorageClasses Verifies the following: The fully qualified path to each persistent volume that is provisioned by this storage class is less than 255 characters. If a storage class uses a storage policy, the storage class must use one policy only and that policy must be defined. CheckTaskPermissions Verifies the permission to list recent tasks and datastores. ClusterInfo Collects the cluster version and UUID from vSphere vCenter. Table 21.87. Node configuration checks Name Description CheckNodeDiskUUID Verifies that all the vSphere virtual machines are configured with disk.enableUUID=TRUE . If this check fails, see the How to check 'disk.EnableUUID' parameter from VM in vSphere Red Hat Knowledgebase solution. CheckNodeProviderID Verifies that all nodes are configured with the ProviderID from vSphere vCenter. This check fails when the output from the following command does not include a provider ID for each node. USD oc get nodes -o custom-columns=NAME:.metadata.name,PROVIDER_ID:.spec.providerID,UUID:.status.nodeInfo.systemUUID If this check fails, refer to the vSphere product documentation for information about setting the provider ID for each node in the cluster. CollectNodeESXiVersion Reports the version of the ESXi hosts that run nodes. CollectNodeHWVersion Reports the virtual machine hardware version for a node. 21.10.6. About the storage class configuration check The names for persistent volumes that use vSphere storage are related to the datastore name and cluster ID. When a persistent volume is created, systemd creates a mount unit for the persistent volume. The systemd process has a 255 character limit for the length of the fully qualified path to the VMDK file that is used for the persistent volume. The fully qualified path is based on the naming conventions for systemd and vSphere. The naming conventions use the following pattern: /var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[<datastore>] 00000000-0000-0000-0000-000000000000/<cluster_id>-dynamic-pvc-00000000-0000-0000-0000-000000000000.vmdk The naming conventions require 205 characters of the 255 character limit. The datastore name and the cluster ID are determined from the deployment. The datastore name and cluster ID are substituted into the preceding pattern. Then the path is processed with the systemd-escape command to escape special characters. For example, a hyphen character uses four characters after it is escaped. The escaped value is \x2d . After processing with systemd-escape to ensure that systemd can access the fully qualified path to the VMDK file, the length of the path must be less than 255 characters. 21.10.7. Metrics for the vSphere Problem Detector Operator The vSphere Problem Detector Operator exposes the following metrics for use by the OpenShift Container Platform monitoring stack. Table 21.88. Metrics exposed by the vSphere Problem Detector Operator Name Description vsphere_cluster_check_total Cumulative number of cluster-level checks that the vSphere Problem Detector Operator performed. This count includes both successes and failures. vsphere_cluster_check_errors Number of failed cluster-level checks that the vSphere Problem Detector Operator performed. For example, a value of 1 indicates that one cluster-level check failed. vsphere_esxi_version_total Number of ESXi hosts with a specific version. Be aware that if a host runs more than one node, the host is counted only once.
vsphere_node_check_total Cumulative number of node-level checks that the vSphere Problem Detector Operator performed. This count includes both successes and failures. vsphere_node_check_errors Number of failed node-level checks that the vSphere Problem Detector Operator performed. For example, a value of 1 indicates that one node-level check failed. vsphere_node_hw_version_total Number of vSphere nodes with a specific hardware version. vsphere_vcenter_info Information about the vSphere vCenter Server. 21.10.8. Additional resources Monitoring overview | [
"VSphereCSIDriverOperatorCRUpgradeable: VMwareVSphereControllerUpgradeable: found existing unsupported csi.vsphere.vmware.com driver",
"VSphereCSIDriverOperatorCRUpgradeable: VMwareVSphereControllerUpgradeable: found existing unsupported csi.vsphere.vmware.com driver",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"certs ├── lin │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 ├── mac │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 └── win ├── 108f4d17.0.crt ├── 108f4d17.r1.crl ├── 7e757f6a.0.crt ├── 8e4f8471.0.crt └── 8e4f8471.r0.crl 3 directories, 15 files",
"cp certs/lin/* /etc/pki/ca-trust/source/anchors",
"update-ca-trust extract",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resourses found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim: 1",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10",
"# listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.1000.0.0.0:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.0168.21.2101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.1020.2.3:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.1030.2.1:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2",
"curl https://<loadbalancer_ip_address>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl -I -L -H \"Host: console-openshift-console.apps.<cluster_name>.<base_domain>\" http://<load_balancer_front_end_IP_address>",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache",
"curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain>",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"<load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"<load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"curl https://api.<cluster_name>.<base_domain>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"VSphereCSIDriverOperatorCRUpgradeable: VMwareVSphereControllerUpgradeable: found existing unsupported csi.vsphere.vmware.com driver",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"certs ├── lin │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 ├── mac │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 └── win ├── 108f4d17.0.crt ├── 108f4d17.r1.crl ├── 7e757f6a.0.crt ├── 8e4f8471.0.crt └── 8e4f8471.r0.crl 3 directories, 15 files",
"cp certs/lin/* /etc/pki/ca-trust/source/anchors",
"update-ca-trust extract",
"./openshift-install create install-config --dir <installation_directory> 1",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"platform: vsphere vCenter",
"platform: vsphere username",
"platform: vsphere password",
"platform: vsphere datacenter",
"platform: vsphere defaultDatastore",
"platform: vsphere folder",
"platform: vsphere resourcePool",
"platform: vsphere network",
"platform: vsphere cluster",
"platform: vsphere apiVIP",
"platform: vsphere ingressVIP",
"platform: vsphere diskType",
"platform: vsphere clusterOSImage",
"platform vsphere osDisk diskSizeGB",
"platform vsphere cpus",
"platform vsphere coresPerSocket",
"platform vsphere memoryMB",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 name: worker replicas: 3 platform: vsphere: 3 cpus: 2 coresPerSocket: 2 memoryMB: 8192 osDisk: diskSizeGB: 120 controlPlane: 4 name: master replicas: 3 platform: vsphere: 5 cpus: 4 coresPerSocket: 2 memoryMB: 16384 osDisk: diskSizeGB: 120 metadata: name: cluster 6 platform: vsphere: vcenter: your.vcenter.server username: username password: password datacenter: datacenter defaultDatastore: datastore folder: folder resourcePool: resource_pool 7 diskType: thin 8 network: VM_Network cluster: vsphere_cluster_name 9 apiVIP: api_vip ingressVIP: ingress_vip fips: false pullSecret: '{\"auths\": ...}' sshKey: 'ssh-ed25519 AAAA...'",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"./openshift-install wait-for install-complete --log-level debug",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resourses found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim: 1",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10",
"# listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.1000.0.0.0:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.0168.21.2101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.1020.2.3:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.1030.2.1:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2",
"curl https://<loadbalancer_ip_address>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl -I -L -H \"Host: console-openshift-console.apps.<cluster_name>.<base_domain>\" http://<load_balancer_front_end_IP_address>",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache",
"curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain>",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"<load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"<load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"curl https://api.<cluster_name>.<base_domain>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"VSphereCSIDriverOperatorCRUpgradeable: VMwareVSphereControllerUpgradeable: found existing unsupported csi.vsphere.vmware.com driver",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"certs ├── lin │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 ├── mac │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 └── win ├── 108f4d17.0.crt ├── 108f4d17.r1.crl ├── 7e757f6a.0.crt ├── 8e4f8471.0.crt └── 8e4f8471.r0.crl 3 directories, 15 files",
"cp certs/lin/* /etc/pki/ca-trust/source/anchors",
"update-ca-trust extract",
"./openshift-install create install-config --dir <installation_directory> 1",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"platform: vsphere vCenter",
"platform: vsphere username",
"platform: vsphere password",
"platform: vsphere datacenter",
"platform: vsphere defaultDatastore",
"platform: vsphere folder",
"platform: vsphere resourcePool",
"platform: vsphere network",
"platform: vsphere cluster",
"platform: vsphere apiVIP",
"platform: vsphere ingressVIP",
"platform: vsphere diskType",
"platform: vsphere clusterOSImage",
"platform vsphere osDisk diskSizeGB",
"platform vsphere cpus",
"platform vsphere coresPerSocket",
"platform vsphere memoryMB",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 name: worker replicas: 3 platform: vsphere: 3 cpus: 2 coresPerSocket: 2 memoryMB: 8192 osDisk: diskSizeGB: 120 controlPlane: 4 name: master replicas: 3 platform: vsphere: 5 cpus: 4 coresPerSocket: 2 memoryMB: 16384 osDisk: diskSizeGB: 120 metadata: name: cluster 6 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: vsphere: vcenter: your.vcenter.server username: username password: password datacenter: datacenter defaultDatastore: datastore folder: folder resourcePool: resource_pool 7 diskType: thin 8 network: VM_Network cluster: vsphere_cluster_name 9 apiVIP: api_vip ingressVIP: ingress_vip fips: false pullSecret: '{\"auths\": ...}' sshKey: 'ssh-ed25519 AAAA...'",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"./openshift-install wait-for install-complete --log-level debug",
"./openshift-install create manifests --dir <installation_directory> 1",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {}",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}",
"kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resourses found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim: 1",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10",
"# listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.1000.0.0.0:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.0168.21.2101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.1020.2.3:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.1030.2.1:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2",
"curl https://<loadbalancer_ip_address>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl -I -L -H \"Host: console-openshift-console.apps.<cluster_name>.<base_domain>\" http://<load_balancer_front_end_IP_address>",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache",
"curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain>",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"<load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"<load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"curl https://api.<cluster_name>.<base_domain>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"VSphereCSIDriverOperatorCRUpgradeable: VMwareVSphereControllerUpgradeable: found existing unsupported csi.vsphere.vmware.com driver",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; master0.ocp4.example.com. IN A 192.168.1.97 5 master1.ocp4.example.com. IN A 192.168.1.98 6 master2.ocp4.example.com. IN A 192.168.1.99 7 ; worker0.ocp4.example.com. IN A 192.168.1.11 8 worker1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR master0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR master1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR master2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp server bootstrap bootstrap.ocp4.example.com:6443 check inter 1s backup 2 server master0 master0.ocp4.example.com:6443 check inter 1s server master1 master1.ocp4.example.com:6443 check inter 1s server master2 master2.ocp4.example.com:6443 check inter 1s listen machine-config-server-22623 3 bind *:22623 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 4 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s",
"dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1",
"api.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>",
"api-int.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>",
"random.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>",
"console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>",
"bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.5",
"5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.96",
"96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 name: worker replicas: 0 3 controlPlane: 4 name: master replicas: 3 5 metadata: name: test 6 platform: vsphere: vcenter: your.vcenter.server 7 username: username 8 password: password 9 datacenter: datacenter 10 defaultDatastore: datastore 11 folder: \"/<datacenter_name>/vm/<folder_name>/<subfolder_name>\" 12 resourcePool: \"/<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>\" 13 diskType: thin 14 fips: false 15 pullSecret: '{\"auths\": ...}' 16 sshKey: 'ssh-ed25519 AAAA...' 17",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"./openshift-install wait-for install-complete --log-level debug",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"jq -r .infraID <installation_directory>/metadata.json 1",
"openshift-vw9j6 1",
"{ \"ignition\": { \"config\": { \"merge\": [ { \"source\": \"<bootstrap_ignition_config_url>\", 1 \"verification\": {} } ] }, \"timeouts\": {}, \"version\": \"3.2.0\" }, \"networkd\": {}, \"passwd\": {}, \"storage\": {}, \"systemd\": {} }",
"base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64",
"base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64",
"base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64",
"export IPCFG=\"ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]\"",
"export IPCFG=\"ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8\"",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"mkdir USDHOME/clusterconfig",
"openshift-install create manifests --dir USDHOME/clusterconfig ? SSH Public Key ls USDHOME/clusterconfig/openshift/ 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml",
"variant: openshift version: 4.11.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign",
"# bootupctl status",
"Component EFI Installed: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 Update: At latest version",
"Component EFI Installed: grub2-efi-aa64-1:2.02-99.el8_4.1.aarch64,shim-aa64-15.4-2.el8_1.aarch64 Update: At latest version",
"# bootupctl adopt-and-update",
"Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64",
"# bootupctl update",
"Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64",
"variant: rhcos version: 1.1.0 systemd: units: - name: custom-bootupd-auto.service enabled: true contents: | [Unit] Description=Bootupd automatic update [Service] ExecStart=/usr/bin/bootupctl update RemainAfterExit=yes [Install] WantedBy=multi-user.target",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.24.0 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.24.0 master-1 Ready master 63m v1.24.0 master-2 Ready master 64m v1.24.0",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.24.0 master-1 Ready master 73m v1.24.0 master-2 Ready master 74m v1.24.0 worker-0 Ready worker 11m v1.24.0 worker-1 Ready worker 11m v1.24.0",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.11.0 True False False 19m baremetal 4.11.0 True False False 37m cloud-credential 4.11.0 True False False 40m cluster-autoscaler 4.11.0 True False False 37m config-operator 4.11.0 True False False 38m console 4.11.0 True False False 26m csi-snapshot-controller 4.11.0 True False False 37m dns 4.11.0 True False False 37m etcd 4.11.0 True False False 36m image-registry 4.11.0 True False False 31m ingress 4.11.0 True False False 30m insights 4.11.0 True False False 31m kube-apiserver 4.11.0 True False False 26m kube-controller-manager 4.11.0 True False False 36m kube-scheduler 4.11.0 True False False 36m kube-storage-version-migrator 4.11.0 True False False 37m machine-api 4.11.0 True False False 29m machine-approver 4.11.0 True False False 37m machine-config 4.11.0 True False False 36m marketplace 4.11.0 True False False 37m monitoring 4.11.0 True False False 29m network 4.11.0 True False False 38m node-tuning 4.11.0 True False False 37m openshift-apiserver 4.11.0 True False False 32m openshift-controller-manager 4.11.0 True False False 30m openshift-samples 4.11.0 True False False 32m operator-lifecycle-manager 4.11.0 True False False 37m operator-lifecycle-manager-catalog 4.11.0 True False False 37m operator-lifecycle-manager-packageserver 4.11.0 True False False 32m service-ca 4.11.0 True False False 38m storage 4.11.0 True False False 37m",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resourses found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim: 1",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.11.0 True False False 19m baremetal 4.11.0 True False False 37m cloud-credential 4.11.0 True False False 40m cluster-autoscaler 4.11.0 True False False 37m config-operator 4.11.0 True False False 38m console 4.11.0 True False False 26m csi-snapshot-controller 4.11.0 True False False 37m dns 4.11.0 True False False 37m etcd 4.11.0 True False False 36m image-registry 4.11.0 True False False 31m ingress 4.11.0 True False False 30m insights 4.11.0 True False False 31m kube-apiserver 4.11.0 True False False 26m kube-controller-manager 4.11.0 True False False 36m kube-scheduler 4.11.0 True False False 36m kube-storage-version-migrator 4.11.0 True False False 37m machine-api 4.11.0 True False False 29m machine-approver 4.11.0 True False False 37m machine-config 4.11.0 True False False 36m marketplace 4.11.0 True False False 37m monitoring 4.11.0 True False False 29m network 4.11.0 True False False 38m node-tuning 4.11.0 True False False 37m openshift-apiserver 4.11.0 True False False 32m openshift-controller-manager 4.11.0 True False False 30m openshift-samples 4.11.0 True False False 32m operator-lifecycle-manager 4.11.0 True False False 37m operator-lifecycle-manager-catalog 4.11.0 True False False 37m operator-lifecycle-manager-packageserver 4.11.0 True False False 32m service-ca 4.11.0 True False False 38m storage 4.11.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"govc cluster.rule.create -name openshift4-control-plane-group -dc MyDatacenter -cluster MyCluster -enable -anti-affinity master-0 master-1 master-2",
"govc cluster.rule.remove -name openshift4-control-plane-group -dc MyDatacenter -cluster MyCluster",
"[13-10-22 09:33:24] Reconfigure /MyDatacenter/host/MyCluster...OK",
"govc cluster.rule.create -name openshift4-control-plane-group -dc MyDatacenter -cluster MyOtherCluster -enable -anti-affinity master-0 master-1 master-2",
"VSphereCSIDriverOperatorCRUpgradeable: VMwareVSphereControllerUpgradeable: found existing unsupported csi.vsphere.vmware.com driver",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; master0.ocp4.example.com. IN A 192.168.1.97 5 master1.ocp4.example.com. IN A 192.168.1.98 6 master2.ocp4.example.com. IN A 192.168.1.99 7 ; worker0.ocp4.example.com. IN A 192.168.1.11 8 worker1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR master0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR master1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR master2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp server bootstrap bootstrap.ocp4.example.com:6443 check inter 1s backup 2 server master0 master0.ocp4.example.com:6443 check inter 1s server master1 master1.ocp4.example.com:6443 check inter 1s server master2 master2.ocp4.example.com:6443 check inter 1s listen machine-config-server-22623 3 bind *:22623 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 4 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s",
"dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1",
"api.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>",
"api-int.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>",
"random.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>",
"console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>",
"bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.5",
"5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.96",
"96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 name: worker replicas: 0 3 controlPlane: 4 name: master replicas: 3 5 metadata: name: test 6 platform: vsphere: vcenter: your.vcenter.server 7 username: username 8 password: password 9 datacenter: datacenter 10 defaultDatastore: datastore 11 folder: \"/<datacenter_name>/vm/<folder_name>/<subfolder_name>\" 12 resourcePool: \"/<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>\" 13 diskType: thin 14 fips: false 15 pullSecret: '{\"auths\": ...}' 16 sshKey: 'ssh-ed25519 AAAA...' 17",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"./openshift-install wait-for install-complete --log-level debug",
"./openshift-install create manifests --dir <installation_directory> 1",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {}",
"rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}",
"kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"jq -r .infraID <installation_directory>/metadata.json 1",
"openshift-vw9j6 1",
"{ \"ignition\": { \"config\": { \"merge\": [ { \"source\": \"<bootstrap_ignition_config_url>\", 1 \"verification\": {} } ] }, \"timeouts\": {}, \"version\": \"3.2.0\" }, \"networkd\": {}, \"passwd\": {}, \"storage\": {}, \"systemd\": {} }",
"base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64",
"base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64",
"base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64",
"export IPCFG=\"ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]\"",
"export IPCFG=\"ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8\"",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"mkdir USDHOME/clusterconfig",
"openshift-install create manifests --dir USDHOME/clusterconfig ? SSH Public Key ls USDHOME/clusterconfig/openshift/ 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml",
"variant: openshift version: 4.11.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign",
"# bootupctl status",
"Component EFI Installed: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 Update: At latest version",
"Component EFI Installed: grub2-efi-aa64-1:2.02-99.el8_4.1.aarch64,shim-aa64-15.4-2.el8_1.aarch64 Update: At latest version",
"# bootupctl adopt-and-update",
"Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64",
"# bootupctl update",
"Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64",
"variant: rhcos version: 1.1.0 systemd: units: - name: custom-bootupd-auto.service enabled: true contents: | [Unit] Description=Bootupd automatic update [Service] ExecStart=/usr/bin/bootupctl update RemainAfterExit=yes [Install] WantedBy=multi-user.target",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.24.0 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.24.0 master-1 Ready master 63m v1.24.0 master-2 Ready master 64m v1.24.0",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.24.0 master-1 Ready master 73m v1.24.0 master-2 Ready master 74m v1.24.0 worker-0 Ready worker 11m v1.24.0 worker-1 Ready worker 11m v1.24.0",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.11.0 True False False 19m baremetal 4.11.0 True False False 37m cloud-credential 4.11.0 True False False 40m cluster-autoscaler 4.11.0 True False False 37m config-operator 4.11.0 True False False 38m console 4.11.0 True False False 26m csi-snapshot-controller 4.11.0 True False False 37m dns 4.11.0 True False False 37m etcd 4.11.0 True False False 36m image-registry 4.11.0 True False False 31m ingress 4.11.0 True False False 30m insights 4.11.0 True False False 31m kube-apiserver 4.11.0 True False False 26m kube-controller-manager 4.11.0 True False False 36m kube-scheduler 4.11.0 True False False 36m kube-storage-version-migrator 4.11.0 True False False 37m machine-api 4.11.0 True False False 29m machine-approver 4.11.0 True False False 37m machine-config 4.11.0 True False False 36m marketplace 4.11.0 True False False 37m monitoring 4.11.0 True False False 29m network 4.11.0 True False False 38m node-tuning 4.11.0 True False False 37m openshift-apiserver 4.11.0 True False False 32m openshift-controller-manager 4.11.0 True False False 30m openshift-samples 4.11.0 True False False 32m operator-lifecycle-manager 4.11.0 True False False 37m operator-lifecycle-manager-catalog 4.11.0 True False False 37m operator-lifecycle-manager-packageserver 4.11.0 True False False 32m service-ca 4.11.0 True False False 38m storage 4.11.0 True False False 37m",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.11.0 True False False 19m baremetal 4.11.0 True False False 37m cloud-credential 4.11.0 True False False 40m cluster-autoscaler 4.11.0 True False False 37m config-operator 4.11.0 True False False 38m console 4.11.0 True False False 26m csi-snapshot-controller 4.11.0 True False False 37m dns 4.11.0 True False False 37m etcd 4.11.0 True False False 36m image-registry 4.11.0 True False False 31m ingress 4.11.0 True False False 30m insights 4.11.0 True False False 31m kube-apiserver 4.11.0 True False False 26m kube-controller-manager 4.11.0 True False False 36m kube-scheduler 4.11.0 True False False 36m kube-storage-version-migrator 4.11.0 True False False 37m machine-api 4.11.0 True False False 29m machine-approver 4.11.0 True False False 37m machine-config 4.11.0 True False False 36m marketplace 4.11.0 True False False 37m monitoring 4.11.0 True False False 29m network 4.11.0 True False False 38m node-tuning 4.11.0 True False False 37m openshift-apiserver 4.11.0 True False False 32m openshift-controller-manager 4.11.0 True False False 30m openshift-samples 4.11.0 True False False 32m operator-lifecycle-manager 4.11.0 True False False 37m operator-lifecycle-manager-catalog 4.11.0 True False False 37m operator-lifecycle-manager-packageserver 4.11.0 True False False 32m service-ca 4.11.0 True False False 38m storage 4.11.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"govc cluster.rule.create -name openshift4-control-plane-group -dc MyDatacenter -cluster MyCluster -enable -anti-affinity master-0 master-1 master-2",
"govc cluster.rule.remove -name openshift4-control-plane-group -dc MyDatacenter -cluster MyCluster",
"[13-10-22 09:33:24] Reconfigure /MyDatacenter/host/MyCluster...OK",
"govc cluster.rule.create -name openshift4-control-plane-group -dc MyDatacenter -cluster MyOtherCluster -enable -anti-affinity master-0 master-1 master-2",
"VSphereCSIDriverOperatorCRUpgradeable: VMwareVSphereControllerUpgradeable: found existing unsupported csi.vsphere.vmware.com driver",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"certs ├── lin │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 ├── mac │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 └── win ├── 108f4d17.0.crt ├── 108f4d17.r1.crl ├── 7e757f6a.0.crt ├── 8e4f8471.0.crt └── 8e4f8471.r0.crl 3 directories, 15 files",
"cp certs/lin/* /etc/pki/ca-trust/source/anchors",
"update-ca-trust extract",
"./openshift-install create install-config --dir <installation_directory> 1",
"platform: vsphere: clusterOSImage: http://mirror.example.com/images/rhcos-43.81.201912131630.0-vmware.x86_64.ova?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d",
"pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'",
"additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----",
"imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"platform: vsphere vCenter",
"platform: vsphere username",
"platform: vsphere password",
"platform: vsphere datacenter",
"platform: vsphere defaultDatastore",
"platform: vsphere folder",
"platform: vsphere resourcePool",
"platform: vsphere network",
"platform: vsphere cluster",
"platform: vsphere apiVIP",
"platform: vsphere ingressVIP",
"platform: vsphere diskType",
"platform: vsphere clusterOSImage",
"platform vsphere osDisk diskSizeGB",
"platform vsphere cpus",
"platform vsphere coresPerSocket",
"platform vsphere memoryMB",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 name: worker replicas: 3 platform: vsphere: 3 cpus: 2 coresPerSocket: 2 memoryMB: 8192 osDisk: diskSizeGB: 120 controlPlane: 4 name: master replicas: 3 platform: vsphere: 5 cpus: 4 coresPerSocket: 2 memoryMB: 16384 osDisk: diskSizeGB: 120 metadata: name: cluster 6 platform: vsphere: vcenter: your.vcenter.server username: username password: password datacenter: datacenter defaultDatastore: datastore folder: folder resourcePool: resource_pool 7 diskType: thin 8 network: VM_Network cluster: vsphere_cluster_name 9 apiVIP: api_vip ingressVIP: ingress_vip clusterOSImage: http://mirror.example.com/images/rhcos-47.83.202103221318-0-vmware.x86_64.ova 10 fips: false pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 11 sshKey: 'ssh-ed25519 AAAA...' additionalTrustBundle: | 12 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 13 - mirrors: - <mirror_host_name>:<mirror_port>/<repo_name>/release source: <source_image_1> - mirrors: - <mirror_host_name>:<mirror_port>/<repo_name>/release-images source: <source_image_2>",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"./openshift-install wait-for install-complete --log-level debug",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resourses found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim: 1",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m",
"Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10",
"# listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.1000.0.0.0:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.0168.21.2101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.1020.2.3:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.1030.2.1:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2",
"curl https://<loadbalancer_ip_address>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl -I -L -H \"Host: console-openshift-console.apps.<cluster_name>.<base_domain>\" http://<load_balancer_front_end_IP_address>",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache",
"curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain>",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"<load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"<load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"curl https://api.<cluster_name>.<base_domain>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"VSphereCSIDriverOperatorCRUpgradeable: VMwareVSphereControllerUpgradeable: found existing unsupported csi.vsphere.vmware.com driver",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; master0.ocp4.example.com. IN A 192.168.1.97 5 master1.ocp4.example.com. IN A 192.168.1.98 6 master2.ocp4.example.com. IN A 192.168.1.99 7 ; worker0.ocp4.example.com. IN A 192.168.1.11 8 worker1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR master0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR master1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR master2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp server bootstrap bootstrap.ocp4.example.com:6443 check inter 1s backup 2 server master0 master0.ocp4.example.com:6443 check inter 1s server master1 master1.ocp4.example.com:6443 check inter 1s server master2 master2.ocp4.example.com:6443 check inter 1s listen machine-config-server-22623 3 bind *:22623 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 4 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s",
"dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1",
"api.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>",
"api-int.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>",
"random.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>",
"console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>",
"bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.5",
"5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.96",
"96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 name: worker replicas: 0 3 controlPlane: 4 name: master replicas: 3 5 metadata: name: test 6 platform: vsphere: vcenter: your.vcenter.server 7 username: username 8 password: password 9 datacenter: datacenter 10 defaultDatastore: datastore 11 folder: \"/<datacenter_name>/vm/<folder_name>/<subfolder_name>\" 12 resourcePool: \"/<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>\" 13 diskType: thin 14 fips: false 15 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 16 sshKey: 'ssh-ed25519 AAAA...' 17 additionalTrustBundle: | 18 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 19 - mirrors: - <mirror_host_name>:<mirror_port>/<repo_name>/release source: <source_image_1> - mirrors: - <mirror_host_name>:<mirror_port>/<repo_name>/release-images source: <source_image_2>",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"./openshift-install wait-for install-complete --log-level debug",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"variant: openshift version: 4.11.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 storage: files: - path: /etc/chrony.conf mode: 0644 3 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst 4 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony",
"butane 99-worker-chrony.bu -o 99-worker-chrony.yaml",
"oc apply -f ./99-worker-chrony.yaml",
"jq -r .infraID <installation_directory>/metadata.json 1",
"openshift-vw9j6 1",
"{ \"ignition\": { \"config\": { \"merge\": [ { \"source\": \"<bootstrap_ignition_config_url>\", 1 \"verification\": {} } ] }, \"timeouts\": {}, \"version\": \"3.2.0\" }, \"networkd\": {}, \"passwd\": {}, \"storage\": {}, \"systemd\": {} }",
"base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64",
"base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64",
"base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64",
"export IPCFG=\"ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]\"",
"export IPCFG=\"ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8\"",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"mkdir USDHOME/clusterconfig",
"openshift-install create manifests --dir USDHOME/clusterconfig ? SSH Public Key ls USDHOME/clusterconfig/openshift/ 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml",
"variant: openshift version: 4.11.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign",
"# bootupctl status",
"Component EFI Installed: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 Update: At latest version",
"Component EFI Installed: grub2-efi-aa64-1:2.02-99.el8_4.1.aarch64,shim-aa64-15.4-2.el8_1.aarch64 Update: At latest version",
"# bootupctl adopt-and-update",
"Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64",
"# bootupctl update",
"Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64",
"variant: rhcos version: 1.1.0 systemd: units: - name: custom-bootupd-auto.service enabled: true contents: | [Unit] Description=Bootupd automatic update [Service] ExecStart=/usr/bin/bootupctl update RemainAfterExit=yes [Install] WantedBy=multi-user.target",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.24.0 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.24.0 master-1 Ready master 63m v1.24.0 master-2 Ready master 64m v1.24.0",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.24.0 master-1 Ready master 73m v1.24.0 master-2 Ready master 74m v1.24.0 worker-0 Ready worker 11m v1.24.0 worker-1 Ready worker 11m v1.24.0",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.11.0 True False False 19m baremetal 4.11.0 True False False 37m cloud-credential 4.11.0 True False False 40m cluster-autoscaler 4.11.0 True False False 37m config-operator 4.11.0 True False False 38m console 4.11.0 True False False 26m csi-snapshot-controller 4.11.0 True False False 37m dns 4.11.0 True False False 37m etcd 4.11.0 True False False 36m image-registry 4.11.0 True False False 31m ingress 4.11.0 True False False 30m insights 4.11.0 True False False 31m kube-apiserver 4.11.0 True False False 26m kube-controller-manager 4.11.0 True False False 36m kube-scheduler 4.11.0 True False False 36m kube-storage-version-migrator 4.11.0 True False False 37m machine-api 4.11.0 True False False 29m machine-approver 4.11.0 True False False 37m machine-config 4.11.0 True False False 36m marketplace 4.11.0 True False False 37m monitoring 4.11.0 True False False 29m network 4.11.0 True False False 38m node-tuning 4.11.0 True False False 37m openshift-apiserver 4.11.0 True False False 32m openshift-controller-manager 4.11.0 True False False 30m openshift-samples 4.11.0 True False False 32m operator-lifecycle-manager 4.11.0 True False False 37m operator-lifecycle-manager-catalog 4.11.0 True False False 37m operator-lifecycle-manager-packageserver 4.11.0 True False False 32m service-ca 4.11.0 True False False 38m storage 4.11.0 True False False 37m",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resourses found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim: 1",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.11.0 True False False 19m baremetal 4.11.0 True False False 37m cloud-credential 4.11.0 True False False 40m cluster-autoscaler 4.11.0 True False False 37m config-operator 4.11.0 True False False 38m console 4.11.0 True False False 26m csi-snapshot-controller 4.11.0 True False False 37m dns 4.11.0 True False False 37m etcd 4.11.0 True False False 36m image-registry 4.11.0 True False False 31m ingress 4.11.0 True False False 30m insights 4.11.0 True False False 31m kube-apiserver 4.11.0 True False False 26m kube-controller-manager 4.11.0 True False False 36m kube-scheduler 4.11.0 True False False 36m kube-storage-version-migrator 4.11.0 True False False 37m machine-api 4.11.0 True False False 29m machine-approver 4.11.0 True False False 37m machine-config 4.11.0 True False False 36m marketplace 4.11.0 True False False 37m monitoring 4.11.0 True False False 29m network 4.11.0 True False False 38m node-tuning 4.11.0 True False False 37m openshift-apiserver 4.11.0 True False False 32m openshift-controller-manager 4.11.0 True False False 30m openshift-samples 4.11.0 True False False 32m operator-lifecycle-manager 4.11.0 True False False 37m operator-lifecycle-manager-catalog 4.11.0 True False False 37m operator-lifecycle-manager-packageserver 4.11.0 True False False 32m service-ca 4.11.0 True False False 38m storage 4.11.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"govc cluster.rule.create -name openshift4-control-plane-group -dc MyDatacenter -cluster MyCluster -enable -anti-affinity master-0 master-1 master-2",
"govc cluster.rule.remove -name openshift4-control-plane-group -dc MyDatacenter -cluster MyCluster",
"[13-10-22 09:33:24] Reconfigure /MyDatacenter/host/MyCluster...OK",
"govc cluster.rule.create -name openshift4-control-plane-group -dc MyDatacenter -cluster MyOtherCluster -enable -anti-affinity master-0 master-1 master-2",
"./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2",
"oc scale deployment/vsphere-problem-detector-operator --replicas=0 -n openshift-cluster-storage-operator",
"oc wait pods -l name=vsphere-problem-detector-operator --for=delete --timeout=5m -n openshift-cluster-storage-operator",
"oc scale deployment/vsphere-problem-detector-operator --replicas=1 -n openshift-cluster-storage-operator",
"oc delete -n openshift-cluster-storage-operator cm vsphere-problem-detector-lock",
"oc get event -n openshift-cluster-storage-operator --sort-by={.metadata.creationTimestamp}",
"16m Normal Started pod/vsphere-problem-detector-operator-xxxxx Started container vsphere-problem-detector 16m Normal Created pod/vsphere-problem-detector-operator-xxxxx Created container vsphere-problem-detector 16m Normal LeaderElection configmap/vsphere-problem-detector-lock vsphere-problem-detector-operator-xxxxx became leader",
"oc logs deployment/vsphere-problem-detector-operator -n openshift-cluster-storage-operator",
"I0108 08:32:28.445696 1 operator.go:209] ClusterInfo passed I0108 08:32:28.451029 1 datastore.go:57] CheckStorageClasses checked 1 storage classes, 0 problems found I0108 08:32:28.451047 1 operator.go:209] CheckStorageClasses passed I0108 08:32:28.452160 1 operator.go:209] CheckDefaultDatastore passed I0108 08:32:28.480648 1 operator.go:271] CheckNodeDiskUUID:<host_name> passed I0108 08:32:28.480685 1 operator.go:271] CheckNodeProviderID:<host_name> passed",
"oc get nodes -o custom-columns=NAME:.metadata.name,PROVIDER_ID:.spec.providerID,UUID:.status.nodeInfo.systemUUID",
"/var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[<datastore>] 00000000-0000-0000-0000-000000000000/<cluster_id>-dynamic-pvc-00000000-0000-0000-0000-000000000000.vmdk"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/installing/installing-on-vsphere |
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/planning_your_deployment/providing-feedback-on-red-hat-documentation_rhodf |
Chapter 19. Red Hat Enterprise Linux Atomic Host 7.5.4 | Chapter 19. Red Hat Enterprise Linux Atomic Host 7.5.4 19.1. Atomic Host OStree update : New Tree Version: 7.5.4 (hash: 519fd3f7efdfa5d0f6ecb0ab3cba84f95dbfa6b59e8a7176f3158adfaaa78334) Changes since Tree Version 7.5.3 (hash: 03d524a16c8d76897f097565ca7452c1a5e2541f8c2beab145adf622499c7c64) Updated packages : cockpit-ostree-176-1.el7 19.2. Extras Updated packages : dpdk-17.11-13.el7 cockpit-176-2.el7 atomic-1.22.1-25.git5a342e3.el7 podman-0.9.2-5.git37a2afe.el7_5 docker-1.13.1-75.git8633870.el7_5 runc-1.0.0-52.dev.git70ca035.el7_5 19.2.1. Container Images Updated : Red Hat Enterprise Linux Atomic cockpit-ws Container Image (rhel7/cockpit-ws) Red Hat Enterprise Linux Atomic Identity Management Server Container Image (rhel7/ipa-server) Red Hat Enterprise Linux 7 Init Container Image (rhel7/rhel7-init) Red Hat Enterprise Linux Container Image (rhel7.5, rhel7, rhel7/rhel, rhel) Red Hat Enterprise Linux Atomic Image (rhel-atomic, rhel7-atomic, rhel7/rhel-atomic) Red Hat Enterprise Linux Atomic open-vm-tools Container Image (rhel7/open-vm-tools) Red Hat Enterprise Linux Atomic Net-SNMP Container Image (rhel7/net-snmp) Red Hat Enterprise Linux Atomic rsyslog Container Image (rhel7/rsyslog) Red Hat Enterprise Linux Atomic SSSD Container Image (rhel7/sssd) Red Hat Enterprise Linux Atomic sadc Container Image (rhel7/sadc) Red Hat Enterprise Linux Atomic Tools Container Image (rhel7/rhel-tools) Red Hat Enterprise Linux Atomic Support Tools Container Image (rhel7/support-tools) Red Hat Enterprise Linux Atomic etcd Container Image (rhel7/etcd) Red Hat Enterprise Linux Atomic flannel Container Image (rhel7/flannel) Red Hat Enterprise Linux Atomic openscap Container Image (rhel7/openscap) 19.3. New Features Buildah is now part of the default install With RHEL Atomic Host 7.5.4, Buildah is part of the default installation. You no longer need to install it using package layering. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_atomic_host/7/html/release_notes/red_hat_enterprise_linux_atomic_host_7_5_4 |
Chapter 5. Developing Operators | Chapter 5. Developing Operators 5.1. About the Operator SDK The Operator Framework is an open source toolkit to manage Kubernetes native applications, called Operators , in an effective, automated, and scalable way. Operators take advantage of Kubernetes extensibility to deliver the automation advantages of cloud services, like provisioning, scaling, and backup and restore, while being able to run anywhere that Kubernetes can run. Operators make it easy to manage complex, stateful applications on top of Kubernetes. However, writing an Operator today can be difficult because of challenges such as using low-level APIs, writing boilerplate, and a lack of modularity, which leads to duplication. The Operator SDK, a component of the Operator Framework, provides a command-line interface (CLI) tool that Operator developers can use to build, test, and deploy an Operator. Important The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of OpenShift Container Platform. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future OpenShift Container Platform releases. The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with OpenShift Container Platform 4.17 to maintain their projects and create Operator releases targeting newer versions of OpenShift Container Platform. The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs. The base image for Ansible-based Operator projects The base image for Helm-based Operator projects For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework) . Why use the Operator SDK? The Operator SDK simplifies this process of building Kubernetes-native applications, which can require deep, application-specific operational knowledge. The Operator SDK not only lowers that barrier, but it also helps reduce the amount of boilerplate code required for many common management capabilities, such as metering or monitoring. 
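To make the boilerplate reduction concrete, the following minimal sketch shows the general shape of a controller-runtime based Operator: a Manager that owns shared caches and clients, and a reconciler registered against the resource it watches. This is an illustration only, not code that the Operator SDK generates verbatim; the ExampleReconciler type and the choice of a ConfigMap as the watched resource are hypothetical placeholders.

package main

import (
	"context"
	"os"

	corev1 "k8s.io/api/core/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// ExampleReconciler is a placeholder reconciler; the Operator SDK scaffolds a
// similar type for each API created with `operator-sdk create api`.
type ExampleReconciler struct {
	client.Client
}

// Reconcile is called for every event on a watched object and is where the
// Operator converges actual cluster state toward the desired state.
func (r *ExampleReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// Real logic would fetch the object named by req and create, update, or
	// delete dependent resources here.
	return ctrl.Result{}, nil
}

func main() {
	// The Manager provides shared caches, clients, leader election, and metrics.
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{})
	if err != nil {
		os.Exit(1)
	}

	// Register the reconciler; For() declares the primary resource to watch.
	// A core ConfigMap is used only to keep this sketch self-contained.
	if err := ctrl.NewControllerManagedBy(mgr).
		For(&corev1.ConfigMap{}).
		Complete(&ExampleReconciler{Client: mgr.GetClient()}); err != nil {
		os.Exit(1)
	}

	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		os.Exit(1)
	}
}

Projects scaffolded by the SDK follow this same general pattern, with the API types, RBAC markers, and deployment manifests generated for you.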
The Operator SDK is a framework that uses the controller-runtime library to make writing Operators easier by providing the following features: High-level APIs and abstractions to write the operational logic more intuitively Tools for scaffolding and code generation to quickly bootstrap a new project Integration with Operator Lifecycle Manager (OLM) to streamline packaging, installing, and running Operators on a cluster Extensions to cover common Operator use cases Metrics set up automatically in any generated Go-based Operator for use on clusters where the Prometheus Operator is deployed Operator authors with cluster administrator access to a Kubernetes-based cluster (such as OpenShift Container Platform) can use the Operator SDK CLI to develop their own Operators based on Go, Ansible, Java, or Helm. Kubebuilder is embedded into the Operator SDK as the scaffolding solution for Go-based Operators, which means existing Kubebuilder projects can be used as is with the Operator SDK and continue to work. Note OpenShift Container Platform 4.17 supports Operator SDK 1.36.1. 5.1.1. What are Operators? For an overview about basic Operator concepts and terminology, see Understanding Operators . 5.1.2. Development workflow The Operator SDK provides the following workflow to develop a new Operator: Create an Operator project by using the Operator SDK command-line interface (CLI). Define new resource APIs by adding custom resource definitions (CRDs). Specify resources to watch by using the Operator SDK API. Define the Operator reconciling logic in a designated handler and use the Operator SDK API to interact with resources. Use the Operator SDK CLI to build and generate the Operator deployment manifests. Figure 5.1. Operator SDK workflow At a high level, an Operator that uses the Operator SDK processes events for watched resources in an Operator author-defined handler and takes actions to reconcile the state of the application. 5.1.3. Additional resources Certified Operator Build Guide 5.2. Installing the Operator SDK CLI The Operator SDK provides a command-line interface (CLI) tool that Operator developers can use to build, test, and deploy an Operator. You can install the Operator SDK CLI on your workstation so that you are prepared to start authoring your own Operators. Important The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of OpenShift Container Platform. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future OpenShift Container Platform releases. The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with OpenShift Container Platform 4.17 to maintain their projects and create Operator releases targeting newer versions of OpenShift Container Platform. The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs. 
The base image for Ansible-based Operator projects The base image for Helm-based Operator projects For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework) . Operator authors with cluster administrator access to a Kubernetes-based cluster, such as OpenShift Container Platform, can use the Operator SDK CLI to develop their own Operators based on Go, Ansible, Java, or Helm. Kubebuilder is embedded into the Operator SDK as the scaffolding solution for Go-based Operators, which means existing Kubebuilder projects can be used as is with the Operator SDK and continue to work. Note OpenShift Container Platform 4.17 supports Operator SDK 1.36.1. 5.2.1. Installing the Operator SDK CLI on Linux You can install the Operator SDK CLI tool on Linux. Prerequisites Go v1.19+ docker v17.03+, podman v1.9.3+, or buildah v1.7+ Procedure Navigate to the OpenShift mirror site . From the latest 4.17 directory, download the latest version of the tarball for Linux. Unpack the archive: USD tar xvf operator-sdk-v1.36.1-ocp-linux-x86_64.tar.gz Make the file executable: USD chmod +x operator-sdk Move the extracted operator-sdk binary to a directory that is on your PATH . Tip To check your PATH : USD echo USDPATH USD sudo mv ./operator-sdk /usr/local/bin/operator-sdk Verification After you install the Operator SDK CLI, verify that it is available: USD operator-sdk version Example output operator-sdk version: "v1.36.1-ocp", ... 5.2.2. Installing the Operator SDK CLI on macOS You can install the Operator SDK CLI tool on macOS. Prerequisites Go v1.19+ docker v17.03+, podman v1.9.3+, or buildah v1.7+ Procedure For the amd64 and arm64 architectures, navigate to the OpenShift mirror site for the amd64 architecture and OpenShift mirror site for the arm64 architecture respectively. From the latest 4.17 directory, download the latest version of the tarball for macOS. Unpack the Operator SDK archive for amd64 architecture by running the following command: USD tar xvf operator-sdk-v1.36.1-ocp-darwin-x86_64.tar.gz Unpack the Operator SDK archive for arm64 architecture by running the following command: USD tar xvf operator-sdk-v1.36.1-ocp-darwin-aarch64.tar.gz Make the file executable by running the following command: USD chmod +x operator-sdk Move the extracted operator-sdk binary to a directory that is on your PATH by running the following command: Tip Check your PATH by running the following command: USD echo USDPATH USD sudo mv ./operator-sdk /usr/local/bin/operator-sdk Verification After you install the Operator SDK CLI, verify that it is available by running the following command: USD operator-sdk version Example output operator-sdk version: "v1.36.1-ocp", ... 5.3. Go-based Operators 5.3.1. Getting started with Operator SDK for Go-based Operators To demonstrate the basics of setting up and running a Go-based Operator using tools and libraries provided by the Operator SDK, Operator developers can build an example Go-based Operator for Memcached, a distributed key-value store, and deploy it to a cluster.
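The procedure that follows creates the sample custom resource with oc apply. Purely for reference, the sketch below creates an equivalent Memcached object programmatically through the controller-runtime client; it is not part of the documented workflow. The group cache.example.com/v1, the kind Memcached, and the namespace memcached-operator-system come from the procedure below, while the spec.size field assumes the MemcachedSpec defined later in the tutorial.

package main

import (
	"context"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

func main() {
	// Build a client from the local kubeconfig, the same credentials oc uses.
	c, err := client.New(ctrl.GetConfigOrDie(), client.Options{})
	if err != nil {
		panic(err)
	}

	// Describe a Memcached custom resource in the cache.example.com/v1 API.
	cr := &unstructured.Unstructured{}
	cr.SetGroupVersionKind(schema.GroupVersionKind{
		Group:   "cache.example.com",
		Version: "v1",
		Kind:    "Memcached",
	})
	cr.SetName("memcached-sample")
	cr.SetNamespace("memcached-operator-system")

	// spec.size is the field the tutorial's controller reconciles into a
	// Deployment replica count.
	if err := unstructured.SetNestedField(cr.Object, int64(3), "spec", "size"); err != nil {
		panic(err)
	}

	// Roughly equivalent to: oc apply -f config/samples/cache_v1_memcached.yaml
	if err := c.Create(context.TODO(), cr); err != nil {
		panic(err)
	}
}

Whichever way the object is created, the controller observes it and reconciles the cluster toward the requested state.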
Important The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of OpenShift Container Platform. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future OpenShift Container Platform releases. The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with OpenShift Container Platform 4.17 to maintain their projects and create Operator releases targeting newer versions of OpenShift Container Platform. The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs. The base image for Ansible-based Operator projects The base image for Helm-based Operator projects For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework) . 5.3.1.1. Prerequisites Operator SDK CLI installed OpenShift CLI ( oc ) 4.17+ installed Go 1.21+ Logged into an OpenShift Container Platform 4.17 cluster with oc with an account that has cluster-admin permissions To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret Additional resources Installing the Operator SDK CLI Getting started with the OpenShift CLI 5.3.1.2. Creating and deploying Go-based Operators You can build and deploy a simple Go-based Operator for Memcached by using the Operator SDK. Procedure Create a project. Create your project directory: USD mkdir memcached-operator Change into the project directory: USD cd memcached-operator Run the operator-sdk init command to initialize the project: USD operator-sdk init \ --domain=example.com \ --repo=github.com/example-inc/memcached-operator The command uses the Go plugin by default. Create an API. Create a simple Memcached API: USD operator-sdk create api \ --resource=true \ --controller=true \ --group cache \ --version v1 \ --kind Memcached Build and push the Operator image. Use the default Makefile targets to build and push your Operator. Set IMG with a pull spec for your image that uses a registry you can push to: USD make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag> Run the Operator. Install the CRD: USD make install Deploy the project to the cluster. Set IMG to the image that you pushed: USD make deploy IMG=<registry>/<user>/<image_name>:<tag> Create a sample custom resource (CR). Create a sample CR: USD oc apply -f config/samples/cache_v1_memcached.yaml \ -n memcached-operator-system Watch for the CR to reconcile the Operator: USD oc logs deployment.apps/memcached-operator-controller-manager \ -c manager \ -n memcached-operator-system Delete a CR. Delete a CR by running the following command: USD oc delete -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system Clean up. 
Run the following command to clean up the resources that have been created as part of this procedure: USD make undeploy 5.3.1.3. Next steps See Operator SDK tutorial for Go-based Operators for a more in-depth walkthrough on building a Go-based Operator. 5.3.2. Operator SDK tutorial for Go-based Operators Operator developers can take advantage of Go programming language support in the Operator SDK to build an example Go-based Operator for Memcached, a distributed key-value store, and manage its lifecycle. Important The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of OpenShift Container Platform. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future OpenShift Container Platform releases. The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with OpenShift Container Platform 4.17 to maintain their projects and create Operator releases targeting newer versions of OpenShift Container Platform. The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs. The base image for Ansible-based Operator projects The base image for Helm-based Operator projects For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework) . This process is accomplished using two centerpieces of the Operator Framework: Operator SDK The operator-sdk CLI tool and controller-runtime library API Operator Lifecycle Manager (OLM) Installation, upgrade, and role-based access control (RBAC) of Operators on a cluster Note This tutorial goes into greater detail than Getting started with Operator SDK for Go-based Operators . 5.3.2.1. Prerequisites Operator SDK CLI installed OpenShift CLI ( oc ) 4.17+ installed Go 1.21+ Logged into an OpenShift Container Platform 4.17 cluster with oc with an account that has cluster-admin permissions To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret Additional resources Installing the Operator SDK CLI Getting started with the OpenShift CLI 5.3.2.2. Creating a project Use the Operator SDK CLI to create a project called memcached-operator . Procedure Create a directory for the project: USD mkdir -p USDHOME/projects/memcached-operator Change to the directory: USD cd USDHOME/projects/memcached-operator Activate support for Go modules: USD export GO111MODULE=on Run the operator-sdk init command to initialize the project: USD operator-sdk init \ --domain=example.com \ --repo=github.com/example-inc/memcached-operator Note The operator-sdk init command uses the Go plugin by default. The operator-sdk init command generates a go.mod file to be used with Go modules .
The --repo flag is required when creating a project outside of USDGOPATH/src/ , because generated files require a valid module path. 5.3.2.2.1. PROJECT file Among the files generated by the operator-sdk init command is a Kubebuilder PROJECT file. Subsequent operator-sdk commands, as well as help output, that are run from the project root read this file and are aware that the project type is Go. For example: domain: example.com layout: - go.kubebuilder.io/v3 projectName: memcached-operator repo: github.com/example-inc/memcached-operator version: "3" plugins: manifests.sdk.operatorframework.io/v2: {} scorecard.sdk.operatorframework.io/v2: {} sdk.x-openshift.io/v1: {} 5.3.2.2.2. About the Manager The main program for the Operator is the main.go file, which initializes and runs the Manager . The Manager automatically registers the Scheme for all custom resource (CR) API definitions and sets up and runs controllers and webhooks. The Manager can restrict the namespace that all controllers watch for resources: mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: namespace}) By default, the Manager watches the namespace where the Operator runs. To watch all namespaces, you can leave the namespace option empty: mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: ""}) You can also use the MultiNamespacedCacheBuilder function to watch a specific set of namespaces: var namespaces []string 1 mgr, err := ctrl.NewManager(cfg, manager.Options{ 2 NewCache: cache.MultiNamespacedCacheBuilder(namespaces), }) 1 List of namespaces. 2 Creates a Cmd struct to provide shared dependencies and start components. 5.3.2.2.3. About multi-group APIs Before you create an API and controller, consider whether your Operator requires multiple API groups. This tutorial covers the default case of a single group API, but to change the layout of your project to support multi-group APIs, you can run the following command: USD operator-sdk edit --multigroup=true This command updates the PROJECT file, which should look like the following example: domain: example.com layout: go.kubebuilder.io/v3 multigroup: true ... For multi-group projects, the API Go type files are created in the apis/<group>/<version>/ directory, and the controllers are created in the controllers/<group>/ directory. The Dockerfile is then updated accordingly. Additional resource For more details on migrating to a multi-group project, see the Kubebuilder documentation . 5.3.2.3. Creating an API and controller Use the Operator SDK CLI to create a custom resource definition (CRD) API and controller. Procedure Run the following command to create an API with group cache , version, v1 , and kind Memcached : USD operator-sdk create api \ --group=cache \ --version=v1 \ --kind=Memcached When prompted, enter y for creating both the resource and controller: Create Resource [y/n] y Create Controller [y/n] y Example output Writing scaffold for you to edit... api/v1/memcached_types.go controllers/memcached_controller.go ... This process generates the Memcached resource API at api/v1/memcached_types.go and the controller at controllers/memcached_controller.go . 5.3.2.3.1. Defining the API Define the API for the Memcached custom resource (CR). 
Procedure Modify the Go type definitions at api/v1/memcached_types.go to have the following spec and status : // MemcachedSpec defines the desired state of Memcached type MemcachedSpec struct { // +kubebuilder:validation:Minimum=0 // Size is the size of the memcached deployment Size int32 `json:"size"` } // MemcachedStatus defines the observed state of Memcached type MemcachedStatus struct { // Nodes are the names of the memcached pods Nodes []string `json:"nodes"` } Update the generated code for the resource type: USD make generate Tip After you modify a *_types.go file, you must run the make generate command to update the generated code for that resource type. The above Makefile target invokes the controller-gen utility to update the api/v1/zz_generated.deepcopy.go file. This ensures your API Go type definitions implement the runtime.Object interface that all Kind types must implement. 5.3.2.3.2. Generating CRD manifests After the API is defined with spec and status fields and custom resource definition (CRD) validation markers, you can generate CRD manifests. Procedure Run the following command to generate and update CRD manifests: USD make manifests This Makefile target invokes the controller-gen utility to generate the CRD manifests in the config/crd/bases/cache.example.com_memcacheds.yaml file. 5.3.2.3.2.1. About OpenAPI validation OpenAPIv3 schemas are added to CRD manifests in the spec.validation block when the manifests are generated. This validation block allows Kubernetes to validate the properties in a Memcached custom resource (CR) when it is created or updated. Markers, or annotations, are available to configure validations for your API. These markers always have a +kubebuilder:validation prefix. Additional resources For more details on the usage of markers in API code, see the following Kubebuilder documentation: CRD generation Markers List of OpenAPIv3 validation markers For more details about OpenAPIv3 validation schemas in CRDs, see the Kubernetes documentation . 5.3.2.4. Implementing the controller After creating a new API and controller, you can implement the controller logic. Procedure For this example, replace the generated controller file controllers/memcached_controller.go with following example implementation: Example 5.1. Example memcached_controller.go /* | [
"tar xvf operator-sdk-v1.36.1-ocp-linux-x86_64.tar.gz",
"chmod +x operator-sdk",
"echo USDPATH",
"sudo mv ./operator-sdk /usr/local/bin/operator-sdk",
"operator-sdk version",
"operator-sdk version: \"v1.36.1-ocp\",",
"tar xvf operator-sdk-v1.36.1-ocp-darwin-x86_64.tar.gz",
"tar xvf operator-sdk-v1.36.1-ocp-darwin-aarch64.tar.gz",
"chmod +x operator-sdk",
"echo USDPATH",
"sudo mv ./operator-sdk /usr/local/bin/operator-sdk",
"operator-sdk version",
"operator-sdk version: \"v1.36.1-ocp\",",
"mkdir memcached-operator",
"cd memcached-operator",
"operator-sdk init --domain=example.com --repo=github.com/example-inc/memcached-operator",
"operator-sdk create api --resource=true --controller=true --group cache --version v1 --kind Memcached",
"make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make install",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc apply -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system",
"oc logs deployment.apps/memcached-operator-controller-manager -c manager -n memcached-operator-system",
"oc delete -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system",
"make undeploy",
"mkdir -p USDHOME/projects/memcached-operator",
"cd USDHOME/projects/memcached-operator",
"export GO111MODULE=on",
"operator-sdk init --domain=example.com --repo=github.com/example-inc/memcached-operator",
"domain: example.com layout: - go.kubebuilder.io/v3 projectName: memcached-operator repo: github.com/example-inc/memcached-operator version: \"3\" plugins: manifests.sdk.operatorframework.io/v2: {} scorecard.sdk.operatorframework.io/v2: {} sdk.x-openshift.io/v1: {}",
"mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: namespace})",
"mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: \"\"})",
"var namespaces []string 1 mgr, err := ctrl.NewManager(cfg, manager.Options{ 2 NewCache: cache.MultiNamespacedCacheBuilder(namespaces), })",
"operator-sdk edit --multigroup=true",
"domain: example.com layout: go.kubebuilder.io/v3 multigroup: true",
"operator-sdk create api --group=cache --version=v1 --kind=Memcached",
"Create Resource [y/n] y Create Controller [y/n] y",
"Writing scaffold for you to edit api/v1/memcached_types.go controllers/memcached_controller.go",
"// MemcachedSpec defines the desired state of Memcached type MemcachedSpec struct { // +kubebuilder:validation:Minimum=0 // Size is the size of the memcached deployment Size int32 `json:\"size\"` } // MemcachedStatus defines the observed state of Memcached type MemcachedStatus struct { // Nodes are the names of the memcached pods Nodes []string `json:\"nodes\"` }",
"make generate",
"make manifests",
"/* Copyright 2020. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ package controllers import ( appsv1 \"k8s.io/api/apps/v1\" corev1 \"k8s.io/api/core/v1\" \"k8s.io/apimachinery/pkg/api/errors\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" \"k8s.io/apimachinery/pkg/types\" \"reflect\" \"context\" \"github.com/go-logr/logr\" \"k8s.io/apimachinery/pkg/runtime\" ctrl \"sigs.k8s.io/controller-runtime\" \"sigs.k8s.io/controller-runtime/pkg/client\" ctrllog \"sigs.k8s.io/controller-runtime/pkg/log\" cachev1 \"github.com/example-inc/memcached-operator/api/v1\" ) // MemcachedReconciler reconciles a Memcached object type MemcachedReconciler struct { client.Client Log logr.Logger Scheme *runtime.Scheme } // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/status,verbs=get;update;patch // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/finalizers,verbs=update // +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=core,resources=pods,verbs=get;list; // Reconcile is part of the main kubernetes reconciliation loop which aims to // move the current state of the cluster closer to the desired state. // TODO(user): Modify the Reconcile function to compare the state specified by // the Memcached object against the actual cluster state, and then // perform operations to make the cluster state reflect the state specified by // the user. // // For more details, check Reconcile and its Result here: // - https://pkg.go.dev/sigs.k8s.io/[email protected]/pkg/reconcile func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { //log := r.Log.WithValues(\"memcached\", req.NamespacedName) log := ctrllog.FromContext(ctx) // Fetch the Memcached instance memcached := &cachev1.Memcached{} err := r.Get(ctx, req.NamespacedName, memcached) if err != nil { if errors.IsNotFound(err) { // Request object not found, could have been deleted after reconcile request. // Owned objects are automatically garbage collected. For additional cleanup logic use finalizers. // Return and don't requeue log.Info(\"Memcached resource not found. Ignoring since object must be deleted\") return ctrl.Result{}, nil } // Error reading the object - requeue the request. 
log.Error(err, \"Failed to get Memcached\") return ctrl.Result{}, err } // Check if the deployment already exists, if not create a new one found := &appsv1.Deployment{} err = r.Get(ctx, types.NamespacedName{Name: memcached.Name, Namespace: memcached.Namespace}, found) if err != nil && errors.IsNotFound(err) { // Define a new deployment dep := r.deploymentForMemcached(memcached) log.Info(\"Creating a new Deployment\", \"Deployment.Namespace\", dep.Namespace, \"Deployment.Name\", dep.Name) err = r.Create(ctx, dep) if err != nil { log.Error(err, \"Failed to create new Deployment\", \"Deployment.Namespace\", dep.Namespace, \"Deployment.Name\", dep.Name) return ctrl.Result{}, err } // Deployment created successfully - return and requeue return ctrl.Result{Requeue: true}, nil } else if err != nil { log.Error(err, \"Failed to get Deployment\") return ctrl.Result{}, err } // Ensure the deployment size is the same as the spec size := memcached.Spec.Size if *found.Spec.Replicas != size { found.Spec.Replicas = &size err = r.Update(ctx, found) if err != nil { log.Error(err, \"Failed to update Deployment\", \"Deployment.Namespace\", found.Namespace, \"Deployment.Name\", found.Name) return ctrl.Result{}, err } // Spec updated - return and requeue return ctrl.Result{Requeue: true}, nil } // Update the Memcached status with the pod names // List the pods for this memcached's deployment podList := &corev1.PodList{} listOpts := []client.ListOption{ client.InNamespace(memcached.Namespace), client.MatchingLabels(labelsForMemcached(memcached.Name)), } if err = r.List(ctx, podList, listOpts...); err != nil { log.Error(err, \"Failed to list pods\", \"Memcached.Namespace\", memcached.Namespace, \"Memcached.Name\", memcached.Name) return ctrl.Result{}, err } podNames := getPodNames(podList.Items) // Update status.Nodes if needed if !reflect.DeepEqual(podNames, memcached.Status.Nodes) { memcached.Status.Nodes = podNames err := r.Status().Update(ctx, memcached) if err != nil { log.Error(err, \"Failed to update Memcached status\") return ctrl.Result{}, err } } return ctrl.Result{}, nil } // deploymentForMemcached returns a memcached Deployment object func (r *MemcachedReconciler) deploymentForMemcached(m *cachev1.Memcached) *appsv1.Deployment { ls := labelsForMemcached(m.Name) replicas := m.Spec.Size dep := &appsv1.Deployment{ ObjectMeta: metav1.ObjectMeta{ Name: m.Name, Namespace: m.Namespace, }, Spec: appsv1.DeploymentSpec{ Replicas: &replicas, Selector: &metav1.LabelSelector{ MatchLabels: ls, }, Template: corev1.PodTemplateSpec{ ObjectMeta: metav1.ObjectMeta{ Labels: ls, }, Spec: corev1.PodSpec{ Containers: []corev1.Container{{ Image: \"memcached:1.4.36-alpine\", Name: \"memcached\", Command: []string{\"memcached\", \"-m=64\", \"-o\", \"modern\", \"-v\"}, Ports: []corev1.ContainerPort{{ ContainerPort: 11211, Name: \"memcached\", }}, }}, }, }, }, } // Set Memcached instance as the owner and controller ctrl.SetControllerReference(m, dep, r.Scheme) return dep } // labelsForMemcached returns the labels for selecting the resources // belonging to the given memcached CR name. func labelsForMemcached(name string) map[string]string { return map[string]string{\"app\": \"memcached\", \"memcached_cr\": name} } // getPodNames returns the pod names of the array of pods passed in func getPodNames(pods []corev1.Pod) []string { var podNames []string for _, pod := range pods { podNames = append(podNames, pod.Name) } return podNames } // SetupWithManager sets up the controller with the Manager. 
func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error { return ctrl.NewControllerManagedBy(mgr). For(&cachev1.Memcached{}). Owns(&appsv1.Deployment{}). Complete(r) }",
"import ( appsv1 \"k8s.io/api/apps/v1\" ) func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error { return ctrl.NewControllerManagedBy(mgr). For(&cachev1.Memcached{}). Owns(&appsv1.Deployment{}). Complete(r) }",
"func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error { return ctrl.NewControllerManagedBy(mgr). For(&cachev1.Memcached{}). Owns(&appsv1.Deployment{}). WithOptions(controller.Options{ MaxConcurrentReconciles: 2, }). Complete(r) }",
"import ( ctrl \"sigs.k8s.io/controller-runtime\" cachev1 \"github.com/example-inc/memcached-operator/api/v1\" ) func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { // Lookup the Memcached instance for this reconcile request memcached := &cachev1.Memcached{} err := r.Get(ctx, req.NamespacedName, memcached) }",
"// Reconcile successful - don't requeue return ctrl.Result{}, nil // Reconcile failed due to error - requeue return ctrl.Result{}, err // Requeue for any reason other than an error return ctrl.Result{Requeue: true}, nil",
"import \"time\" // Reconcile for any reason other than an error after 5 seconds return ctrl.Result{RequeueAfter: time.Second*5}, nil",
"// +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/status,verbs=get;update;patch // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/finalizers,verbs=update // +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=core,resources=pods,verbs=get;list; func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { }",
"import ( \"github.com/operator-framework/operator-lib/proxy\" )",
"for i, container := range dep.Spec.Template.Spec.Containers { dep.Spec.Template.Spec.Containers[i].Env = append(container.Env, proxy.ReadProxyVarsFromEnv()...) }",
"containers: - args: - --leader-elect - --leader-election-id=ansible-proxy-demo image: controller:latest name: manager env: - name: \"HTTP_PROXY\" value: \"http_proxy_test\"",
"make install run",
"2021-01-10T21:09:29.016-0700 INFO controller-runtime.metrics metrics server is starting to listen {\"addr\": \":8080\"} 2021-01-10T21:09:29.017-0700 INFO setup starting manager 2021-01-10T21:09:29.017-0700 INFO controller-runtime.manager starting metrics server {\"path\": \"/metrics\"} 2021-01-10T21:09:29.018-0700 INFO controller-runtime.manager.controller.memcached Starting EventSource {\"reconciler group\": \"cache.example.com\", \"reconciler kind\": \"Memcached\", \"source\": \"kind source: /, Kind=\"} 2021-01-10T21:09:29.218-0700 INFO controller-runtime.manager.controller.memcached Starting Controller {\"reconciler group\": \"cache.example.com\", \"reconciler kind\": \"Memcached\"} 2021-01-10T21:09:29.218-0700 INFO controller-runtime.manager.controller.memcached Starting workers {\"reconciler group\": \"cache.example.com\", \"reconciler kind\": \"Memcached\", \"worker count\": 1}",
"make docker-build IMG=<registry>/<user>/<image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc get deployment -n <project_name>-system",
"NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m",
"make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>",
"docker push <registry>/<user>/<bundle_image_name>:<tag>",
"operator-sdk run bundle \\ 1 -n <namespace> \\ 2 <registry>/<user>/<bundle_image_name>:<tag> 3",
"oc project memcached-operator-system",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3",
"oc apply -f config/samples/cache_v1_memcached.yaml",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 8m memcached-sample 3/3 3 3 1m",
"oc get pods",
"NAME READY STATUS RESTARTS AGE memcached-sample-6fd7c98d8-7dqdr 1/1 Running 0 1m memcached-sample-6fd7c98d8-g5k7v 1/1 Running 0 1m memcached-sample-6fd7c98d8-m7vn7 1/1 Running 0 1m",
"oc get memcached/memcached-sample -o yaml",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3 status: nodes: - memcached-sample-6fd7c98d8-7dqdr - memcached-sample-6fd7c98d8-g5k7v - memcached-sample-6fd7c98d8-m7vn7",
"oc patch memcached memcached-sample -p '{\"spec\":{\"size\": 5}}' --type=merge",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 10m memcached-sample 5/5 5 5 3m",
"oc delete -f config/samples/cache_v1_memcached.yaml",
"make undeploy",
"operator-sdk cleanup <project_name>",
"Set the Operator SDK version to use. By default, what is installed on the system is used. This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit. OPERATOR_SDK_VERSION ?= v1.36.1-ocp",
"containers: - name: kube-rbac-proxy image: registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9:v4.17",
"docker-buildx: ## Build and push the Docker image for the manager for multi-platform support - docker buildx create --name project-v3-builder docker buildx use project-v3-builder - docker buildx build --push --platform=USD(PLATFORMS) --tag USD{IMG} -f Dockerfile . - docker buildx rm project-v3-builder",
"k8s.io/api v0.29.2 k8s.io/apimachinery v0.29.2 k8s.io/client-go v0.29.2 sigs.k8s.io/controller-runtime v0.17.3",
"go mod tidy",
"+ .PHONY: build-installer + build-installer: manifests generate kustomize ## Generate a consolidated YAML with CRDs and deployment. + mkdir -p dist + cd config/manager && USD(KUSTOMIZE) edit set image controller=USD{IMG} + USD(KUSTOMIZE) build config/default > dist/install.yaml",
"- ENVTEST_K8S_VERSION = 1.28.3 + ENVTEST_K8S_VERSION = 1.29.0",
"- GOLANGCI_LINT = USD(shell pwd)/bin/golangci-lint - GOLANGCI_LINT_VERSION ?= v1.54.2 - golangci-lint: - @[ -f USD(GOLANGCI_LINT) ] || { - set -e ; - curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b USD(shell dirname USD(GOLANGCI_LINT)) USD(GOLANGCI_LINT_VERSION) ; - }",
"- ## Tool Binaries - KUBECTL ?= kubectl - KUSTOMIZE ?= USD(LOCALBIN)/kustomize - CONTROLLER_GEN ?= USD(LOCALBIN)/controller-gen - ENVTEST ?= USD(LOCALBIN)/setup-envtest - - ## Tool Versions - KUSTOMIZE_VERSION ?= v5.2.1 - CONTROLLER_TOOLS_VERSION ?= v0.13.0 - - .PHONY: kustomize - kustomize: USD(KUSTOMIZE) ## Download kustomize locally if necessary. If wrong version is installed, it will be removed before downloading. - USD(KUSTOMIZE): USD(LOCALBIN) - @if test -x USD(LOCALBIN)/kustomize && ! USD(LOCALBIN)/kustomize version | grep -q USD(KUSTOMIZE_VERSION); then - echo \"USD(LOCALBIN)/kustomize version is not expected USD(KUSTOMIZE_VERSION). Removing it before installing.\"; - rm -rf USD(LOCALBIN)/kustomize; - fi - test -s USD(LOCALBIN)/kustomize || GOBIN=USD(LOCALBIN) GO111MODULE=on go install sigs.k8s.io/kustomize/kustomize/v5@USD(KUSTOMIZE_VERSION) - - .PHONY: controller-gen - controller-gen: USD(CONTROLLER_GEN) ## Download controller-gen locally if necessary. If wrong version is installed, it will be overwritten. - USD(CONTROLLER_GEN): USD(LOCALBIN) - test -s USD(LOCALBIN)/controller-gen && USD(LOCALBIN)/controller-gen --version | grep -q USD(CONTROLLER_TOOLS_VERSION) || - GOBIN=USD(LOCALBIN) go install sigs.k8s.io/controller-tools/cmd/controller-gen@USD(CONTROLLER_TOOLS_VERSION) - - .PHONY: envtest - envtest: USD(ENVTEST) ## Download envtest-setup locally if necessary. - USD(ENVTEST): USD(LOCALBIN) - test -s USD(LOCALBIN)/setup-envtest || GOBIN=USD(LOCALBIN) go install sigs.k8s.io/controller-runtime/tools/setup-envtest@latest + ## Tool Binaries + KUBECTL ?= kubectl + KUSTOMIZE ?= USD(LOCALBIN)/kustomize-USD(KUSTOMIZE_VERSION) + CONTROLLER_GEN ?= USD(LOCALBIN)/controller-gen-USD(CONTROLLER_TOOLS_VERSION) + ENVTEST ?= USD(LOCALBIN)/setup-envtest-USD(ENVTEST_VERSION) + GOLANGCI_LINT = USD(LOCALBIN)/golangci-lint-USD(GOLANGCI_LINT_VERSION) + + ## Tool Versions + KUSTOMIZE_VERSION ?= v5.3.0 + CONTROLLER_TOOLS_VERSION ?= v0.14.0 + ENVTEST_VERSION ?= release-0.17 + GOLANGCI_LINT_VERSION ?= v1.57.2 + + .PHONY: kustomize + kustomize: USD(KUSTOMIZE) ## Download kustomize locally if necessary. + USD(KUSTOMIZE): USD(LOCALBIN) + USD(call go-install-tool,USD(KUSTOMIZE),sigs.k8s.io/kustomize/kustomize/v5,USD(KUSTOMIZE_VERSION)) + + .PHONY: controller-gen + controller-gen: USD(CONTROLLER_GEN) ## Download controller-gen locally if necessary. + USD(CONTROLLER_GEN): USD(LOCALBIN) + USD(call go-install-tool,USD(CONTROLLER_GEN),sigs.k8s.io/controller-tools/cmd/controller-gen,USD(CONTROLLER_TOOLS_VERSION)) + + .PHONY: envtest + envtest: USD(ENVTEST) ## Download setup-envtest locally if necessary. + USD(ENVTEST): USD(LOCALBIN) + USD(call go-install-tool,USD(ENVTEST),sigs.k8s.io/controller-runtime/tools/setup-envtest,USD(ENVTEST_VERSION)) + + .PHONY: golangci-lint + golangci-lint: USD(GOLANGCI_LINT) ## Download golangci-lint locally if necessary. 
+ USD(GOLANGCI_LINT): USD(LOCALBIN) + USD(call go-install-tool,USD(GOLANGCI_LINT),github.com/golangci/golangci-lint/cmd/golangci-lint,USD{GOLANGCI_LINT_VERSION}) + + # go-install-tool will 'go install' any package with custom target and name of binary, if it doesn't exist + # USD1 - target path with name of binary (ideally with version) + # USD2 - package url which can be installed + # USD3 - specific version of package + define go-install-tool + @[ -f USD(1) ] || { + set -e; + package=USD(2)@USD(3) ; + echo \"Downloading USDUSD{package}\" ; + GOBIN=USD(LOCALBIN) go install USDUSD{package} ; + mv \"USDUSD(echo \"USD(1)\" | sed \"s/-USD(3)USDUSD//\")\" USD(1) ; + } + endef",
"mkdir memcached-operator",
"cd memcached-operator",
"operator-sdk init --plugins=ansible --domain=example.com",
"operator-sdk create api --group cache --version v1 --kind Memcached --generate-role 1",
"make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make install",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc apply -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system",
"oc logs deployment.apps/memcached-operator-controller-manager -c manager -n memcached-operator-system",
"I0205 17:48:45.881666 7 leaderelection.go:253] successfully acquired lease memcached-operator-system/memcached-operator {\"level\":\"info\",\"ts\":1612547325.8819902,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: cache.example.com/v1, Kind=Memcached\"} {\"level\":\"info\",\"ts\":1612547325.98242,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612547325.9824686,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting workers\",\"worker count\":4} {\"level\":\"info\",\"ts\":1612547348.8311093,\"logger\":\"runner\",\"msg\":\"Ansible-runner exited successfully\",\"job\":\"4037200794235010051\",\"name\":\"memcached-sample\",\"namespace\":\"memcached-operator-system\"}",
"oc delete -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system",
"make undeploy",
"mkdir -p USDHOME/projects/memcached-operator",
"cd USDHOME/projects/memcached-operator",
"operator-sdk init --plugins=ansible --domain=example.com",
"domain: example.com layout: - ansible.sdk.operatorframework.io/v1 plugins: manifests.sdk.operatorframework.io/v2: {} scorecard.sdk.operatorframework.io/v2: {} sdk.x-openshift.io/v1: {} projectName: memcached-operator version: \"3\"",
"operator-sdk create api --group cache --version v1 --kind Memcached --generate-role 1",
"--- - name: start memcached k8s: definition: kind: Deployment apiVersion: apps/v1 metadata: name: '{{ ansible_operator_meta.name }}-memcached' namespace: '{{ ansible_operator_meta.namespace }}' spec: replicas: \"{{size}}\" selector: matchLabels: app: memcached template: metadata: labels: app: memcached spec: containers: - name: memcached command: - memcached - -m=64 - -o - modern - -v image: \"docker.io/memcached:1.4.36-alpine\" ports: - containerPort: 11211",
"--- defaults file for Memcached size: 1",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: labels: app.kubernetes.io/name: memcached app.kubernetes.io/instance: memcached-sample app.kubernetes.io/part-of: memcached-operator app.kubernetes.io/managed-by: kustomize app.kubernetes.io/created-by: memcached-operator name: memcached-sample spec: size: 3",
"env: - name: HTTP_PROXY value: '{{ lookup(\"env\", \"HTTP_PROXY\") | default(\"\", True) }}' - name: http_proxy value: '{{ lookup(\"env\", \"HTTP_PROXY\") | default(\"\", True) }}'",
"containers: - args: - --leader-elect - --leader-election-id=ansible-proxy-demo image: controller:latest name: manager env: - name: \"HTTP_PROXY\" value: \"http_proxy_test\"",
"make install run",
"{\"level\":\"info\",\"ts\":1612589622.7888272,\"logger\":\"ansible-controller\",\"msg\":\"Watching resource\",\"Options.Group\":\"cache.example.com\",\"Options.Version\":\"v1\",\"Options.Kind\":\"Memcached\"} {\"level\":\"info\",\"ts\":1612589622.7897573,\"logger\":\"proxy\",\"msg\":\"Starting to serve\",\"Address\":\"127.0.0.1:8888\"} {\"level\":\"info\",\"ts\":1612589622.789971,\"logger\":\"controller-runtime.manager\",\"msg\":\"starting metrics server\",\"path\":\"/metrics\"} {\"level\":\"info\",\"ts\":1612589622.7899997,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: cache.example.com/v1, Kind=Memcached\"} {\"level\":\"info\",\"ts\":1612589622.8904517,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612589622.8905244,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting workers\",\"worker count\":8}",
"make docker-build IMG=<registry>/<user>/<image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc get deployment -n <project_name>-system",
"NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m",
"make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>",
"docker push <registry>/<user>/<bundle_image_name>:<tag>",
"operator-sdk run bundle \\ 1 -n <namespace> \\ 2 <registry>/<user>/<bundle_image_name>:<tag> 3",
"oc project memcached-operator-system",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3",
"oc apply -f config/samples/cache_v1_memcached.yaml",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 8m memcached-sample 3/3 3 3 1m",
"oc get pods",
"NAME READY STATUS RESTARTS AGE memcached-sample-6fd7c98d8-7dqdr 1/1 Running 0 1m memcached-sample-6fd7c98d8-g5k7v 1/1 Running 0 1m memcached-sample-6fd7c98d8-m7vn7 1/1 Running 0 1m",
"oc get memcached/memcached-sample -o yaml",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3 status: nodes: - memcached-sample-6fd7c98d8-7dqdr - memcached-sample-6fd7c98d8-g5k7v - memcached-sample-6fd7c98d8-m7vn7",
"oc patch memcached memcached-sample -p '{\"spec\":{\"size\": 5}}' --type=merge",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 10m memcached-sample 5/5 5 5 3m",
"oc delete -f config/samples/cache_v1_memcached.yaml",
"make undeploy",
"operator-sdk cleanup <project_name>",
"Set the Operator SDK version to use. By default, what is installed on the system is used. This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit. OPERATOR_SDK_VERSION ?= v1.36.1-ocp",
"containers: - name: kube-rbac-proxy image: registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9:v4.17",
"FROM registry.redhat.io/openshift4/ose-ansible-rhel9-operator:v4.17",
"docker-buildx: ## Build and push the Docker image for the manager for multi-platform support - docker buildx create --name project-v3-builder docker buildx use project-v3-builder - docker buildx build --push --platform=USD(PLATFORMS) --tag USD{IMG} -f Dockerfile . - docker buildx rm project-v3-builder",
"k8s.io/api v0.29.2 k8s.io/apimachinery v0.29.2 k8s.io/client-go v0.29.2 sigs.k8s.io/controller-runtime v0.17.3",
"go mod tidy",
"apiVersion: \"test1.example.com/v1alpha1\" kind: \"Test1\" metadata: name: \"example\" annotations: ansible.operator-sdk/reconcile-period: \"30s\"",
"- version: v1alpha1 1 group: test1.example.com kind: Test1 role: /opt/ansible/roles/Test1 - version: v1alpha1 2 group: test2.example.com kind: Test2 playbook: /opt/ansible/playbook.yml - version: v1alpha1 3 group: test3.example.com kind: Test3 playbook: /opt/ansible/test3.yml reconcilePeriod: 0 manageStatus: false",
"- version: v1alpha1 group: app.example.com kind: AppService playbook: /opt/ansible/playbook.yml maxRunnerArtifacts: 30 reconcilePeriod: 5s manageStatus: False watchDependentResources: False",
"apiVersion: \"app.example.com/v1alpha1\" kind: \"Database\" metadata: name: \"example\" spec: message: \"Hello world 2\" newParameter: \"newParam\"",
"{ \"meta\": { \"name\": \"<cr_name>\", \"namespace\": \"<cr_namespace>\", }, \"message\": \"Hello world 2\", \"new_parameter\": \"newParam\", \"_app_example_com_database\": { <full_crd> }, }",
"--- - debug: msg: \"name: {{ ansible_operator_meta.name }}, {{ ansible_operator_meta.namespace }}\"",
"sudo dnf install ansible",
"pip install kubernetes",
"ansible-galaxy collection install community.kubernetes",
"ansible-galaxy collection install -r requirements.yml",
"--- - name: set ConfigMap example-config to {{ state }} community.kubernetes.k8s: api_version: v1 kind: ConfigMap name: example-config namespace: <operator_namespace> 1 state: \"{{ state }}\" ignore_errors: true 2",
"--- state: present",
"--- - hosts: localhost roles: - <kind>",
"ansible-playbook playbook.yml",
"[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' PLAY [localhost] ******************************************************************************** TASK [Gathering Facts] ******************************************************************************** ok: [localhost] TASK [memcached : set ConfigMap example-config to present] ******************************************************************************** changed: [localhost] PLAY RECAP ******************************************************************************** localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0",
"oc get configmaps",
"NAME DATA AGE example-config 0 2m1s",
"ansible-playbook playbook.yml --extra-vars state=absent",
"[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' PLAY [localhost] ******************************************************************************** TASK [Gathering Facts] ******************************************************************************** ok: [localhost] TASK [memcached : set ConfigMap example-config to absent] ******************************************************************************** changed: [localhost] PLAY RECAP ******************************************************************************** localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0",
"oc get configmaps",
"apiVersion: \"test1.example.com/v1alpha1\" kind: \"Test1\" metadata: name: \"example\" annotations: ansible.operator-sdk/reconcile-period: \"30s\"",
"make install",
"/usr/bin/kustomize build config/crd | kubectl apply -f - customresourcedefinition.apiextensions.k8s.io/memcacheds.cache.example.com created",
"make run",
"/home/user/memcached-operator/bin/ansible-operator run {\"level\":\"info\",\"ts\":1612739145.2871568,\"logger\":\"cmd\",\"msg\":\"Version\",\"Go Version\":\"go1.15.5\",\"GOOS\":\"linux\",\"GOARCH\":\"amd64\",\"ansible-operator\":\"v1.10.1\",\"commit\":\"1abf57985b43bf6a59dcd18147b3c574fa57d3f6\"} {\"level\":\"info\",\"ts\":1612739148.347306,\"logger\":\"controller-runtime.metrics\",\"msg\":\"metrics server is starting to listen\",\"addr\":\":8080\"} {\"level\":\"info\",\"ts\":1612739148.3488882,\"logger\":\"watches\",\"msg\":\"Environment variable not set; using default value\",\"envVar\":\"ANSIBLE_VERBOSITY_MEMCACHED_CACHE_EXAMPLE_COM\",\"default\":2} {\"level\":\"info\",\"ts\":1612739148.3490262,\"logger\":\"cmd\",\"msg\":\"Environment variable not set; using default value\",\"Namespace\":\"\",\"envVar\":\"ANSIBLE_DEBUG_LOGS\",\"ANSIBLE_DEBUG_LOGS\":false} {\"level\":\"info\",\"ts\":1612739148.3490646,\"logger\":\"ansible-controller\",\"msg\":\"Watching resource\",\"Options.Group\":\"cache.example.com\",\"Options.Version\":\"v1\",\"Options.Kind\":\"Memcached\"} {\"level\":\"info\",\"ts\":1612739148.350217,\"logger\":\"proxy\",\"msg\":\"Starting to serve\",\"Address\":\"127.0.0.1:8888\"} {\"level\":\"info\",\"ts\":1612739148.3506632,\"logger\":\"controller-runtime.manager\",\"msg\":\"starting metrics server\",\"path\":\"/metrics\"} {\"level\":\"info\",\"ts\":1612739148.350784,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: cache.example.com/v1, Kind=Memcached\"} {\"level\":\"info\",\"ts\":1612739148.5511978,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612739148.5512562,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting workers\",\"worker count\":8}",
"apiVersion: <group>.example.com/v1alpha1 kind: <kind> metadata: name: \"<kind>-sample\"",
"oc apply -f config/samples/<gvk>.yaml",
"oc get configmaps",
"NAME STATUS AGE example-config Active 3s",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: state: absent",
"oc apply -f config/samples/<gvk>.yaml",
"oc get configmap",
"make docker-build IMG=<registry>/<user>/<image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc get deployment -n <project_name>-system",
"NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m",
"oc logs deployment/<project_name>-controller-manager -c manager \\ 1 -n <namespace> 2",
"{\"level\":\"info\",\"ts\":1612732105.0579333,\"logger\":\"cmd\",\"msg\":\"Version\",\"Go Version\":\"go1.15.5\",\"GOOS\":\"linux\",\"GOARCH\":\"amd64\",\"ansible-operator\":\"v1.10.1\",\"commit\":\"1abf57985b43bf6a59dcd18147b3c574fa57d3f6\"} {\"level\":\"info\",\"ts\":1612732105.0587437,\"logger\":\"cmd\",\"msg\":\"WATCH_NAMESPACE environment variable not set. Watching all namespaces.\",\"Namespace\":\"\"} I0207 21:08:26.110949 7 request.go:645] Throttling request took 1.035521578s, request: GET:https://172.30.0.1:443/apis/flowcontrol.apiserver.k8s.io/v1alpha1?timeout=32s {\"level\":\"info\",\"ts\":1612732107.768025,\"logger\":\"controller-runtime.metrics\",\"msg\":\"metrics server is starting to listen\",\"addr\":\"127.0.0.1:8080\"} {\"level\":\"info\",\"ts\":1612732107.768796,\"logger\":\"watches\",\"msg\":\"Environment variable not set; using default value\",\"envVar\":\"ANSIBLE_VERBOSITY_MEMCACHED_CACHE_EXAMPLE_COM\",\"default\":2} {\"level\":\"info\",\"ts\":1612732107.7688773,\"logger\":\"cmd\",\"msg\":\"Environment variable not set; using default value\",\"Namespace\":\"\",\"envVar\":\"ANSIBLE_DEBUG_LOGS\",\"ANSIBLE_DEBUG_LOGS\":false} {\"level\":\"info\",\"ts\":1612732107.7688901,\"logger\":\"ansible-controller\",\"msg\":\"Watching resource\",\"Options.Group\":\"cache.example.com\",\"Options.Version\":\"v1\",\"Options.Kind\":\"Memcached\"} {\"level\":\"info\",\"ts\":1612732107.770032,\"logger\":\"proxy\",\"msg\":\"Starting to serve\",\"Address\":\"127.0.0.1:8888\"} I0207 21:08:27.770185 7 leaderelection.go:243] attempting to acquire leader lease memcached-operator-system/memcached-operator {\"level\":\"info\",\"ts\":1612732107.770202,\"logger\":\"controller-runtime.manager\",\"msg\":\"starting metrics server\",\"path\":\"/metrics\"} I0207 21:08:27.784854 7 leaderelection.go:253] successfully acquired lease memcached-operator-system/memcached-operator {\"level\":\"info\",\"ts\":1612732107.7850506,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: cache.example.com/v1, Kind=Memcached\"} {\"level\":\"info\",\"ts\":1612732107.8853772,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612732107.8854098,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting workers\",\"worker count\":4}",
"containers: - name: manager env: - name: ANSIBLE_DEBUG_LOGS value: \"True\"",
"apiVersion: \"cache.example.com/v1alpha1\" kind: \"Memcached\" metadata: name: \"example-memcached\" annotations: \"ansible.sdk.operatorframework.io/verbosity\": \"4\" spec: size: 4",
"status: conditions: - ansibleResult: changed: 3 completion: 2018-12-03T13:45:57.13329 failures: 1 ok: 6 skipped: 0 lastTransitionTime: 2018-12-03T13:45:57Z message: 'Status code was -1 and not [200]: Request failed: <urlopen error [Errno 113] No route to host>' reason: Failed status: \"True\" type: Failure - lastTransitionTime: 2018-12-03T13:46:13Z message: Running reconciliation reason: Running status: \"True\" type: Running",
"- version: v1 group: api.example.com kind: <kind> role: <role> manageStatus: false",
"- operator_sdk.util.k8s_status: api_version: app.example.com/v1 kind: <kind> name: \"{{ ansible_operator_meta.name }}\" namespace: \"{{ ansible_operator_meta.namespace }}\" status: test: data",
"collections: - operator_sdk.util",
"k8s_status: status: key1: value1",
"mkdir nginx-operator",
"cd nginx-operator",
"operator-sdk init --plugins=helm",
"operator-sdk create api --group demo --version v1 --kind Nginx",
"make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make install",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc adm policy add-scc-to-user anyuid system:serviceaccount:nginx-operator-system:nginx-sample",
"oc apply -f config/samples/demo_v1_nginx.yaml -n nginx-operator-system",
"oc logs deployment.apps/nginx-operator-controller-manager -c manager -n nginx-operator-system",
"oc delete -f config/samples/demo_v1_nginx.yaml -n nginx-operator-system",
"make undeploy",
"mkdir -p USDHOME/projects/nginx-operator",
"cd USDHOME/projects/nginx-operator",
"operator-sdk init --plugins=helm --domain=example.com --group=demo --version=v1 --kind=Nginx",
"operator-sdk init --plugins helm --help",
"domain: example.com layout: - helm.sdk.operatorframework.io/v1 plugins: manifests.sdk.operatorframework.io/v2: {} scorecard.sdk.operatorframework.io/v2: {} sdk.x-openshift.io/v1: {} projectName: nginx-operator resources: - api: crdVersion: v1 namespaced: true domain: example.com group: demo kind: Nginx version: v1 version: \"3\"",
"Use the 'create api' subcommand to add watches to this file. - group: demo version: v1 kind: Nginx chart: helm-charts/nginx +kubebuilder:scaffold:watch",
"apiVersion: demo.example.com/v1 kind: Nginx metadata: name: nginx-sample spec: replicaCount: 2",
"apiVersion: demo.example.com/v1 kind: Nginx metadata: name: nginx-sample spec: replicaCount: 2 service: port: 8080",
"- group: demo.example.com version: v1alpha1 kind: Nginx chart: helm-charts/nginx overrideValues: proxy.http: USDHTTP_PROXY",
"proxy: http: \"\" https: \"\" no_proxy: \"\"",
"containers: - name: {{ .Chart.Name }} securityContext: - toYaml {{ .Values.securityContext | nindent 12 }} image: \"{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}\" imagePullPolicy: {{ .Values.image.pullPolicy }} env: - name: http_proxy value: \"{{ .Values.proxy.http }}\"",
"containers: - args: - --leader-elect - --leader-election-id=ansible-proxy-demo image: controller:latest name: manager env: - name: \"HTTP_PROXY\" value: \"http_proxy_test\"",
"make install run",
"{\"level\":\"info\",\"ts\":1612652419.9289865,\"logger\":\"controller-runtime.metrics\",\"msg\":\"metrics server is starting to listen\",\"addr\":\":8080\"} {\"level\":\"info\",\"ts\":1612652419.9296563,\"logger\":\"helm.controller\",\"msg\":\"Watching resource\",\"apiVersion\":\"demo.example.com/v1\",\"kind\":\"Nginx\",\"namespace\":\"\",\"reconcilePeriod\":\"1m0s\"} {\"level\":\"info\",\"ts\":1612652419.929983,\"logger\":\"controller-runtime.manager\",\"msg\":\"starting metrics server\",\"path\":\"/metrics\"} {\"level\":\"info\",\"ts\":1612652419.930015,\"logger\":\"controller-runtime.manager.controller.nginx-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: demo.example.com/v1, Kind=Nginx\"} {\"level\":\"info\",\"ts\":1612652420.2307851,\"logger\":\"controller-runtime.manager.controller.nginx-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612652420.2309358,\"logger\":\"controller-runtime.manager.controller.nginx-controller\",\"msg\":\"Starting workers\",\"worker count\":8}",
"make docker-build IMG=<registry>/<user>/<image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc get deployment -n <project_name>-system",
"NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m",
"make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>",
"docker push <registry>/<user>/<bundle_image_name>:<tag>",
"operator-sdk run bundle \\ 1 -n <namespace> \\ 2 <registry>/<user>/<bundle_image_name>:<tag> 3",
"oc project nginx-operator-system",
"apiVersion: demo.example.com/v1 kind: Nginx metadata: name: nginx-sample spec: replicaCount: 3",
"oc adm policy add-scc-to-user anyuid system:serviceaccount:nginx-operator-system:nginx-sample",
"oc apply -f config/samples/demo_v1_nginx.yaml",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE nginx-operator-controller-manager 1/1 1 1 8m nginx-sample 3/3 3 3 1m",
"oc get pods",
"NAME READY STATUS RESTARTS AGE nginx-sample-6fd7c98d8-7dqdr 1/1 Running 0 1m nginx-sample-6fd7c98d8-g5k7v 1/1 Running 0 1m nginx-sample-6fd7c98d8-m7vn7 1/1 Running 0 1m",
"oc get nginx/nginx-sample -o yaml",
"apiVersion: demo.example.com/v1 kind: Nginx metadata: name: nginx-sample spec: replicaCount: 3 status: nodes: - nginx-sample-6fd7c98d8-7dqdr - nginx-sample-6fd7c98d8-g5k7v - nginx-sample-6fd7c98d8-m7vn7",
"oc patch nginx nginx-sample -p '{\"spec\":{\"replicaCount\": 5}}' --type=merge",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE nginx-operator-controller-manager 1/1 1 1 10m nginx-sample 5/5 5 5 3m",
"oc delete -f config/samples/demo_v1_nginx.yaml",
"make undeploy",
"operator-sdk cleanup <project_name>",
"Set the Operator SDK version to use. By default, what is installed on the system is used. This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit. OPERATOR_SDK_VERSION ?= v1.36.1-ocp",
"containers: - name: kube-rbac-proxy image: registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9:v4.17",
"FROM registry.redhat.io/openshift4/ose-helm-rhel9-operator:v4.17",
"docker-buildx: ## Build and push the Docker image for the manager for multi-platform support - docker buildx create --name project-v3-builder docker buildx use project-v3-builder - docker buildx build --push --platform=USD(PLATFORMS) --tag USD{IMG} -f Dockerfile . - docker buildx rm project-v3-builder",
"k8s.io/api v0.29.2 k8s.io/apimachinery v0.29.2 k8s.io/client-go v0.29.2 sigs.k8s.io/controller-runtime v0.17.3",
"go mod tidy",
"- curl -sSLo - https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/v5.2.1/kustomize_v5.2.1_USD(OS)_USD(ARCH).tar.gz | + curl -sSLo - https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/v5.3.0/kustomize_v5.3.0_USD(OS)_USD(ARCH).tar.gz | \\",
"apiVersion: apache.org/v1alpha1 kind: Tomcat metadata: name: example-app spec: replicaCount: 2",
"{{ .Values.replicaCount }}",
"oc get Tomcats --all-namespaces",
"mkdir -p USDHOME/github.com/example/memcached-operator",
"cd USDHOME/github.com/example/memcached-operator",
"operator-sdk init --plugins=hybrid.helm.sdk.operatorframework.io --project-version=\"3\" --domain my.domain --repo=github.com/example/memcached-operator",
"operator-sdk create api --plugins helm.sdk.operatorframework.io/v1 --group cache --version v1 --kind Memcached",
"operator-sdk create api --plugins helm.sdk.operatorframework.io/v1 --help",
"Use the 'create api' subcommand to add watches to this file. - group: cache.my.domain version: v1 kind: Memcached chart: helm-charts/memcached #+kubebuilder:scaffold:watch",
"// Operator's main.go // With the help of helpers provided in the library, the reconciler can be // configured here before starting the controller with this reconciler. reconciler := reconciler.New( reconciler.WithChart(*chart), reconciler.WithGroupVersionKind(gvk), ) if err := reconciler.SetupWithManager(mgr); err != nil { panic(fmt.Sprintf(\"unable to create reconciler: %s\", err)) }",
"operator-sdk create api --group=cache --version v1 --kind MemcachedBackup --resource --controller --plugins=go/v4",
"Create Resource [y/n] y Create Controller [y/n] y",
"// MemcachedBackupSpec defines the desired state of MemcachedBackup type MemcachedBackupSpec struct { // INSERT ADDITIONAL SPEC FIELDS - desired state of cluster // Important: Run \"make\" to regenerate code after modifying this file //+kubebuilder:validation:Minimum=0 // Size is the size of the memcached deployment Size int32 `json:\"size\"` } // MemcachedBackupStatus defines the observed state of MemcachedBackup type MemcachedBackupStatus struct { // INSERT ADDITIONAL STATUS FIELD - define observed state of cluster // Important: Run \"make\" to regenerate code after modifying this file // Nodes are the names of the memcached pods Nodes []string `json:\"nodes\"` }",
"make generate",
"make manifests",
"for _, w := range ws { // Register controller with the factory reconcilePeriod := defaultReconcilePeriod if w.ReconcilePeriod != nil { reconcilePeriod = w.ReconcilePeriod.Duration } maxConcurrentReconciles := defaultMaxConcurrentReconciles if w.MaxConcurrentReconciles != nil { maxConcurrentReconciles = *w.MaxConcurrentReconciles } r, err := reconciler.New( reconciler.WithChart(*w.Chart), reconciler.WithGroupVersionKind(w.GroupVersionKind), reconciler.WithOverrideValues(w.OverrideValues), reconciler.SkipDependentWatches(w.WatchDependentResources != nil && !*w.WatchDependentResources), reconciler.WithMaxConcurrentReconciles(maxConcurrentReconciles), reconciler.WithReconcilePeriod(reconcilePeriod), reconciler.WithInstallAnnotations(annotation.DefaultInstallAnnotations...), reconciler.WithUpgradeAnnotations(annotation.DefaultUpgradeAnnotations...), reconciler.WithUninstallAnnotations(annotation.DefaultUninstallAnnotations...), )",
"// Setup manager with Go API if err = (&controllers.MemcachedBackupReconciler{ Client: mgr.GetClient(), Scheme: mgr.GetScheme(), }).SetupWithManager(mgr); err != nil { setupLog.Error(err, \"unable to create controller\", \"controller\", \"MemcachedBackup\") os.Exit(1) } // Setup manager with Helm API for _, w := range ws { if err := r.SetupWithManager(mgr); err != nil { setupLog.Error(err, \"unable to create controller\", \"controller\", \"Helm\") os.Exit(1) } setupLog.Info(\"configured watch\", \"gvk\", w.GroupVersionKind, \"chartPath\", w.ChartPath, \"maxConcurrentReconciles\", maxConcurrentReconciles, \"reconcilePeriod\", reconcilePeriod) } // Start the manager if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil { setupLog.Error(err, \"problem running manager\") os.Exit(1) }",
"--- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: manager-role rules: - apiGroups: - \"\" resources: - namespaces verbs: - get - apiGroups: - apps resources: - deployments - daemonsets - replicasets - statefulsets verbs: - create - delete - get - list - patch - update - watch - apiGroups: - cache.my.domain resources: - memcachedbackups verbs: - create - delete - get - list - patch - update - watch - apiGroups: - cache.my.domain resources: - memcachedbackups/finalizers verbs: - create - delete - get - list - patch - update - watch - apiGroups: - \"\" resources: - pods - services - services/finalizers - endpoints - persistentvolumeclaims - events - configmaps - secrets - serviceaccounts verbs: - create - delete - get - list - patch - update - watch - apiGroups: - cache.my.domain resources: - memcachedbackups/status verbs: - get - patch - update - apiGroups: - policy resources: - events - poddisruptionbudgets verbs: - create - delete - get - list - patch - update - watch - apiGroups: - cache.my.domain resources: - memcacheds - memcacheds/status - memcacheds/finalizers verbs: - create - delete - get - list - patch - update - watch",
"make install run",
"make docker-build IMG=<registry>/<user>/<image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc get deployment -n <project_name>-system",
"NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m",
"oc project <project_name>-system",
"apiVersion: cache.my.domain/v1 kind: Memcached metadata: name: memcached-sample spec: # Default values copied from <project_dir>/helm-charts/memcached/values.yaml affinity: {} autoscaling: enabled: false maxReplicas: 100 minReplicas: 1 targetCPUUtilizationPercentage: 80 fullnameOverride: \"\" image: pullPolicy: IfNotPresent repository: nginx tag: \"\" imagePullSecrets: [] ingress: annotations: {} className: \"\" enabled: false hosts: - host: chart-example.local paths: - path: / pathType: ImplementationSpecific tls: [] nameOverride: \"\" nodeSelector: {} podAnnotations: {} podSecurityContext: {} replicaCount: 3 resources: {} securityContext: {} service: port: 80 type: ClusterIP serviceAccount: annotations: {} create: true name: \"\" tolerations: []",
"oc apply -f config/samples/cache_v1_memcached.yaml",
"oc get pods",
"NAME READY STATUS RESTARTS AGE memcached-sample-6fd7c98d8-7dqdr 1/1 Running 0 18m memcached-sample-6fd7c98d8-g5k7v 1/1 Running 0 18m memcached-sample-6fd7c98d8-m7vn7 1/1 Running 0 18m",
"apiVersion: cache.my.domain/v1 kind: MemcachedBackup metadata: name: memcachedbackup-sample spec: size: 2",
"oc apply -f config/samples/cache_v1_memcachedbackup.yaml",
"oc get pods",
"NAME READY STATUS RESTARTS AGE memcachedbackup-sample-8649699989-4bbzg 1/1 Running 0 22m memcachedbackup-sample-8649699989-mq6mx 1/1 Running 0 22m",
"oc delete -f config/samples/cache_v1_memcached.yaml",
"oc delete -f config/samples/cache_v1_memcachedbackup.yaml",
"make undeploy",
"Set the Operator SDK version to use. By default, what is installed on the system is used. This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit. OPERATOR_SDK_VERSION ?= v1.36.1-ocp",
"containers: - name: kube-rbac-proxy image: registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9:v4.17",
"docker-buildx: ## Build and push the Docker image for the manager for multi-platform support - docker buildx create --name project-v3-builder docker buildx use project-v3-builder - docker buildx build --push --platform=USD(PLATFORMS) --tag USD{IMG} -f Dockerfile . - docker buildx rm project-v3-builder",
"k8s.io/api v0.29.2 k8s.io/apimachinery v0.29.2 k8s.io/client-go v0.29.2 sigs.k8s.io/controller-runtime v0.17.3",
"go mod tidy",
"mkdir memcached-operator",
"cd memcached-operator",
"operator-sdk init --plugins=quarkus --domain=example.com --project-name=memcached-operator",
"operator-sdk create api --plugins quarkus --group cache --version v1 --kind Memcached",
"make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make install",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc apply -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system",
"oc logs deployment.apps/memcached-operator-controller-manager -c manager -n memcached-operator-system",
"oc delete -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system",
"make undeploy",
"mkdir -p USDHOME/projects/memcached-operator",
"cd USDHOME/projects/memcached-operator",
"operator-sdk init --plugins=quarkus --domain=example.com --project-name=memcached-operator",
"domain: example.com layout: - quarkus.javaoperatorsdk.io/v1-alpha projectName: memcached-operator version: \"3\"",
"operator-sdk create api --plugins=quarkus \\ 1 --group=cache \\ 2 --version=v1 \\ 3 --kind=Memcached 4",
"tree",
". ├── Makefile ├── PROJECT ├── pom.xml └── src └── main ├── java │ └── com │ └── example │ ├── Memcached.java │ ├── MemcachedReconciler.java │ ├── MemcachedSpec.java │ └── MemcachedStatus.java └── resources └── application.properties 6 directories, 8 files",
"public class MemcachedSpec { private Integer size; public Integer getSize() { return size; } public void setSize(Integer size) { this.size = size; } }",
"import java.util.ArrayList; import java.util.List; public class MemcachedStatus { // Add Status information here // Nodes are the names of the memcached pods private List<String> nodes; public List<String> getNodes() { if (nodes == null) { nodes = new ArrayList<>(); } return nodes; } public void setNodes(List<String> nodes) { this.nodes = nodes; } }",
"@Version(\"v1\") @Group(\"cache.example.com\") public class Memcached extends CustomResource<MemcachedSpec, MemcachedStatus> implements Namespaced {}",
"mvn clean install",
"cat target/kubernetes/memcacheds.cache.example.com-v1.yaml",
"Generated by Fabric8 CRDGenerator, manual edits might get overwritten! apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: memcacheds.cache.example.com spec: group: cache.example.com names: kind: Memcached plural: memcacheds singular: memcached scope: Namespaced versions: - name: v1 schema: openAPIV3Schema: properties: spec: properties: size: type: integer type: object status: properties: nodes: items: type: string type: array type: object type: object served: true storage: true subresources: status: {}",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: # Add spec fields here size: 1",
"<dependency> <groupId>commons-collections</groupId> <artifactId>commons-collections</artifactId> <version>3.2.2</version> </dependency>",
"package com.example; import io.fabric8.kubernetes.client.KubernetesClient; import io.javaoperatorsdk.operator.api.reconciler.Context; import io.javaoperatorsdk.operator.api.reconciler.Reconciler; import io.javaoperatorsdk.operator.api.reconciler.UpdateControl; import io.fabric8.kubernetes.api.model.ContainerBuilder; import io.fabric8.kubernetes.api.model.ContainerPortBuilder; import io.fabric8.kubernetes.api.model.LabelSelectorBuilder; import io.fabric8.kubernetes.api.model.ObjectMetaBuilder; import io.fabric8.kubernetes.api.model.OwnerReferenceBuilder; import io.fabric8.kubernetes.api.model.Pod; import io.fabric8.kubernetes.api.model.PodSpecBuilder; import io.fabric8.kubernetes.api.model.PodTemplateSpecBuilder; import io.fabric8.kubernetes.api.model.apps.Deployment; import io.fabric8.kubernetes.api.model.apps.DeploymentBuilder; import io.fabric8.kubernetes.api.model.apps.DeploymentSpecBuilder; import org.apache.commons.collections.CollectionUtils; import java.util.HashMap; import java.util.List; import java.util.Map; import java.util.stream.Collectors; public class MemcachedReconciler implements Reconciler<Memcached> { private final KubernetesClient client; public MemcachedReconciler(KubernetesClient client) { this.client = client; } // TODO Fill in the rest of the reconciler @Override public UpdateControl<Memcached> reconcile( Memcached resource, Context context) { // TODO: fill in logic Deployment deployment = client.apps() .deployments() .inNamespace(resource.getMetadata().getNamespace()) .withName(resource.getMetadata().getName()) .get(); if (deployment == null) { Deployment newDeployment = createMemcachedDeployment(resource); client.apps().deployments().create(newDeployment); return UpdateControl.noUpdate(); } int currentReplicas = deployment.getSpec().getReplicas(); int requiredReplicas = resource.getSpec().getSize(); if (currentReplicas != requiredReplicas) { deployment.getSpec().setReplicas(requiredReplicas); client.apps().deployments().createOrReplace(deployment); return UpdateControl.noUpdate(); } List<Pod> pods = client.pods() .inNamespace(resource.getMetadata().getNamespace()) .withLabels(labelsForMemcached(resource)) .list() .getItems(); List<String> podNames = pods.stream().map(p -> p.getMetadata().getName()).collect(Collectors.toList()); if (resource.getStatus() == null || !CollectionUtils.isEqualCollection(podNames, resource.getStatus().getNodes())) { if (resource.getStatus() == null) resource.setStatus(new MemcachedStatus()); resource.getStatus().setNodes(podNames); return UpdateControl.updateResource(resource); } return UpdateControl.noUpdate(); } private Map<String, String> labelsForMemcached(Memcached m) { Map<String, String> labels = new HashMap<>(); labels.put(\"app\", \"memcached\"); labels.put(\"memcached_cr\", m.getMetadata().getName()); return labels; } private Deployment createMemcachedDeployment(Memcached m) { Deployment deployment = new DeploymentBuilder() .withMetadata( new ObjectMetaBuilder() .withName(m.getMetadata().getName()) .withNamespace(m.getMetadata().getNamespace()) .build()) .withSpec( new DeploymentSpecBuilder() .withReplicas(m.getSpec().getSize()) .withSelector( new LabelSelectorBuilder().withMatchLabels(labelsForMemcached(m)).build()) .withTemplate( new PodTemplateSpecBuilder() .withMetadata( new ObjectMetaBuilder().withLabels(labelsForMemcached(m)).build()) .withSpec( new PodSpecBuilder() .withContainers( new ContainerBuilder() .withImage(\"memcached:1.4.36-alpine\") .withName(\"memcached\") .withCommand(\"memcached\", \"-m=64\", \"-o\", 
\"modern\", \"-v\") .withPorts( new ContainerPortBuilder() .withContainerPort(11211) .withName(\"memcached\") .build()) .build()) .build()) .build()) .build()) .build(); deployment.addOwnerReference(m); return deployment; } }",
"Deployment deployment = client.apps() .deployments() .inNamespace(resource.getMetadata().getNamespace()) .withName(resource.getMetadata().getName()) .get();",
"if (deployment == null) { Deployment newDeployment = createMemcachedDeployment(resource); client.apps().deployments().create(newDeployment); return UpdateControl.noUpdate(); }",
"int currentReplicas = deployment.getSpec().getReplicas(); int requiredReplicas = resource.getSpec().getSize();",
"if (currentReplicas != requiredReplicas) { deployment.getSpec().setReplicas(requiredReplicas); client.apps().deployments().createOrReplace(deployment); return UpdateControl.noUpdate(); }",
"List<Pod> pods = client.pods() .inNamespace(resource.getMetadata().getNamespace()) .withLabels(labelsForMemcached(resource)) .list() .getItems(); List<String> podNames = pods.stream().map(p -> p.getMetadata().getName()).collect(Collectors.toList());",
"if (resource.getStatus() == null || !CollectionUtils.isEqualCollection(podNames, resource.getStatus().getNodes())) { if (resource.getStatus() == null) resource.setStatus(new MemcachedStatus()); resource.getStatus().setNodes(podNames); return UpdateControl.updateResource(resource); }",
"private Map<String, String> labelsForMemcached(Memcached m) { Map<String, String> labels = new HashMap<>(); labels.put(\"app\", \"memcached\"); labels.put(\"memcached_cr\", m.getMetadata().getName()); return labels; }",
"private Deployment createMemcachedDeployment(Memcached m) { Deployment deployment = new DeploymentBuilder() .withMetadata( new ObjectMetaBuilder() .withName(m.getMetadata().getName()) .withNamespace(m.getMetadata().getNamespace()) .build()) .withSpec( new DeploymentSpecBuilder() .withReplicas(m.getSpec().getSize()) .withSelector( new LabelSelectorBuilder().withMatchLabels(labelsForMemcached(m)).build()) .withTemplate( new PodTemplateSpecBuilder() .withMetadata( new ObjectMetaBuilder().withLabels(labelsForMemcached(m)).build()) .withSpec( new PodSpecBuilder() .withContainers( new ContainerBuilder() .withImage(\"memcached:1.4.36-alpine\") .withName(\"memcached\") .withCommand(\"memcached\", \"-m=64\", \"-o\", \"modern\", \"-v\") .withPorts( new ContainerPortBuilder() .withContainerPort(11211) .withName(\"memcached\") .build()) .build()) .build()) .build()) .build()) .build(); deployment.addOwnerReference(m); return deployment; }",
"mvn clean install",
"[INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 11.193 s [INFO] Finished at: 2021-05-26T12:16:54-04:00 [INFO] ------------------------------------------------------------------------",
"oc apply -f target/kubernetes/memcacheds.cache.example.com-v1.yml",
"customresourcedefinition.apiextensions.k8s.io/memcacheds.cache.example.com created",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: memcached-operator-admin subjects: - kind: ServiceAccount name: memcached-quarkus-operator-operator namespace: <operator_namespace> roleRef: kind: ClusterRole name: cluster-admin apiGroup: \"\"",
"oc apply -f rbac.yaml",
"java -jar target/quarkus-app/quarkus-run.jar",
"kubectl apply -f memcached-sample.yaml",
"memcached.cache.example.com/memcached-sample created",
"oc get all",
"NAME READY STATUS RESTARTS AGE pod/memcached-sample-6c765df685-mfqnz 1/1 Running 0 18s",
"make docker-build IMG=<registry>/<user>/<image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"oc apply -f target/kubernetes/memcacheds.cache.example.com-v1.yml",
"customresourcedefinition.apiextensions.k8s.io/memcacheds.cache.example.com created",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: memcached-operator-admin subjects: - kind: ServiceAccount name: memcached-quarkus-operator-operator namespace: <operator_namespace> roleRef: kind: ClusterRole name: cluster-admin apiGroup: \"\"",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc apply -f rbac.yaml",
"oc get all -n default",
"NAME READY UP-TO-DATE AVAILABLE AGE pod/memcached-quarkus-operator-operator-7db86ccf58-k4mlm 0/1 Running 0 18s",
"oc apply -f memcached-sample.yaml",
"memcached.cache.example.com/memcached-sample created",
"oc get all",
"NAME READY STATUS RESTARTS AGE pod/memcached-quarkus-operator-operator-7b766f4896-kxnzt 1/1 Running 1 79s pod/memcached-sample-6c765df685-mfqnz 1/1 Running 0 18s",
"make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>",
"docker push <registry>/<user>/<bundle_image_name>:<tag>",
"operator-sdk run bundle \\ 1 -n <namespace> \\ 2 <registry>/<user>/<bundle_image_name>:<tag> 3",
"Set the Operator SDK version to use. By default, what is installed on the system is used. This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit. OPERATOR_SDK_VERSION ?= v1.36.1-ocp",
"containers: - name: kube-rbac-proxy image: registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9:v4.17",
"docker-buildx: ## Build and push the Docker image for the manager for multi-platform support - docker buildx create --name project-v3-builder docker buildx use project-v3-builder - docker buildx build --push --platform=USD(PLATFORMS) --tag USD{IMG} -f Dockerfile . - docker buildx rm project-v3-builder",
"k8s.io/api v0.29.2 k8s.io/apimachinery v0.29.2 k8s.io/client-go v0.29.2 sigs.k8s.io/controller-runtime v0.17.3",
"go mod tidy",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: annotations: features.operators.openshift.io/disconnected: \"true\" features.operators.openshift.io/fips-compliant: \"false\" features.operators.openshift.io/proxy-aware: \"false\" features.operators.openshift.io/tls-profiles: \"false\" features.operators.openshift.io/token-auth-aws: \"false\" features.operators.openshift.io/token-auth-azure: \"false\" features.operators.openshift.io/token-auth-gcp: \"false\"",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: annotations: operators.openshift.io/infrastructure-features: '[\"disconnected\", \"proxy-aware\"]'",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: annotations: operators.openshift.io/valid-subscription: '[\"OpenShift Container Platform\"]'",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: annotations: operators.openshift.io/valid-subscription: '[\"3Scale Commercial License\", \"Red Hat Managed Integration\"]'",
"spec: spec: containers: - command: - /manager env: - name: <related_image_environment_variable> 1 value: \"<related_image_reference_with_tag>\" 2",
"// deploymentForMemcached returns a memcached Deployment object Spec: corev1.PodSpec{ Containers: []corev1.Container{{ - Image: \"memcached:1.4.36-alpine\", 1 + Image: os.Getenv(\"<related_image_environment_variable>\"), 2 Name: \"memcached\", Command: []string{\"memcached\", \"-m=64\", \"-o\", \"modern\", \"-v\"}, Ports: []corev1.ContainerPort{{",
"spec: containers: - name: memcached command: - memcached - -m=64 - -o - modern - -v - image: \"docker.io/memcached:1.4.36-alpine\" 1 + image: \"{{ lookup('env', '<related_image_environment_variable>') }}\" 2 ports: - containerPort: 11211",
"- group: demo.example.com version: v1alpha1 kind: Memcached chart: helm-charts/memcached overrideValues: 1 relatedImage: USD{<related_image_environment_variable>} 2",
"relatedImage: \"\"",
"containers: - name: {{ .Chart.Name }} securityContext: - toYaml {{ .Values.securityContext | nindent 12 }} image: \"{{ .Values.image.pullPolicy }} env: 1 - name: related_image 2 value: \"{{ .Values.relatedImage }}\" 3",
"BUNDLE_GEN_FLAGS ?= -q --overwrite --version USD(VERSION) USD(BUNDLE_METADATA_OPTS) # USE_IMAGE_DIGESTS defines if images are resolved via tags or digests # You can enable this value if you would like to use SHA Based Digests # To enable set flag to true USE_IMAGE_DIGESTS ?= false ifeq (USD(USE_IMAGE_DIGESTS), true) BUNDLE_GEN_FLAGS += --use-image-digests endif - USD(KUSTOMIZE) build config/manifests | operator-sdk generate bundle -q --overwrite --version USD(VERSION) USD(BUNDLE_METADATA_OPTS) 1 + USD(KUSTOMIZE) build config/manifests | operator-sdk generate bundle USD(BUNDLE_GEN_FLAGS) 2",
"make bundle USE_IMAGE_DIGESTS=true",
"metadata: annotations: operators.openshift.io/infrastructure-features: '[\"disconnected\"]'",
"labels: operatorframework.io/arch.<arch>: supported 1 operatorframework.io/os.<os>: supported 2",
"labels: operatorframework.io/os.linux: supported",
"labels: operatorframework.io/arch.amd64: supported",
"labels: operatorframework.io/arch.s390x: supported operatorframework.io/os.zos: supported operatorframework.io/os.linux: supported 1 operatorframework.io/arch.amd64: supported 2",
"metadata: annotations: operatorframework.io/suggested-namespace: <namespace> 1",
"metadata: annotations: operatorframework.io/suggested-namespace-template: 1 { \"apiVersion\": \"v1\", \"kind\": \"Namespace\", \"metadata\": { \"name\": \"vertical-pod-autoscaler-suggested-template\", \"annotations\": { \"openshift.io/node-selector\": \"\" } } }",
"module github.com/example-inc/memcached-operator go 1.19 require ( k8s.io/apimachinery v0.26.0 k8s.io/client-go v0.26.0 sigs.k8s.io/controller-runtime v0.14.1 operator-framework/operator-lib v0.11.0 )",
"import ( apiv1 \"github.com/operator-framework/api/pkg/operators/v1\" ) func NewUpgradeable(cl client.Client) (Condition, error) { return NewCondition(cl, \"apiv1.OperatorUpgradeable\") } cond, err := NewUpgradeable(cl);",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: webhook-operator.v0.0.1 spec: customresourcedefinitions: owned: - kind: WebhookTest name: webhooktests.webhook.operators.coreos.io 1 version: v1 install: spec: deployments: - name: webhook-operator-webhook strategy: deployment installModes: - supported: false type: OwnNamespace - supported: false type: SingleNamespace - supported: false type: MultiNamespace - supported: true type: AllNamespaces webhookdefinitions: - type: ValidatingAdmissionWebhook 2 admissionReviewVersions: - v1beta1 - v1 containerPort: 443 targetPort: 4343 deploymentName: webhook-operator-webhook failurePolicy: Fail generateName: vwebhooktest.kb.io rules: - apiGroups: - webhook.operators.coreos.io apiVersions: - v1 operations: - CREATE - UPDATE resources: - webhooktests sideEffects: None webhookPath: /validate-webhook-operators-coreos-io-v1-webhooktest - type: MutatingAdmissionWebhook 3 admissionReviewVersions: - v1beta1 - v1 containerPort: 443 targetPort: 4343 deploymentName: webhook-operator-webhook failurePolicy: Fail generateName: mwebhooktest.kb.io rules: - apiGroups: - webhook.operators.coreos.io apiVersions: - v1 operations: - CREATE - UPDATE resources: - webhooktests sideEffects: None webhookPath: /mutate-webhook-operators-coreos-io-v1-webhooktest - type: ConversionWebhook 4 admissionReviewVersions: - v1beta1 - v1 containerPort: 443 targetPort: 4343 deploymentName: webhook-operator-webhook generateName: cwebhooktest.kb.io sideEffects: None webhookPath: /convert conversionCRDs: - webhooktests.webhook.operators.coreos.io 5",
"- displayName: MongoDB Standalone group: mongodb.com kind: MongoDbStandalone name: mongodbstandalones.mongodb.com resources: - kind: Service name: '' version: v1 - kind: StatefulSet name: '' version: v1beta2 - kind: Pod name: '' version: v1 - kind: ConfigMap name: '' version: v1 specDescriptors: - description: Credentials for Ops Manager or Cloud Manager. displayName: Credentials path: credentials x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:selector:core:v1:Secret' - description: Project this deployment belongs to. displayName: Project path: project x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:selector:core:v1:ConfigMap' - description: MongoDB version to be installed. displayName: Version path: version x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:label' statusDescriptors: - description: The status of each of the pods for the MongoDB cluster. displayName: Pod Status path: pods x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:podStatuses' version: v1 description: >- MongoDB Deployment consisting of only one host. No replication of data.",
"required: - name: etcdclusters.etcd.database.coreos.com version: v1beta2 kind: EtcdCluster displayName: etcd Cluster description: Represents a cluster of etcd nodes.",
"versions: - name: v1alpha1 served: true storage: false - name: v1beta1 1 served: true storage: true",
"customresourcedefinitions: owned: - name: cluster.example.com version: v1beta1 1 kind: cluster displayName: Cluster",
"versions: - name: v1alpha1 served: false 1 storage: true",
"versions: - name: v1alpha1 served: false storage: false 1 - name: v1beta1 served: true storage: true 2",
"versions: - name: v1beta1 served: true storage: true",
"metadata: annotations: alm-examples: >- [{\"apiVersion\":\"etcd.database.coreos.com/v1beta2\",\"kind\":\"EtcdCluster\",\"metadata\":{\"name\":\"example\",\"namespace\":\"<operator_namespace>\"},\"spec\":{\"size\":3,\"version\":\"3.2.13\"}},{\"apiVersion\":\"etcd.database.coreos.com/v1beta2\",\"kind\":\"EtcdRestore\",\"metadata\":{\"name\":\"example-etcd-cluster\"},\"spec\":{\"etcdCluster\":{\"name\":\"example-etcd-cluster\"},\"backupStorageType\":\"S3\",\"s3\":{\"path\":\"<full-s3-path>\",\"awsSecret\":\"<aws-secret>\"}}},{\"apiVersion\":\"etcd.database.coreos.com/v1beta2\",\"kind\":\"EtcdBackup\",\"metadata\":{\"name\":\"example-etcd-cluster-backup\"},\"spec\":{\"etcdEndpoints\":[\"<etcd-cluster-endpoints>\"],\"storageType\":\"S3\",\"s3\":{\"path\":\"<full-s3-path>\",\"awsSecret\":\"<aws-secret>\"}}}]",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: my-operator-v1.2.3 annotations: operators.operatorframework.io/internal-objects: '[\"my.internal.crd1.io\",\"my.internal.crd2.io\"]' 1",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: my-operator-v1.2.3 annotations: operatorframework.io/initialization-resource: |- { \"apiVersion\": \"ocs.openshift.io/v1\", \"kind\": \"StorageCluster\", \"metadata\": { \"name\": \"example-storagecluster\" }, \"spec\": { \"manageNodes\": false, \"monPVCTemplate\": { \"spec\": { \"accessModes\": [ \"ReadWriteOnce\" ], \"resources\": { \"requests\": { \"storage\": \"10Gi\" } }, \"storageClassName\": \"gp2\" } }, \"storageDeviceSets\": [ { \"count\": 3, \"dataPVCTemplate\": { \"spec\": { \"accessModes\": [ \"ReadWriteOnce\" ], \"resources\": { \"requests\": { \"storage\": \"1Ti\" } }, \"storageClassName\": \"gp2\", \"volumeMode\": \"Block\" } }, \"name\": \"example-deviceset\", \"placement\": {}, \"portable\": true, \"resources\": {} } ] } }",
"make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>",
"docker push <registry>/<user>/<bundle_image_name>:<tag>",
"operator-sdk run bundle \\ 1 -n <namespace> \\ 2 <registry>/<user>/<bundle_image_name>:<tag> 3",
"make catalog-build CATALOG_IMG=<registry>/<user>/<index_image_name>:<tag>",
"make catalog-push CATALOG_IMG=<registry>/<user>/<index_image_name>:<tag>",
"make bundle-build bundle-push catalog-build catalog-push BUNDLE_IMG=<bundle_image_pull_spec> CATALOG_IMG=<index_image_pull_spec>",
"IMAGE_TAG_BASE=quay.io/example/my-operator",
"make bundle-build bundle-push catalog-build catalog-push",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: cs-memcached namespace: <operator_namespace> spec: displayName: My Test publisher: Company sourceType: grpc grpcPodConfig: securityContextConfig: <security_mode> 1 image: quay.io/example/memcached-catalog:v0.0.1 2 updateStrategy: registryPoll: interval: 10m",
"oc get catalogsource",
"NAME DISPLAY TYPE PUBLISHER AGE cs-memcached My Test grpc Company 4h31m",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-test namespace: <operator_namespace> spec: targetNamespaces: - <operator_namespace>",
"\\ufeffapiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: catalogtest namespace: <catalog_namespace> spec: channel: \"alpha\" installPlanApproval: Manual name: catalog source: cs-memcached sourceNamespace: <operator_namespace> startingCSV: memcached-operator.v0.0.1",
"oc get og",
"NAME AGE my-test 4h40m",
"oc get csv",
"NAME DISPLAY VERSION REPLACES PHASE memcached-operator.v0.0.1 Test 0.0.1 Succeeded",
"oc get pods",
"NAME READY STATUS RESTARTS AGE 9098d908802769fbde8bd45255e69710a9f8420a8f3d814abe88b68f8ervdj6 0/1 Completed 0 4h33m catalog-controller-manager-7fd5b7b987-69s4n 2/2 Running 0 4h32m cs-memcached-7622r 1/1 Running 0 4h33m",
"operator-sdk run bundle <registry>/<user>/memcached-operator:v0.0.1",
"INFO[0006] Creating a File-Based Catalog of the bundle \"quay.io/demo/memcached-operator:v0.0.1\" INFO[0008] Generated a valid File-Based Catalog INFO[0012] Created registry pod: quay-io-demo-memcached-operator-v1-0-1 INFO[0012] Created CatalogSource: memcached-operator-catalog INFO[0012] OperatorGroup \"operator-sdk-og\" created INFO[0012] Created Subscription: memcached-operator-v0-0-1-sub INFO[0015] Approved InstallPlan install-h9666 for the Subscription: memcached-operator-v0-0-1-sub INFO[0015] Waiting for ClusterServiceVersion \"my-project/memcached-operator.v0.0.1\" to reach 'Succeeded' phase INFO[0015] Waiting for ClusterServiceVersion \"\"my-project/memcached-operator.v0.0.1\" to appear INFO[0026] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.1\" phase: Pending INFO[0028] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.1\" phase: Installing INFO[0059] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.1\" phase: Succeeded INFO[0059] OLM has successfully installed \"memcached-operator.v0.0.1\"",
"operator-sdk run bundle-upgrade <registry>/<user>/memcached-operator:v0.0.2",
"INFO[0002] Found existing subscription with name memcached-operator-v0-0-1-sub and namespace my-project INFO[0002] Found existing catalog source with name memcached-operator-catalog and namespace my-project INFO[0008] Generated a valid Upgraded File-Based Catalog INFO[0009] Created registry pod: quay-io-demo-memcached-operator-v0-0-2 INFO[0009] Updated catalog source memcached-operator-catalog with address and annotations INFO[0010] Deleted previous registry pod with name \"quay-io-demo-memcached-operator-v0-0-1\" INFO[0041] Approved InstallPlan install-gvcjh for the Subscription: memcached-operator-v0-0-1-sub INFO[0042] Waiting for ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" to reach 'Succeeded' phase INFO[0019] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" phase: Pending INFO[0042] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" phase: InstallReady INFO[0043] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" phase: Installing INFO[0044] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" phase: Succeeded INFO[0044] Successfully upgraded to \"memcached-operator.v0.0.2\"",
"operator-sdk cleanup memcached-operator",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: annotations: \"olm.properties\": '[{\"type\": \"olm.maxOpenShiftVersion\", \"value\": \"<cluster_version>\"}]' 1",
"com.redhat.openshift.versions: \"v4.7-v4.9\" 1",
"LABEL com.redhat.openshift.versions=\"<versions>\" 1",
"spec: securityContext: seccompProfile: type: RuntimeDefault 1 runAsNonRoot: true containers: - name: <operator_workload_container> securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL",
"spec: securityContext: 1 runAsNonRoot: true containers: - name: <operator_workload_container> securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL",
"containers: - name: my-container securityContext: allowPrivilegeEscalation: false capabilities: add: - \"NET_ADMIN\"",
"install: spec: clusterPermissions: - rules: - apiGroups: - security.openshift.io resourceNames: - privileged resources: - securitycontextconstraints verbs: - use serviceAccountName: default",
"spec: apiservicedefinitions:{} description: The <operator_name> requires a privileged pod security admission label set on the Operator's namespace. The Operator's agents require escalated permissions to restart the node if the node needs remediation.",
"install: spec: clusterPermissions: - rules: - apiGroups: - \"cloudcredential.openshift.io\" resources: - credentialsrequests verbs: - create - delete - get - list - patch - update - watch",
"metadata: annotations: features.operators.openshift.io/token-auth-aws: \"true\"",
"// Get ENV var roleARN := os.Getenv(\"ROLEARN\") setupLog.Info(\"getting role ARN\", \"role ARN = \", roleARN) webIdentityTokenPath := \"/var/run/secrets/openshift/serviceaccount/token\"",
"import ( minterv1 \"github.com/openshift/cloud-credential-operator/pkg/apis/cloudcredential/v1\" corev1 \"k8s.io/api/core/v1\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" ) var in = minterv1.AWSProviderSpec{ StatementEntries: []minterv1.StatementEntry{ { Action: []string{ \"s3:*\", }, Effect: \"Allow\", Resource: \"arn:aws:s3:*:*:*\", }, }, STSIAMRoleARN: \"<role_arn>\", } var codec = minterv1.Codec var ProviderSpec, _ = codec.EncodeProviderSpec(in.DeepCopyObject()) const ( name = \"<credential_request_name>\" namespace = \"<namespace_name>\" ) var CredentialsRequestTemplate = &minterv1.CredentialsRequest{ ObjectMeta: metav1.ObjectMeta{ Name: name, Namespace: \"openshift-cloud-credential-operator\", }, Spec: minterv1.CredentialsRequestSpec{ ProviderSpec: ProviderSpec, SecretRef: corev1.ObjectReference{ Name: \"<secret_name>\", Namespace: namespace, }, ServiceAccountNames: []string{ \"<service_account_name>\", }, CloudTokenPath: \"\", }, }",
"// CredentialsRequest is a struct that represents a request for credentials type CredentialsRequest struct { APIVersion string `yaml:\"apiVersion\"` Kind string `yaml:\"kind\"` Metadata struct { Name string `yaml:\"name\"` Namespace string `yaml:\"namespace\"` } `yaml:\"metadata\"` Spec struct { SecretRef struct { Name string `yaml:\"name\"` Namespace string `yaml:\"namespace\"` } `yaml:\"secretRef\"` ProviderSpec struct { APIVersion string `yaml:\"apiVersion\"` Kind string `yaml:\"kind\"` StatementEntries []struct { Effect string `yaml:\"effect\"` Action []string `yaml:\"action\"` Resource string `yaml:\"resource\"` } `yaml:\"statementEntries\"` STSIAMRoleARN string `yaml:\"stsIAMRoleARN\"` } `yaml:\"providerSpec\"` // added new field CloudTokenPath string `yaml:\"cloudTokenPath\"` } `yaml:\"spec\"` } // ConsumeCredsRequestAddingTokenInfo is a function that takes a YAML filename and two strings as arguments // It unmarshals the YAML file to a CredentialsRequest object and adds the token information. func ConsumeCredsRequestAddingTokenInfo(fileName, tokenString, tokenPath string) (*CredentialsRequest, error) { // open a file containing YAML form of a CredentialsRequest file, err := os.Open(fileName) if err != nil { return nil, err } defer file.Close() // create a new CredentialsRequest object cr := &CredentialsRequest{} // decode the yaml file to the object decoder := yaml.NewDecoder(file) err = decoder.Decode(cr) if err != nil { return nil, err } // assign the string to the existing field in the object cr.Spec.CloudTokenPath = tokenPath // return the modified object return cr, nil }",
"// apply CredentialsRequest on install credReq := credreq.CredentialsRequestTemplate credReq.Spec.CloudTokenPath = webIdentityTokenPath c := mgr.GetClient() if err := c.Create(context.TODO(), credReq); err != nil { if !errors.IsAlreadyExists(err) { setupLog.Error(err, \"unable to create CredRequest\") os.Exit(1) } }",
"// WaitForSecret is a function that takes a Kubernetes client, a namespace, and a v1 \"k8s.io/api/core/v1\" name as arguments // It waits until the secret object with the given name exists in the given namespace // It returns the secret object or an error if the timeout is exceeded func WaitForSecret(client kubernetes.Interface, namespace, name string) (*v1.Secret, error) { // set a timeout of 10 minutes timeout := time.After(10 * time.Minute) 1 // set a polling interval of 10 seconds ticker := time.NewTicker(10 * time.Second) // loop until the timeout or the secret is found for { select { case <-timeout: // timeout is exceeded, return an error return nil, fmt.Errorf(\"timed out waiting for secret %s in namespace %s\", name, namespace) // add to this error with a pointer to instructions for following a manual path to a Secret that will work on STS case <-ticker.C: // polling interval is reached, try to get the secret secret, err := client.CoreV1().Secrets(namespace).Get(context.Background(), name, metav1.GetOptions{}) if err != nil { if errors.IsNotFound(err) { // secret does not exist yet, continue waiting continue } else { // some other error occurred, return it return nil, err } } else { // secret is found, return it return secret, nil } } } }",
"func SharedCredentialsFileFromSecret(secret *corev1.Secret) (string, error) { var data []byte switch { case len(secret.Data[\"credentials\"]) > 0: data = secret.Data[\"credentials\"] default: return \"\", errors.New(\"invalid secret for aws credentials\") } f, err := ioutil.TempFile(\"\", \"aws-shared-credentials\") if err != nil { return \"\", errors.Wrap(err, \"failed to create file for shared credentials\") } defer f.Close() if _, err := f.Write(data); err != nil { return \"\", errors.Wrapf(err, \"failed to write credentials to %s\", f.Name()) } return f.Name(), nil }",
"sharedCredentialsFile, err := SharedCredentialsFileFromSecret(secret) if err != nil { // handle error } options := session.Options{ SharedConfigState: session.SharedConfigEnable, SharedConfigFiles: []string{sharedCredentialsFile}, }",
"#!/bin/bash set -x AWS_ACCOUNT_ID=USD(aws sts get-caller-identity --query \"Account\" --output text) OIDC_PROVIDER=USD(oc get authentication cluster -ojson | jq -r .spec.serviceAccountIssuer | sed -e \"s/^https:\\/\\///\") NAMESPACE=my-namespace SERVICE_ACCOUNT_NAME=\"my-service-account\" POLICY_ARN_STRINGS=\"arn:aws:iam::aws:policy/AmazonS3FullAccess\" read -r -d '' TRUST_RELATIONSHIP <<EOF { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"arn:aws:iam::USD{AWS_ACCOUNT_ID}:oidc-provider/USD{OIDC_PROVIDER}\" }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"USD{OIDC_PROVIDER}:sub\": \"system:serviceaccount:USD{NAMESPACE}:USD{SERVICE_ACCOUNT_NAME}\" } } } ] } EOF echo \"USD{TRUST_RELATIONSHIP}\" > trust.json aws iam create-role --role-name \"USDSERVICE_ACCOUNT_NAME\" --assume-role-policy-document file://trust.json --description \"role for demo\" while IFS= read -r POLICY_ARN; do echo -n \"Attaching USDPOLICY_ARN ... \" aws iam attach-role-policy --role-name \"USDSERVICE_ACCOUNT_NAME\" --policy-arn \"USD{POLICY_ARN}\" echo \"ok.\" done <<< \"USDPOLICY_ARN_STRINGS\"",
"oc exec operator-pod -n <namespace_name> -- cat /var/run/secrets/openshift/serviceaccount/token",
"oc exec operator-pod -n <namespace_name> -- cat /<path>/<to>/<secret_name> 1",
"aws sts assume-role-with-web-identity --role-arn USDROLEARN --role-session-name <session_name> --web-identity-token USDTOKEN",
"install: spec: clusterPermissions: - rules: - apiGroups: - \"cloudcredential.openshift.io\" resources: - credentialsrequests verbs: - create - delete - get - list - patch - update - watch",
"metadata: annotations: features.operators.openshift.io/token-auth-azure: \"true\"",
"// Get ENV var clientID := os.Getenv(\"CLIENTID\") tenantID := os.Getenv(\"TENANTID\") subscriptionID := os.Getenv(\"SUBSCRIPTIONID\") azureFederatedTokenFile := \"/var/run/secrets/openshift/serviceaccount/token\"",
"// apply CredentialsRequest on install credReqTemplate.Spec.AzureProviderSpec.AzureClientID = clientID credReqTemplate.Spec.AzureProviderSpec.AzureTenantID = tenantID credReqTemplate.Spec.AzureProviderSpec.AzureRegion = \"centralus\" credReqTemplate.Spec.AzureProviderSpec.AzureSubscriptionID = subscriptionID credReqTemplate.CloudTokenPath = azureFederatedTokenFile c := mgr.GetClient() if err := c.Create(context.TODO(), credReq); err != nil { if !errors.IsAlreadyExists(err) { setupLog.Error(err, \"unable to create CredRequest\") os.Exit(1) } }",
"// WaitForSecret is a function that takes a Kubernetes client, a namespace, and a v1 \"k8s.io/api/core/v1\" name as arguments // It waits until the secret object with the given name exists in the given namespace // It returns the secret object or an error if the timeout is exceeded func WaitForSecret(client kubernetes.Interface, namespace, name string) (*v1.Secret, error) { // set a timeout of 10 minutes timeout := time.After(10 * time.Minute) 1 // set a polling interval of 10 seconds ticker := time.NewTicker(10 * time.Second) // loop until the timeout or the secret is found for { select { case <-timeout: // timeout is exceeded, return an error return nil, fmt.Errorf(\"timed out waiting for secret %s in namespace %s\", name, namespace) // add to this error with a pointer to instructions for following a manual path to a Secret that will work on STS case <-ticker.C: // polling interval is reached, try to get the secret secret, err := client.CoreV1().Secrets(namespace).Get(context.Background(), name, metav1.GetOptions{}) if err != nil { if errors.IsNotFound(err) { // secret does not exist yet, continue waiting continue } else { // some other error occurred, return it return nil, err } } else { // secret is found, return it return secret, nil } } } }",
"//iam.googleapis.com/projects/<project_number>/locations/global/workloadIdentityPools/<pool_id>/providers/<provider_id>",
"<service_account_name>@<project_id>.iam.gserviceaccount.com",
"volumeMounts: - name: bound-sa-token mountPath: /var/run/secrets/openshift/serviceaccount readOnly: true volumes: # This service account token can be used to provide identity outside the cluster. - name: bound-sa-token projected: sources: - serviceAccountToken: path: token audience: openshift",
"install: spec: clusterPermissions: - rules: - apiGroups: - \"cloudcredential.openshift.io\" resources: - credentialsrequests verbs: - create - delete - get - list - patch - update - watch",
"metadata: annotations: features.operators.openshift.io/token-auth-gcp: \"true\"",
"// Get ENV var audience := os.Getenv(\"AUDIENCE\") serviceAccountEmail := os.Getenv(\"SERVICE_ACCOUNT_EMAIL\") gcpIdentityTokenFile := \"/var/run/secrets/openshift/serviceaccount/token\"",
"// apply CredentialsRequest on install credReqTemplate.Spec.GCPProviderSpec.Audience = audience credReqTemplate.Spec.GCPProviderSpec.ServiceAccountEmail = serviceAccountEmail credReqTemplate.CloudTokenPath = gcpIdentityTokenFile c := mgr.GetClient() if err := c.Create(context.TODO(), credReq); err != nil { if !errors.IsAlreadyExists(err) { setupLog.Error(err, \"unable to create CredRequest\") os.Exit(1) } }",
"// WaitForSecret is a function that takes a Kubernetes client, a namespace, and a v1 \"k8s.io/api/core/v1\" name as arguments // It waits until the secret object with the given name exists in the given namespace // It returns the secret object or an error if the timeout is exceeded func WaitForSecret(client kubernetes.Interface, namespace, name string) (*v1.Secret, error) { // set a timeout of 10 minutes timeout := time.After(10 * time.Minute) 1 // set a polling interval of 10 seconds ticker := time.NewTicker(10 * time.Second) // loop until the timeout or the secret is found for { select { case <-timeout: // timeout is exceeded, return an error return nil, fmt.Errorf(\"timed out waiting for secret %s in namespace %s\", name, namespace) // add to this error with a pointer to instructions for following a manual path to a Secret that will work case <-ticker.C: // polling interval is reached, try to get the secret secret, err := client.CoreV1().Secrets(namespace).Get(context.Background(), name, metav1.GetOptions{}) if err != nil { if errors.IsNotFound(err) { // secret does not exist yet, continue waiting continue } else { // some other error occurred, return it return nil, err } } else { // secret is found, return it return secret, nil } } } }",
"service_account_json := secret.StringData[\"service_account.json\"]",
"operator-sdk scorecard <bundle_dir_or_image> [flags]",
"operator-sdk scorecard -h",
"./bundle └── tests └── scorecard └── config.yaml",
"kind: Configuration apiversion: scorecard.operatorframework.io/v1alpha3 metadata: name: config stages: - parallel: true tests: - image: quay.io/operator-framework/scorecard-test:v1.36.1 entrypoint: - scorecard-test - basic-check-spec labels: suite: basic test: basic-check-spec-test - image: quay.io/operator-framework/scorecard-test:v1.36.1 entrypoint: - scorecard-test - olm-bundle-validation labels: suite: olm test: olm-bundle-validation-test",
"make bundle",
"operator-sdk scorecard <bundle_dir_or_image>",
"{ \"apiVersion\": \"scorecard.operatorframework.io/v1alpha3\", \"kind\": \"TestList\", \"items\": [ { \"kind\": \"Test\", \"apiVersion\": \"scorecard.operatorframework.io/v1alpha3\", \"spec\": { \"image\": \"quay.io/operator-framework/scorecard-test:v1.36.1\", \"entrypoint\": [ \"scorecard-test\", \"olm-bundle-validation\" ], \"labels\": { \"suite\": \"olm\", \"test\": \"olm-bundle-validation-test\" } }, \"status\": { \"results\": [ { \"name\": \"olm-bundle-validation\", \"log\": \"time=\\\"2020-06-10T19:02:49Z\\\" level=debug msg=\\\"Found manifests directory\\\" name=bundle-test\\ntime=\\\"2020-06-10T19:02:49Z\\\" level=debug msg=\\\"Found metadata directory\\\" name=bundle-test\\ntime=\\\"2020-06-10T19:02:49Z\\\" level=debug msg=\\\"Getting mediaType info from manifests directory\\\" name=bundle-test\\ntime=\\\"2020-06-10T19:02:49Z\\\" level=info msg=\\\"Found annotations file\\\" name=bundle-test\\ntime=\\\"2020-06-10T19:02:49Z\\\" level=info msg=\\\"Could not find optional dependencies file\\\" name=bundle-test\\n\", \"state\": \"pass\" } ] } } ] }",
"-------------------------------------------------------------------------------- Image: quay.io/operator-framework/scorecard-test:v1.36.1 Entrypoint: [scorecard-test olm-bundle-validation] Labels: \"suite\":\"olm\" \"test\":\"olm-bundle-validation-test\" Results: Name: olm-bundle-validation State: pass Log: time=\"2020-07-15T03:19:02Z\" level=debug msg=\"Found manifests directory\" name=bundle-test time=\"2020-07-15T03:19:02Z\" level=debug msg=\"Found metadata directory\" name=bundle-test time=\"2020-07-15T03:19:02Z\" level=debug msg=\"Getting mediaType info from manifests directory\" name=bundle-test time=\"2020-07-15T03:19:02Z\" level=info msg=\"Found annotations file\" name=bundle-test time=\"2020-07-15T03:19:02Z\" level=info msg=\"Could not find optional dependencies file\" name=bundle-test",
"operator-sdk scorecard <bundle_dir_or_image> -o text --selector=test=basic-check-spec-test",
"operator-sdk scorecard <bundle_dir_or_image> -o text --selector=suite=olm",
"operator-sdk scorecard <bundle_dir_or_image> -o text --selector='test in (basic-check-spec-test,olm-bundle-validation-test)'",
"apiVersion: scorecard.operatorframework.io/v1alpha3 kind: Configuration metadata: name: config stages: - parallel: true 1 tests: - entrypoint: - scorecard-test - basic-check-spec image: quay.io/operator-framework/scorecard-test:v1.36.1 labels: suite: basic test: basic-check-spec-test - entrypoint: - scorecard-test - olm-bundle-validation image: quay.io/operator-framework/scorecard-test:v1.36.1 labels: suite: olm test: olm-bundle-validation-test",
"// Copyright 2020 The Operator-SDK Authors // // Licensed under the Apache License, Version 2.0 (the \"License\"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an \"AS IS\" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package main import ( \"encoding/json\" \"fmt\" \"log\" \"os\" scapiv1alpha3 \"github.com/operator-framework/api/pkg/apis/scorecard/v1alpha3\" apimanifests \"github.com/operator-framework/api/pkg/manifests\" ) // This is the custom scorecard test example binary // As with the Redhat scorecard test image, the bundle that is under // test is expected to be mounted so that tests can inspect the // bundle contents as part of their test implementations. // The actual test is to be run is named and that name is passed // as an argument to this binary. This argument mechanism allows // this binary to run various tests all from within a single // test image. const PodBundleRoot = \"/bundle\" func main() { entrypoint := os.Args[1:] if len(entrypoint) == 0 { log.Fatal(\"Test name argument is required\") } // Read the pod's untar'd bundle from a well-known path. cfg, err := apimanifests.GetBundleFromDir(PodBundleRoot) if err != nil { log.Fatal(err.Error()) } var result scapiv1alpha3.TestStatus // Names of the custom tests which would be passed in the // `operator-sdk` command. switch entrypoint[0] { case CustomTest1Name: result = CustomTest1(cfg) case CustomTest2Name: result = CustomTest2(cfg) default: result = printValidTests() } // Convert scapiv1alpha3.TestResult to json. prettyJSON, err := json.MarshalIndent(result, \"\", \" \") if err != nil { log.Fatal(\"Failed to generate json\", err) } fmt.Printf(\"%s\\n\", string(prettyJSON)) } // printValidTests will print out full list of test names to give a hint to the end user on what the valid tests are. func printValidTests() scapiv1alpha3.TestStatus { result := scapiv1alpha3.TestResult{} result.State = scapiv1alpha3.FailState result.Errors = make([]string, 0) result.Suggestions = make([]string, 0) str := fmt.Sprintf(\"Valid tests for this image include: %s %s\", CustomTest1Name, CustomTest2Name) result.Errors = append(result.Errors, str) return scapiv1alpha3.TestStatus{ Results: []scapiv1alpha3.TestResult{result}, } } const ( CustomTest1Name = \"customtest1\" CustomTest2Name = \"customtest2\" ) // Define any operator specific custom tests here. // CustomTest1 and CustomTest2 are example test functions. Relevant operator specific // test logic is to be implemented in similarly. 
func CustomTest1(bundle *apimanifests.Bundle) scapiv1alpha3.TestStatus { r := scapiv1alpha3.TestResult{} r.Name = CustomTest1Name r.State = scapiv1alpha3.PassState r.Errors = make([]string, 0) r.Suggestions = make([]string, 0) almExamples := bundle.CSV.GetAnnotations()[\"alm-examples\"] if almExamples == \"\" { fmt.Println(\"no alm-examples in the bundle CSV\") } return wrapResult(r) } func CustomTest2(bundle *apimanifests.Bundle) scapiv1alpha3.TestStatus { r := scapiv1alpha3.TestResult{} r.Name = CustomTest2Name r.State = scapiv1alpha3.PassState r.Errors = make([]string, 0) r.Suggestions = make([]string, 0) almExamples := bundle.CSV.GetAnnotations()[\"alm-examples\"] if almExamples == \"\" { fmt.Println(\"no alm-examples in the bundle CSV\") } return wrapResult(r) } func wrapResult(r scapiv1alpha3.TestResult) scapiv1alpha3.TestStatus { return scapiv1alpha3.TestStatus{ Results: []scapiv1alpha3.TestResult{r}, } }",
"operator-sdk bundle validate <bundle_dir_or_image> <flags>",
"./bundle ├── manifests │ ├── cache.my.domain_memcacheds.yaml │ └── memcached-operator.clusterserviceversion.yaml └── metadata └── annotations.yaml",
"INFO[0000] All validation tests have completed successfully",
"ERRO[0000] Error: Value cache.example.com/v1alpha1, Kind=Memcached: CRD \"cache.example.com/v1alpha1, Kind=Memcached\" is present in bundle \"\" but not defined in CSV",
"WARN[0000] Warning: Value : (memcached-operator.v0.0.1) annotations not found INFO[0000] All validation tests have completed successfully",
"operator-sdk bundle validate -h",
"operator-sdk bundle validate <bundle_dir_or_image> --select-optional <test_label>",
"operator-sdk bundle validate ./bundle",
"operator-sdk bundle validate <bundle_registry>/<bundle_image_name>:<tag>",
"operator-sdk bundle validate <bundle_dir_or_image> --select-optional <test_label>",
"ERRO[0000] Error: Value apiextensions.k8s.io/v1, Kind=CustomResource: unsupported media type registry+v1 for bundle object WARN[0000] Warning: Value k8sevent.v0.0.1: owned CRD \"k8sevents.k8s.k8sevent.com\" has an empty description",
"operator-sdk bundle validate ./bundle --select-optional name=multiarch",
"INFO[0020] All validation tests have completed successfully",
"ERRO[0016] Error: Value test-operator.v0.0.1: not all images specified are providing the support described via the CSV labels. Note that (SO.architecture): (linux.ppc64le) was not found for the image(s) [quay.io/example-org/test-operator:v1alpha1] ERRO[0016] Error: Value test-operator.v0.0.1: not all images specified are providing the support described via the CSV labels. Note that (SO.architecture): (linux.s390x) was not found for the image(s) [quay.io/example-org/test-operator:v1alpha1] ERRO[0016] Error: Value test-operator.v0.0.1: not all images specified are providing the support described via the CSV labels. Note that (SO.architecture): (linux.amd64) was not found for the image(s) [quay.io/example-org/test-operator:v1alpha1] ERRO[0016] Error: Value test-operator.v0.0.1: not all images specified are providing the support described via the CSV labels. Note that (SO.architecture): (linux.arm64) was not found for the image(s) [quay.io/example-org/test-operator:v1alpha1]",
"WARN[0014] Warning: Value test-operator.v0.0.1: check if the CSV is missing the label (operatorframework.io/arch.<value>) for the Arch(s): [\"amd64\" \"arm64\" \"ppc64le\" \"s390x\"]. Be aware that your Operator manager image [\"quay.io/example-org/test-operator:v1alpha1\"] provides this support. Thus, it is very likely that you want to provide it and if you support more than amd64 architectures, you MUST,use the required labels for all which are supported.Otherwise, your solution cannot be listed on the cluster for these architectures",
"// Simple query nn := types.NamespacedName{ Name: \"cluster\", } infraConfig := &configv1.Infrastructure{} err = crClient.Get(context.Background(), nn, infraConfig) if err != nil { return err } fmt.Printf(\"using crclient: %v\\n\", infraConfig.Status.ControlPlaneTopology) fmt.Printf(\"using crclient: %v\\n\", infraConfig.Status.InfrastructureTopology)",
"operatorConfigInformer := configinformer.NewSharedInformerFactoryWithOptions(configClient, 2*time.Second) infrastructureLister = operatorConfigInformer.Config().V1().Infrastructures().Lister() infraConfig, err := configClient.ConfigV1().Infrastructures().Get(context.Background(), \"cluster\", metav1.GetOptions{}) if err != nil { return err } // fmt.Printf(\"%v\\n\", infraConfig) fmt.Printf(\"%v\\n\", infraConfig.Status.ControlPlaneTopology) fmt.Printf(\"%v\\n\", infraConfig.Status.InfrastructureTopology)",
"../prometheus",
"package controllers import ( \"github.com/prometheus/client_golang/prometheus\" \"sigs.k8s.io/controller-runtime/pkg/metrics\" ) var ( widgets = prometheus.NewCounter( prometheus.CounterOpts{ Name: \"widgets_total\", Help: \"Number of widgets processed\", }, ) widgetFailures = prometheus.NewCounter( prometheus.CounterOpts{ Name: \"widget_failures_total\", Help: \"Number of failed widgets\", }, ) ) func init() { // Register custom metrics with the global prometheus registry metrics.Registry.MustRegister(widgets, widgetFailures) }",
"func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { // Add metrics widgets.Inc() widgetFailures.Inc() return ctrl.Result{}, nil }",
"make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: prometheus-k8s-role namespace: memcached-operator-system rules: - apiGroups: - \"\" resources: - endpoints - pods - services - nodes - secrets verbs: - get - list - watch",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: prometheus-k8s-rolebinding namespace: memcached-operator-system roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: prometheus-k8s-role subjects: - kind: ServiceAccount name: prometheus-k8s namespace: openshift-monitoring",
"oc apply -f config/prometheus/role.yaml",
"oc apply -f config/prometheus/rolebinding.yaml",
"oc label namespace <operator_namespace> openshift.io/cluster-monitoring=\"true\"",
"operator-sdk init --plugins=ansible --domain=testmetrics.com",
"operator-sdk create api --group metrics --version v1 --kind Testmetrics --generate-role",
"--- tasks file for Memcached - name: start k8sstatus k8s: definition: kind: Deployment apiVersion: apps/v1 metadata: name: '{{ ansible_operator_meta.name }}-memcached' namespace: '{{ ansible_operator_meta.namespace }}' spec: replicas: \"{{size}}\" selector: matchLabels: app: memcached template: metadata: labels: app: memcached spec: containers: - name: memcached command: - memcached - -m=64 - -o - modern - -v image: \"docker.io/memcached:1.4.36-alpine\" ports: - containerPort: 11211 - osdk_metric: name: my_thing_counter description: This metric counts things counter: {} - osdk_metric: name: my_counter_metric description: Add 3.14 to the counter counter: increment: yes - osdk_metric: name: my_gauge_metric description: Create my gauge and set it to 2. gauge: set: 2 - osdk_metric: name: my_histogram_metric description: Observe my histogram histogram: observe: 2 - osdk_metric: name: my_summary_metric description: Observe my summary summary: observe: 2",
"make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make install",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"apiVersion: metrics.testmetrics.com/v1 kind: Testmetrics metadata: name: testmetrics-sample spec: size: 1",
"oc create -f config/samples/metrics_v1_testmetrics.yaml",
"oc get pods",
"NAME READY STATUS RESTARTS AGE ansiblemetrics-controller-manager-<id> 2/2 Running 0 149m testmetrics-sample-memcached-<id> 1/1 Running 0 147m",
"oc get ep",
"NAME ENDPOINTS AGE ansiblemetrics-controller-manager-metrics-service 10.129.2.70:8443 150m",
"token=`oc create token prometheus-k8s -n openshift-monitoring`",
"oc exec ansiblemetrics-controller-manager-<id> -- curl -k -H \"Authoriza tion: Bearer USDtoken\" 'https://10.129.2.70:8443/metrics' | grep my_counter",
"HELP my_counter_metric Add 3.14 to the counter TYPE my_counter_metric counter my_counter_metric 2",
"oc exec ansiblemetrics-controller-manager-<id> -- curl -k -H \"Authoriza tion: Bearer USDtoken\" 'https://10.129.2.70:8443/metrics' | grep gauge",
"HELP my_gauge_metric Create my gauge and set it to 2.",
"oc exec ansiblemetrics-controller-manager-<id> -- curl -k -H \"Authoriza tion: Bearer USDtoken\" 'https://10.129.2.70:8443/metrics' | grep Observe",
"HELP my_histogram_metric Observe my histogram HELP my_summary_metric Observe my summary",
"import ( \"github.com/operator-framework/operator-sdk/pkg/leader\" ) func main() { err = leader.Become(context.TODO(), \"memcached-operator-lock\") if err != nil { log.Error(err, \"Failed to retry for leader lock\") os.Exit(1) } }",
"import ( \"sigs.k8s.io/controller-runtime/pkg/manager\" ) func main() { opts := manager.Options{ LeaderElection: true, LeaderElectionID: \"memcached-operator-lock\" } mgr, err := manager.New(cfg, opts) }",
"docker manifest inspect <image_manifest> 1",
"{ \"manifests\": [ { \"digest\": \"sha256:c0669ef34cdc14332c0f1ab0c2c01acb91d96014b172f1a76f3a39e63d1f0bda\", \"mediaType\": \"application/vnd.docker.distribution.manifest.v2+json\", \"platform\": { \"architecture\": \"amd64\", \"os\": \"linux\" }, \"size\": 528 }, { \"digest\": \"sha256:30e6d35703c578ee703230b9dc87ada2ba958c1928615ac8a674fcbbcbb0f281\", \"mediaType\": \"application/vnd.docker.distribution.manifest.v2+json\", \"platform\": { \"architecture\": \"arm64\", \"os\": \"linux\", \"variant\": \"v8\" }, \"size\": 528 }, ] }",
"docker inspect <image>",
"FROM golang:1.19 as builder ARG TARGETOS ARG TARGETARCH RUN CGO_ENABLED=0 GOOS=USD{TARGETOS:-linux} GOARCH=USD{TARGETARCH} go build -a -o manager main.go 1",
"PLATFORMS ?= linux/arm64,linux/amd64 1 .PHONY: docker-buildx",
"make docker-buildx IMG=<image_registry>/<organization_name>/<repository_name>:<version_or_sha>",
"apiVersion: v1 kind: Pod metadata: name: s1 spec: containers: - name: <container_name> image: docker.io/<org>/<image_name>",
"apiVersion: v1 kind: Pod metadata: name: s1 spec: containers: - name: <container_name> image: docker.io/<org>/<image_name> affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: 1 nodeSelectorTerms: 2 - matchExpressions: 3 - key: kubernetes.io/arch 4 operator: In values: - amd64 - arm64 - ppc64le - s390x - key: kubernetes.io/os 5 operator: In values: - linux",
"Template: corev1.PodTemplateSpec{ Spec: corev1.PodSpec{ Affinity: &corev1.Affinity{ NodeAffinity: &corev1.NodeAffinity{ RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{ NodeSelectorTerms: []corev1.NodeSelectorTerm{ { MatchExpressions: []corev1.NodeSelectorRequirement{ { Key: \"kubernetes.io/arch\", Operator: \"In\", Values: []string{\"amd64\",\"arm64\",\"ppc64le\",\"s390x\"}, }, { Key: \"kubernetes.io/os\", Operator: \"In\", Values: []string{\"linux\"}, }, }, }, }, }, }, }, SecurityContext: &corev1.PodSecurityContext{ }, Containers: []corev1.Container{{ }}, },",
"apiVersion: v1 kind: Pod metadata: name: s1 spec: containers: - name: <container_name> image: docker.io/<org>/<image_name>",
"apiVersion: v1 kind: Pod metadata: name: s1 spec: containers: - name: <container_name> image: docker.io/<org>/<image_name> affinity: nodeAffinity: preferredDuringSchedulingIgnoredDuringExecution: 1 - preference: matchExpressions: 2 - key: kubernetes.io/arch 3 operator: In 4 values: - amd64 - arm64 weight: 90 5",
"cfg = Config{ log: logf.Log.WithName(\"prune\"), DryRun: false, Clientset: client, LabelSelector: \"app=<operator_name>\", Resources: []schema.GroupVersionKind{ {Group: \"\", Version: \"\", Kind: PodKind}, }, Namespaces: []string{\"<operator_namespace>\"}, Strategy: StrategyConfig{ Mode: MaxCountStrategy, MaxCountSetting: 1, }, PreDeleteHook: myhook, }",
"err := cfg.Execute(ctx)",
"packagemanifests/ └── etcd ├── 0.0.1 │ ├── etcdcluster.crd.yaml │ └── etcdoperator.clusterserviceversion.yaml ├── 0.0.2 │ ├── etcdbackup.crd.yaml │ ├── etcdcluster.crd.yaml │ ├── etcdoperator.v0.0.2.clusterserviceversion.yaml │ └── etcdrestore.crd.yaml └── etcd.package.yaml",
"bundle/ ├── bundle-0.0.1 │ ├── bundle.Dockerfile │ ├── manifests │ │ ├── etcdcluster.crd.yaml │ │ ├── etcdoperator.clusterserviceversion.yaml │ ├── metadata │ │ └── annotations.yaml │ └── tests │ └── scorecard │ └── config.yaml └── bundle-0.0.2 ├── bundle.Dockerfile ├── manifests │ ├── etcdbackup.crd.yaml │ ├── etcdcluster.crd.yaml │ ├── etcdoperator.v0.0.2.clusterserviceversion.yaml │ ├── etcdrestore.crd.yaml ├── metadata │ └── annotations.yaml └── tests └── scorecard └── config.yaml",
"operator-sdk pkgman-to-bundle <package_manifests_dir> \\ 1 [--output-dir <directory>] \\ 2 --image-tag-base <image_name_base> 3",
"operator-sdk run bundle <bundle_image_name>:<tag>",
"INFO[0025] Successfully created registry pod: quay-io-my-etcd-0-9-4 INFO[0025] Created CatalogSource: etcd-catalog INFO[0026] OperatorGroup \"operator-sdk-og\" created INFO[0026] Created Subscription: etcdoperator-v0-9-4-sub INFO[0031] Approved InstallPlan install-5t58z for the Subscription: etcdoperator-v0-9-4-sub INFO[0031] Waiting for ClusterServiceVersion \"default/etcdoperator.v0.9.4\" to reach 'Succeeded' phase INFO[0032] Waiting for ClusterServiceVersion \"default/etcdoperator.v0.9.4\" to appear INFO[0048] Found ClusterServiceVersion \"default/etcdoperator.v0.9.4\" phase: Pending INFO[0049] Found ClusterServiceVersion \"default/etcdoperator.v0.9.4\" phase: Installing INFO[0064] Found ClusterServiceVersion \"default/etcdoperator.v0.9.4\" phase: Succeeded INFO[0065] OLM has successfully installed \"etcdoperator.v0.9.4\"",
"operator-sdk <command> [<subcommand>] [<argument>] [<flags>]",
"operator-sdk completion bash",
"bash completion for operator-sdk -*- shell-script -*- ex: ts=4 sw=4 et filetype=sh"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/operators/developing-operators |
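A minimal end-to-end sketch tying together the make targets and operator-sdk commands listed above; the image names, namespace, and tags are hypothetical placeholders, not values from the original listings.
# Build and push the Operator image, generate and push its bundle, then run it with OLM (placeholder names)
make docker-build docker-push IMG=quay.io/example/memcached-operator:v0.0.1
make bundle IMG=quay.io/example/memcached-operator:v0.0.1
make bundle-build bundle-push BUNDLE_IMG=quay.io/example/memcached-operator-bundle:v0.0.1
operator-sdk run bundle -n memcached-system quay.io/example/memcached-operator-bundle:v0.0.1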
3.2. Managing Cluster Nodes | 3.2. Managing Cluster Nodes The following sections describe the commands you use to manage cluster nodes, including commands to stop cluster services and to add and remove cluster nodes. 3.2.1. Stopping Cluster Services The following command stops cluster services on the specified node or nodes. As with the pcs cluster start command, the --all option stops cluster services on all nodes; if you do not specify any nodes, cluster services are stopped on the local node only. You can force a stop of cluster services on the local node with the following command, which performs a kill -9 command. | [
"pcs cluster stop [--all] [ node ] [...]",
"pcs cluster kill"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/s1-clusternodemanage-haar |
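A quick usage sketch of the stop commands above; the node name is a hypothetical placeholder for any cluster member.
pcs cluster stop node1.example.com   # stop cluster services on one named node
pcs cluster stop --all               # stop cluster services on every node in the cluster
pcs cluster kill                     # force-stop cluster services on the local node (kill -9)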
Part IV. Device Drivers | Part IV. Device Drivers This part provides a comprehensive listing of all device drivers that are new or have been updated in Red Hat Enterprise Linux 7.5. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.5_release_notes/part-red_hat_enterprise_linux-7.5_release_notes-device_drivers |
Chapter 8. Deploying and configuring a Postfix SMTP server | Chapter 8. Deploying and configuring a Postfix SMTP server As a system administrator, you can configure your email infrastructure by using a mail transport agent (MTA), such as Postfix, to transport email messages between hosts using the SMTP protocol. Postfix is a server-side application for routing and delivering mail. You can use Postfix to set up a local mail server, create a null-client mail relay, use a Postfix server as a destination for multiple domains, or choose an LDAP directory instead of files for lookups. The key features of Postfix: Security features to protect against common email related threats Customization options, including support for virtual domains and aliases 8.1. Overview of the main Postfix configuration files The postfix package provides multiple configuration files in the /etc/postfix/ directory. To configure your email infrastructure, use the following configuration files: main.cf - contains the global configuration of Postfix. master.cf - specifies Postfix interaction with various processes to accomplish mail delivery. access - specifies access rules, for example hosts that are allowed to connect to Postfix. transport - maps email addresses to relay hosts. aliases - contains a configurable list required by the mail protocol that describes user ID aliases. Note that you can find this file in the /etc/ directory. 8.2. Installing and configuring a Postfix SMTP server You can configure your Postfix SMTP server to receive, store, and deliver email messages. If the mail server package is not selected during the system installation, Postfix will not be available by default. Perform the following steps to install Postfix: Prerequisites You have the root access. Register your system Procedure Disable and remove the Sendmail utility: Install Postfix: To configure Postfix, edit the /etc/postfix/main.cf file and make the following changes: By default, Postfix receives emails only on the loopback interface. To configure Postfix to listen on specific interfaces, update the inet_interfaces parameter to the IP addresses of these interfaces: To configure Postfix to listen on all interfaces, set: If you want that Postfix uses a different hostname than the fully-qualified domain name (FQDN) that is returned by the gethostname() function, add the myhostname parameter: For example, Postfix adds this hostname to header of emails it processes. If the domain name differs from the one in the myhostname parameter, add the mydomain parameter: Add the myorigin parameter and set it to the value of mydomain : With this setting, Postfix uses the domain name as origin for locally posted mails instead of the hostname. Add the mynetworks parameter, and define the IP ranges of trusted networks that are allowed to send mails: If clients from not trustworthy networks, such as the Internet, should be able to send mails through this server, you must configure relay restrictions in a later step. 
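A minimal sketch of the main.cf edits described in the installation procedure above, written as postconf commands; the host name, domain, and network ranges are placeholder assumptions, not values from the elided command listings.
postconf -e "inet_interfaces = all"                       # or a list of specific IP addresses
postconf -e "myhostname = mail.example.com"               # hostname added to processed mail headers
postconf -e "mydomain = example.com"
postconf -e "myorigin = \$mydomain"                       # use the domain, not the hostname, as origin
postconf -e "mynetworks = 127.0.0.0/8, 192.0.2.0/24"      # trusted networks allowed to send mail
postfix check                                             # verify that main.cf is correct
systemctl enable --now postfix
firewall-cmd --permanent --add-service=smtp && firewall-cmd --reload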
Verify if the Postfix configuration in the main.cf file is correct: Enable the postfix service to start at boot and start it: Allow the smtp traffic through firewall and reload the firewall rules: Verification Verify that the postfix service is running: Optional: Restart the postfix service, if the output is stopped, waiting, or the service is not running: Optional: Reload the postfix service after changing any options in the configuration files in the /etc/postfix/ directory to apply those changes: Verify the email communication between local users on your system: To verify that your mail server does not relay emails from external IP ranges to foreign domains, follow the below mentioned procedure: Log in to a client which is not within the subnets that you defined in mynetworks . Configure the client to use your mail server. Try to send an email to an email address that is not under the domain you specified in mydomain on your mail server. For example, try to send an email to [email protected] . Check the /var/log/maillog file: Troubleshooting In case of errors, check the /var/log/maillog file. Additional resources The /etc/postfix/main.cf configuration file The /usr/share/doc/postfix/README_FILES directory Using and configuring firewalld 8.3. Customizing TLS settings of a Postfix server To make your email traffic encrypted and therefore more secure, you can configure Postfix to use a certificate from a trusted certificate authority (CA) instead of the self-signed certificate and customize the Transport Layer Security (TLS) security settings. In RHEL 8, the TLS encryption protocol is enabled in the Postfix server by default. The basic Postfix TLS configuration contains self-signed certificates for inbound SMTP and the opportunistic TLS for outbound SMTP. Prerequisites You have the root access. You have the postfix package installed on your server. You have a certificate signed by a trusted certificate authority (CA) and a private key. You have copied the following files to the Postfix server: The server certificate: /etc/pki/tls/certs/postfix.pem The private key: /etc/pki/tls/private/postfix.key Procedure Set the path to the certificate and private key files on the server where Postfix is running by adding the following lines to the /etc/postfix/main.cf file: Restrict the incoming SMTP connections to authenticated users only by editing the /etc/postfix/main.cf file: Reload the postfix service to apply the changes: Verification Configure your client to use TLS encryption and send an email. Note To get additional information about Postfix client TLS activity, increase the log level from 0 to 1 by changing the following line in the /etc/postfix/main.cf : 8.4. Configuring Postfix to forward all emails to a mail relay If you want to forward all email to a mail relay, you can configure Postfix server as a null client. In this configuration Postfix only forwards mail to a different mail server and is not capable of receiving mail. Prerequisites You have the root access. You have the postfix package installed on your server. You have the IP address or hostname of the relay host to which you want to forward emails. Procedure To prevent Postfix from accepting any local email delivery and making it a null client, edit the /etc/postfix/main.cf file and make the following changes: Configure Postfix to forward all email by setting the mydestination parameter equal to an empty value: In this configuration the Postfix server is not a destination for any email and acts as a null client. 
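Before the null-client steps continue below, here is a sketch of the TLS parameters from the customization section above, using the certificate and key paths stated there; treat it as an illustrative assumption rather than the exact elided listing.
postconf -e "smtpd_tls_cert_file = /etc/pki/tls/certs/postfix.pem"
postconf -e "smtpd_tls_key_file = /etc/pki/tls/private/postfix.key"
postconf -e "smtpd_tls_auth_only = yes"    # accept AUTH only over TLS-encrypted connections
postconf -e "smtp_tls_loglevel = 1"        # optional: raise client TLS logging from 0 to 1
systemctl reload postfix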
Specify the mail relay server that receives the email from your null client: The relay host is responsible for the mail delivery. Enclose <ip_address_or_hostname> in square brackets. Configure the Postfix mail server to listen only on the loopback interface for emails to deliver: If you want Postfix to rewrite the sender domain of all outgoing emails to the company domain of your relay mail server, set: To disable the local mail delivery, add the following directive at the end of the configuration file: Add the mynetworks parameter so that Postfix forwards email from the local system originating from the 127.0.0.0/8 IPv4 network and the [::1]/128 IPv6 network to the mail relay server: Verify if the Postfix configuration in the main.cf file is correct: Restart the postfix service to apply the changes: Verification Verify that the email communication is forwarded to the mail relay: Troubleshooting In case of errors, check the /var/log/maillog file. Additional resources The /etc/postfix/main.cf configuration file 8.5. Configuring Postfix as a destination for multiple domains You can configure Postfix as a mail server that can receive emails for multiple domains. In this configuration, Postfix acts as the final destination for emails sent to addresses within the specified domains. You can configure the following: Set up multiple email addresses that point to the same email destination Route incoming email for multiple domains to the same Postfix server Prerequisites You have the root access. You have configured a Postfix server. Procedure In the /etc/postfix/virtual virtual alias file, specify the email addresses for each domain. Add each email address on a new line: In this example, Postfix redirects all emails sent to [email protected] to [email protected] and email sent to [email protected] to [email protected]. Create a hash file for the virtual alias map: This command creates the /etc/postfix/virtual.db file. Note that you must always re-run this command after you update the /etc/postfix/virtual file. In the Postfix /etc/postfix/main.cf configuration file, add the virtual_alias_maps parameter and point it to the hash file: Reload the postfix service to apply the changes: Verification Test the configuration by sending an email to one of the virtual email addresses. Troubleshooting In case of errors, check the /var/log/maillog file. 8.6. Using an LDAP directory as a lookup table If you use a Lightweight Directory Access Protocol (LDAP) server to store accounts, domains or aliases, you can configure Postfix to use the LDAP server as a lookup table. Using LDAP instead of files for lookups enables you to have a central database. Prerequisites You have the root access. You have the postfix package installed on your server. You have an LDAP server with the required schema and user credentials. You have the postfix-ldap plugin installed on the server running Postfix. Procedure Configure the LDAP lookup parameters by creating a /etc/postfix/ldap-aliases.cf file with the following content: Specify the hostname of the LDAP server: Specify the base domain name for the LDAP search: Optional: Customize the LDAP search filter and attributes based on your requirements. The filter for searching the directory defaults to query_filter = mailacceptinggeneralid=%s . 
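Taken together, a complete /etc/postfix/ldap-aliases.cf file might look like the following minimal sketch. The host name and base DN are placeholder assumptions that you must adapt to your environment, and the query_filter line only repeats the default shown above:
server_host = ldap.example.com
search_base = dc=example,dc=com
query_filter = mailacceptinggeneralid=%s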
Enable the LDAP source as a lookup table in the /etc/postfix/main.cf configuration file by adding the following content: Verify the LDAP configuration by running the postmap command, which checks for any syntax errors or connectivity issues: Reload the postfix service to apply the changes: Verification Send a test email to verify that the LDAP lookup works correctly. Check the mail logs in /var/log/maillog for any errors. Additional resources /usr/share/doc/postfix/README_FILES/LDAP_README file /usr/share/doc/postfix/README_FILES/DATABASE_README file 8.7. Configuring Postfix as an outgoing mail server to relay for authenticated users You can configure Postfix to relay mail for authenticated users. In this scenario, you allow users to authenticate themselves and use their email address to send mail through your SMTP server by configuring Postfix as an outgoing mail server with SMTP authentication, TLS encryption, and sender address restrictions. Prerequisites You have root access. You have configured a Postfix server. Procedure To configure Postfix as an outgoing mail server, edit the /etc/postfix/main.cf file and add the following: Enable SMTP authentication: Disable access without TLS: Allow mail relaying only for authenticated users: Optional: Restrict users to using only their own email address as the sender: Reload the postfix service to apply the changes: Verification Authenticate in an SMTP client that supports TLS and SASL. Send a test email to verify that the SMTP authentication works correctly. 8.8. Delivering email from Postfix to Dovecot running on the same host You can configure Postfix to deliver incoming mail to Dovecot on the same host using LMTP over a UNIX socket. This socket enables direct communication between Postfix and Dovecot on the local machine. Prerequisites You have root access. You have configured a Postfix server. You have configured a Dovecot server, see Configuring and maintaining a Dovecot IMAP and POP3 server . You have configured the LMTP socket on your Dovecot server, see Configuring an LMTP socket and LMTPS listener . Procedure Configure Postfix to use the LMTP protocol and the UNIX domain socket for delivering mail to Dovecot in the /etc/postfix/main.cf file: If you want to use virtual mailboxes, add the following content: If you want to use non-virtual mailboxes, add the following content: Reload postfix to apply the changes: Verification Send a test email to verify that the LMTP socket works correctly. Check the mail logs in /var/log/maillog for any errors. 8.9. Delivering email from Postfix to Dovecot running on a different host You can establish a secure connection between the Postfix mail server and the Dovecot delivery agent over the network. To do so, configure the LMTP service to use a network socket for delivering mail between mail servers. By default, the LMTP protocol is not encrypted. However, if you configured TLS encryption, Dovecot uses the same settings automatically for the LMTP service. SMTP servers can then connect to it using the STARTTLS command over LMTP. Prerequisites You have root access. You have configured a Postfix server. You have configured a Dovecot server, see Configuring and maintaining a Dovecot IMAP and POP3 server . You have configured the LMTP service on your Dovecot server, see Configuring an LMTP socket and LMTPS listener .
Procedure Configure Postfix to use the LMTP protocol and the INET domain socket for delivering mail to Dovecot in the /etc/postfix/main.cf file by adding the following content: Replace <dovecot_host> with the IP address or hostname of the Dovecot server and <port> with the port number of the LMTP service. Reload the postfix service to apply the changes: Verification Send a test email to an address hosted by the remote Dovecot server and check the Dovecot logs to ensure that the mail was successfully delivered. 8.10. Securing the Postfix service Postfix is a mail transfer agent (MTA) that uses the Simple Mail Transfer Protocol (SMTP) to deliver electronic messages between other MTAs and to email clients or delivery agents. Although MTAs can encrypt traffic between one another, they might not do so by default. You can also mitigate the risk of various attacks by changing settings to more secure values. 8.10.1. Reducing Postfix network-related security risks To reduce the risk of attackers invading your system through the network, perform as many of the following tasks as possible. Do not share the /var/spool/postfix/ mail spool directory on a Network File System (NFS) shared volume. NFSv2 and NFSv3 do not maintain control over user and group IDs. Therefore, if two or more users have the same UID, they can receive and read each other's mail, which is a security risk. Note This rule does not apply to NFSv4 using Kerberos, because the SECRPC_GSS kernel module does not use UID-based authentication. However, to reduce the security risks, you should not put the mail spool directory on NFS shared volumes. To reduce the probability of Postfix server exploits, mail users must access the Postfix server using an email program. Do not allow shell accounts on the mail server, and set all user shells in the /etc/passwd file to /sbin/nologin (with the possible exception of the root user). To protect Postfix from a network attack, it is set up to only listen to the local loopback address by default. You can verify this by viewing the inet_interfaces = localhost line in the /etc/postfix/main.cf file. This ensures that Postfix only accepts mail messages (such as cron job reports) from the local system and not from the network. This is the default setting and protects Postfix from a network attack. To remove the localhost restriction and allow Postfix to listen on all interfaces, set the inet_interfaces parameter to all in /etc/postfix/main.cf . 8.10.2. Postfix configuration options for limiting DoS attacks An attacker can flood the server with traffic, or send information that triggers a crash, causing a denial of service (DoS) attack. You can configure your system to reduce the risk of such attacks by setting limits in the /etc/postfix/main.cf file. You can change the value of the existing directives or you can add new directives with custom values in the <directive> = <value> format. Use the following list of directives for limiting a DoS attack: smtpd_client_connection_rate_limit Limits the maximum number of connection attempts any client can make to this service per time unit. The default value is 0 , which means a client can make as many connections per time unit as Postfix can accept. By default, the directive excludes clients in trusted networks. anvil_rate_time_unit Defines a time unit to calculate the rate limit. The default value is 60 seconds. smtpd_client_event_limit_exceptions Excludes clients from the connection and rate limit commands.
By default, the directive excludes clients in trusted networks. smtpd_client_message_rate_limit Defines the maximum number of message delivery requests that a client can make per time unit (regardless of whether or not Postfix actually accepts those messages). default_process_limit Defines the default maximum number of Postfix child processes that provide a given service. You can ignore this rule for specific services in the master.cf file. By default, the value is 100 . queue_minfree Defines the minimum amount of free space required to receive mail in the queue file system. The directive is currently used by the Postfix SMTP server to decide if it accepts any mail at all. By default, the Postfix SMTP server rejects MAIL FROM commands when the amount of free space is less than 1.5 times the message_size_limit . To specify a higher minimum free space limit, specify a queue_minfree value that is at least 1.5 times the message_size_limit . By default, the queue_minfree value is 0 . header_size_limit Defines the maximum amount of memory in bytes for storing a message header. If a header exceeds this limit, Postfix discards the excess. By default, the value is 102400 bytes. message_size_limit Defines the maximum size of a message including the envelope information in bytes. By default, the value is 10240000 bytes. 8.10.3. Configuring Postfix to use SASL Postfix supports Simple Authentication and Security Layer (SASL) based SMTP Authentication (AUTH). SMTP AUTH is an extension of the Simple Mail Transfer Protocol. Currently, the Postfix SMTP server supports the SASL implementations in the following ways: Dovecot SASL The Postfix SMTP server can communicate with the Dovecot SASL implementation using either a UNIX-domain socket or a TCP socket. Use the TCP socket method if the Postfix and Dovecot applications are running on separate machines. Cyrus SASL When enabled, SMTP clients must authenticate with the SMTP server using an authentication method supported and accepted by both the server and the client. Prerequisites The dovecot package is installed on the system. Procedure Set up Dovecot: Include the following lines in the /etc/dovecot/conf.d/10-master.conf file: The example uses UNIX-domain sockets for communication between Postfix and Dovecot. The example also assumes default Postfix SMTP server settings, which include the mail queue located in the /var/spool/postfix/ directory, and the application running under the postfix user and group. Optional: Set up Dovecot to listen for Postfix authentication requests through TCP: Specify the method that the email client uses to authenticate with Dovecot by editing the auth_mechanisms parameter in the /etc/dovecot/conf.d/10-auth.conf file: The auth_mechanisms parameter supports different plaintext and non-plaintext authentication methods. Set up Postfix by modifying the /etc/postfix/main.cf file: Enable SMTP Authentication on the Postfix SMTP server: Enable the use of the Dovecot SASL implementation for SMTP Authentication: Provide the authentication path relative to the Postfix queue directory. Note that the use of a relative path ensures that the configuration works regardless of whether the Postfix server runs in chroot or not: This step uses UNIX-domain sockets for communication between Postfix and Dovecot. The combined Postfix SASL settings are summarized in the sketch that follows.
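As an orientation aid, the three Postfix settings from this procedure might look as follows together in /etc/postfix/main.cf when Dovecot provides SASL over the UNIX-domain socket. This is a minimal sketch that only restates the values shown above, not an additional required step:
smtpd_sasl_auth_enable = yes
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth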
To configure Postfix to look for Dovecot on a different machine in case you use TCP sockets for communication, use configuration values similar to the following: In the example, replace the ip-address with the IP address of the Dovecot machine and port-number with the port number specified in Dovecot's /etc/dovecot/conf.d/10-master.conf file. Specify SASL mechanisms that the Postfix SMTP server makes available to clients. Note that you can specify different mechanisms for encrypted and unencrypted sessions. The directives specify that during unencrypted sessions, no anonymous authentication is allowed and no mechanisms that transmit unencrypted user names or passwords are allowed. For encrypted sessions that use TLS, only non-anonymous authentication mechanisms are allowed. Additional resources Postfix SMTP server policy - SASL mechanism properties Postfix and Dovecot SASL Configuring SASL authentication in the Postfix SMTP server | [
"yum remove sendmail",
"yum install postfix",
"inet_interfaces = 127.0.0.1/32, [::1]/128, 192.0.2.1, [2001:db8:1::1]",
"inet_interfaces = all",
"myhostname = <smtp.example.com>",
"mydomain = <example.com>",
"myorigin = USDmydomain",
"mynetworks = 127.0.0.1/32, [::1]/128, 192.0.2.1/24, [2001:db8:1::1]/64",
"postfix check",
"systemctl enable --now postfix",
"firewall-cmd --permanent --add-service smtp firewall-cmd --reload",
"systemctl status postfix",
"systemctl restart postfix",
"systemctl reload postfix",
"echo \"This is a test message\" | mail -s <SUBJECT> <[email protected]>",
"554 Relay access denied - the server is not going to relay. 250 OK or similar - the server is going to relay.",
"smtpd_tls_cert_file = /etc/pki/tls/certs/postfix.pem smtpd_tls_key_file = /etc/pki/tls/private/postfix.key",
"smtpd_tls_auth_only = yes",
"systemctl reload postfix",
"smtp_tls_loglevel = 1",
"mydestination =",
"relayhost = <[ip_address_or_hostname]>",
"inet_interfaces = loopback-only",
"myorigin = <relay.example.com>",
"local_transport = error: local delivery disabled",
"mynetworks = 127.0.0.0/8, [::1]/128",
"postfix check",
"systemctl restart postfix",
"echo \"This is a test message\" | mail -s <SUBJECT> <[email protected]>",
"<[email protected]> <[email protected]> <[email protected]> <[email protected]>",
"postmap /etc/postfix/virtual",
"virtual_alias_maps = hash:/etc/postfix/virtual",
"systemctl reload postfix",
"server_host = <ldap.example.com>",
"search_base = dc= <example> ,dc= <com>",
"virtual_alias_maps = ldap:/etc/postfix/ldap-aliases.cf",
"postmap -q @ <example.com> ldap:/etc/postfix/ldap-aliases.cf",
"systemctl reload postfix",
"smtpd_sasl_auth_enable = yes broken_sasl_auth_clients = yes",
"smtpd_tls_auth_only = yes",
"smtpd_relay_restrictions = permit_mynetworks permit_sasl_authenticated defer_unauth_destination",
"smtpd_sender_restrictions = reject_sender_login_mismatch",
"systemctl reload postfix",
"virtual_transport = lmtp:unix:/var/run/dovecot/lmtp",
"mailbox_transport = lmtp:unix:/var/run/dovecot/lmtp",
"systemctl reload postfix",
"mailbox_transport = lmtp:inet: <dovecot_host> : <port>",
"systemctl reload postfix",
"service auth { unix_listener /var/spool/postfix/private/auth { mode = 0660 user = postfix group = postfix } }",
"service auth { inet_listener { port = port-number } }",
"auth_mechanisms = plain login",
"smtpd_sasl_auth_enable = yes",
"smtpd_sasl_type = dovecot",
"smtpd_sasl_path = private/auth",
"smtpd_sasl_path = inet: ip-address : port-number",
"smtpd_sasl_security_options = noanonymous, noplaintext smtpd_sasl_tls_security_options = noanonymous"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/deploying_different_types_of_servers/assembly_mail-transport-agent_deploying-different-types-of-servers |
5.2. Generating SELinux Policy Modules: sepolicy generate | 5.2. Generating SELinux Policy Modules: sepolicy generate In previous versions of Red Hat Enterprise Linux, the sepolgen or selinux-polgengui utilities were used for generating a SELinux policy. These tools have been merged into the sepolicy suite. In Red Hat Enterprise Linux 7, the sepolicy generate command is used to generate an initial SELinux policy module template. Unlike sepolgen , it is not necessary to run sepolicy generate as the root user. This utility also creates an RPM spec file, which can be used to build an RPM package that installs the policy package file ( NAME .pp ) and the interface file ( NAME .if ) to the correct location, provides installation of the SELinux policy into the kernel, and fixes the labeling. The setup script continues to install SELinux policy and sets up the labeling. In addition, a manual page based on the installed policy is generated using the sepolicy manpage command. [7] Finally, sepolicy generate builds and compiles the SELinux policy and the manual page into an RPM package, ready to be installed on other systems. When sepolicy generate is executed, the following files are produced: NAME .te - type enforcing file This file defines all the types and rules for a particular domain. NAME .if - interface file This file defines the default file context for the system. It takes the file types created in the NAME.te file and associates file paths to the types. Utilities, such as restorecon and rpm , use these paths to write labels. NAME _selinux.spec - RPM spec file This file is an RPM spec file that installs SELinux policy and sets up the labeling. This file also installs the interface file and a man page describing the policy. You can use the sepolicy manpage -d NAME command to generate the man page. NAME .sh - helper shell script This script helps to compile, install, and fix the labeling on the system. It also generates a man page based on the installed policy, compiles, and builds an RPM package suitable to be installed on other systems. If it is possible to generate an SELinux policy module, sepolicy generate prints out all generated paths from the source domain to the target domain. See the sepolicy-generate (8) manual page for further information about sepolicy generate . [7] See Section 5.4, "Generating Manual Pages: sepolicy manpage " for more information about sepolicy manpage . | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/security-enhanced_linux-the-sepolicy-suite-sepolicy_generate |
C.5. Common Post-Installation Tasks | C.5. Common Post-Installation Tasks The following sections are about common post-installation tasks. C.5.1. Set a Randomly Generated Key as an Additional Way to Access an Encrypted Block Device The following sections are about generating keys and adding keys. C.5.1.1. Generate a Key This will generate a 256-bit key in the file $HOME/keyfile . C.5.1.2. Add the Key to an Available Keyslot on the Encrypted Device | [
"dd if=/dev/urandom of=USDHOME/keyfile bs=32 count=1 chmod 600 USDHOME/keyfile",
"cryptsetup luksAddKey <device> ~/keyfile"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/apcs05 |
Chapter 3. Types of instance storage | Chapter 3. Types of instance storage The virtual storage that is available to an instance is defined by the flavor used to launch the instance. The following virtual storage resources can be associated with an instance: Instance disk Ephemeral storage Swap storage Persistent block storage volumes Config drive 3.1. Instance disk The instance disk created to store instance data depends on the boot source that you use to create the instance. The instance disk of an instance that you boot from an image is controlled by the Compute service and deleted when the instance is deleted. The instance disk of an instance that you boot from a volume is a persistent volume provided by the Block Storage service. 3.2. Instance ephemeral storage You can specify that an ephemeral disk is created for the instance by choosing a flavor that configures an ephemeral disk. This ephemeral storage is an empty additional disk that is available to an instance. This storage value is defined by the instance flavor. The default value is 0, meaning that no secondary ephemeral storage is created. The ephemeral disk appears in the same way as a plugged-in hard drive or thumb drive. It is available as a block device, which you can check using the lsblk command. You can mount it and use it however you normally use a block device. You cannot preserve or reference that disk beyond the instance it is attached to. Note Ephemeral storage data is not included in instance snapshots, and is not available on instances that are shelved and then unshelved. 3.3. Instance swap storage You can specify that a swap disk is created for the instance by choosing a flavor that configures a swap disk. This swap storage is an additional disk that is available to the instance for use as swap space for the running operating system. 3.4. Instance block storage A block storage volume is persistent storage that is available to an instance regardless of the state of the running instance. You can attach multiple block devices to an instance, one of which can be a bootable volume. Note When you use a block storage volume for your instance disk data, the block storage volume persists for any instance rebuilds, even when an instance is rebuilt with a new image that requests that a new volume is created. 3.5. Config drive You can attach a config drive to an instance when it boots. The config drive is presented to the instance as a read-only drive. The instance can mount this drive and read files from it. You can use the config drive as a source for cloud-init information. Config drives are useful when combined with cloud-init for server bootstrapping, and when you want to pass large files to your instances. For example, you can configure cloud-init to automatically mount the config drive and run the setup scripts during the initial instance boot. Config drives are created with the volume label of config-2 , and attached to the instance when it boots. The contents of any additional files passed to the config drive are added to the user_data file in the openstack/{version}/ directory of the config drive. cloud-init retrieves the user data from this file. | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/creating_and_managing_instances/con_types-of-instance-storage_osp |
Part II. Debugging Routing Contexts | Part II. Debugging Routing Contexts The Camel debugger includes many features for debugging locally and remotely running routing contexts: Setting conditional and unconditional breakpoints on nodes in the route editor Autolaunching the debugger and switching to the Debug perspective Interacting with the running routing context: Switch between breakpoints to quickly compare variable values of message instances Examine and change the value of variables of interest Add variables of interest to the watch list to track them throughout the debug session Disable and re-enable breakpoints on-the-fly Track message flow graphically in the routing context runtime Examine console logs to track Camel and debugger actions Note Before you can run the Camel debugger, you must set breakpoints on the nodes of interest displayed on the route editor's canvas. Then you can run the Camel debugger on a project's routing context .xml file to find the logic errors in it and fix them. Invoking the Camel debugger runs the routing context in debug mode and opens the Debug Perspective . | null | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/tooling_user_guide/RiderCamelDebugger |
Part V. Debugging Applications | Part V. Debugging Applications Debugging applications is a very wide topic. This part provides developers with the most common techniques for debugging in multiple situations. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/developer_guide/debugging |
Chapter 27. Automating group membership using IdM Web UI | Chapter 27. Automating group membership using IdM Web UI Using automatic group membership enables you to assign users and hosts to groups automatically based on their attributes. For example, you can: Divide employees' user entries into groups based on the employees' manager, location, or any other attribute. Divide hosts based on their class, location, or any other attribute. Add all users or all hosts to a single global group. This chapter covers the following topics: Benefits of automatic group membership Automember rules Adding an automember rule using IdM Web UI Adding a condition to an automember rule using IdM Web UI Viewing existing automember rules and conditions using IdM Web UI Deleting an automember rule using IdM Web UI Removing a condition from an automember rule using IdM Web UI Applying automember rules to existing entries using IdM Web UI Configuring a default user group using IdM Web UI Configuring a default host group using IdM Web UI 27.1. Benefits of automatic group membership Using automatic membership for users allows you to: Reduce the overhead of manually managing group memberships You no longer have to assign every user and host to groups manually. Improve consistency in user and host management Users and hosts are assigned to groups based on strictly defined and automatically evaluated criteria. Simplify the management of group-based settings Various settings are defined for groups and then applied to individual group members, for example sudo rules, automount, or access control. Adding users and hosts to groups automatically makes managing these settings easier. 27.2. Automember rules When configuring automatic group membership, the administrator defines automember rules. An automember rule applies to a specific user or host target group. It cannot apply to more than one group at a time. After creating a rule, the administrator adds conditions to it. These specify which users or hosts get included or excluded from the target group: Inclusive conditions When a user or host entry meets an inclusive condition, it will be included in the target group. Exclusive conditions When a user or host entry meets an exclusive condition, it will not be included in the target group. The conditions are specified as regular expressions in the Perl-compatible regular expressions (PCRE) format. For more information about PCRE, see the pcresyntax(3) man page on your system. Note IdM evaluates exclusive conditions before inclusive conditions. In case of a conflict, exclusive conditions take precedence over inclusive conditions. An automember rule applies to every entry created in the future. These entries will be automatically added to the specified target group. If an entry meets the conditions specified in multiple automember rules, it will be added to all the corresponding groups. Existing entries are not affected by the new rule. If you want to change existing entries, see Applying automember rules to existing entries using IdM Web UI . 27.3. Adding an automember rule using IdM Web UI Follow this procedure to add an automember rule using the IdM Web UI. For information about automember rules, see Automember rules . Note Existing entries are not affected by the new rule. If you want to change existing entries, see Applying automember rules to existing entries using IdM Web UI . Prerequisites You are logged in to the IdM Web UI. You must be a member of the admins group. The target group of the new rule exists in IdM. 
Procedure Click Identity Automember , and select either User group rules or Host group rules . Click Add . In the Automember rule field, select the group to which the rule will apply. This is the target group name. Click Add to confirm. Optional: You can add conditions to the new rule using the procedure described in Adding a condition to an automember rule using IdM Web UI . 27.4. Adding a condition to an automember rule using IdM Web UI After configuring an automember rule, you can add a condition to that rule using the IdM Web UI. For information about automember rules, see Automember rules . Prerequisites You are logged in to the IdM Web UI. You must be a member of the admins group. The target rule exists in IdM. Procedure Click Identity Automember , and select either User group rules or Host group rules . Click on the rule to which you want to add a condition. In the Inclusive or Exclusive sections, click Add. In the Attribute field, select the required attribute, for example uid . In the Expression field, define a regular expression. Click Add . For example, a condition that uses the uid attribute with the .* expression targets all users with any value in their user ID (uid) attribute. 27.5. Viewing existing automember rules and conditions using IdM Web UI Follow this procedure to view existing automember rules and conditions using the IdM Web UI. Prerequisites You are logged in to the IdM Web UI. You must be a member of the admins group. Procedure Click Identity Automember , and select either User group rules or Host group rules to view the respective automember rules. Optional: Click on a rule to see the conditions for that rule in the Inclusive or Exclusive sections. 27.6. Deleting an automember rule using IdM Web UI Follow this procedure to delete an automember rule using the IdM Web UI. Deleting an automember rule also deletes all conditions associated with the rule. To remove only specific conditions from a rule, see Removing a condition from an automember rule using IdM Web UI . Prerequisites You are logged in to the IdM Web UI. You must be a member of the admins group. Procedure Click Identity Automember , and select either User group rules or Host group rules to view the respective automember rules. Select the check box next to the rule you want to remove. Click Delete . Click Delete to confirm. 27.7. Removing a condition from an automember rule using IdM Web UI Follow this procedure to remove a specific condition from an automember rule using the IdM Web UI. Prerequisites You are logged in to the IdM Web UI. You must be a member of the admins group. Procedure Click Identity Automember , and select either User group rules or Host group rules to view the respective automember rules. Click on a rule to see the conditions for that rule in the Inclusive or Exclusive sections. Select the check box next to the conditions you want to remove. Click Delete . Click Delete to confirm. 27.8. Applying automember rules to existing entries using IdM Web UI Automember rules apply automatically to user and host entries created after the rules were added. They are not applied retroactively to entries that existed before the rules were added. To apply automember rules to previously added entries, you have to manually rebuild automatic membership. Rebuilding automatic membership re-evaluates all existing automember rules and applies them either to all user or host entries, or to specific entries.
Note Rebuilding automatic membership does not remove user or host entries from groups, even if the entries no longer match the group's inclusive conditions. To remove them manually, see Removing a member from a user group using IdM Web UI or Removing host group members in the IdM Web UI . 27.8.1. Rebuilding automatic membership for all users or hosts Follow this procedure to rebuild automatic membership for all user or host entries. Prerequisites You are logged in to the IdM Web UI. You must be a member of the admins group. Procedure Select Identity Users or Hosts . Click Actions Rebuild auto membership . 27.8.2. Rebuilding automatic membership for a single user or host only Follow this procedure to rebuild automatic membership for a specific user or host entry. Prerequisites You are logged in to the IdM Web UI. You must be a member of the admins group. Procedure Select Identity Users or Hosts . Click on the required user or host name. Click Actions Rebuild auto membership . 27.9. Configuring a default user group using IdM Web UI When you configure a default user group, new user entries that do not match any automember rule are automatically added to this default group. Prerequisites You are logged in to the IdM Web UI. You must be a member of the admins group. The target user group you want to set as default exists in IdM. Procedure Click Identity Automember , and select User group rules . In the Default user group field, select the group you want to set as the default user group. 27.10. Configuring a default host group using IdM Web UI When you configure a default host group, new host entries that do not match any automember rule are automatically added to this default group. Prerequisites You are logged in to the IdM Web UI. You must be a member of the admins group. The target host group you want to set as default exists in IdM. Procedure Click Identity Automember , and select Host group rules . In the Default host group field, select the group you want to set as the default host group. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_idm_users_groups_hosts_and_access_control_rules/automating-group-membership-using-idm-web-ui_managing-users-groups-hosts |
Chapter 2. Major changes in version 1.8 | Chapter 2. Major changes in version 1.8 Be aware of the following differences between Red Hat Hyperconverged Infrastructure for Virtualization 1.8 and previous versions: Changed behavior Red Hat Hyperconverged Infrastructure for Virtualization 1.8 and Red Hat Virtualization 4.4 are based on Red Hat Enterprise Linux 8. Read about the key differences in Red Hat Enterprise Linux 8 in Considerations in adopting RHEL 8 . Cluster upgrades now require at least 10 percent free space on gluster disks in order to reduce the risk of running out of space mid-upgrade. This is available after the upgrade to Red Hat Hyperconverged Infrastructure for Virtualization 1.8. ( BZ#1783750 ) "Hosts" and "Additional Hosts" tabs in the Web Console have been combined into a new "Hosts" tab that shows information previously shown on both. ( BZ#1762804 ) Readcache and readcachesize options have been removed from VDO volumes, as they are not supported on Red Hat Enterprise Linux 8 based operating systems. ( BZ#1808081 ) The Quartz scheduler is replaced with the standard Java scheduler to match support with Red Hat Virtualization. ( BZ#1797487 ) Enhancements The Administrator Portal can now upgrade all hosts in a cluster with one click. This is available after the upgrade to Red Hat Hyperconverged Infrastructure for Virtualization 1.8. ( BZ#1721366 ) At-rest encryption using Network-Bound Disk Encryption (NBDE) is now supported on new Red Hat Hyperconverged Infrastructure for Virtualization deployments. ( BZ#1821248 , BZ#1781184 ) Added support for IPv6 networking. Environments with both IPv4 and IPv6 addresses are not supported. ( BZ#1721383 ) New roles, playbooks, and inventory examples are available to simplify and automate the following tasks: Upgrading ( BZ#1500728 , BZ#1832654 ) Backing up and restoring configuration ( BZ#1850488 ) Replacing hosts ( BZ#1840123 ) Blacklisting multipath devices ( BZ#1807808 ) Creating the gluster logical network ( BZ#1832966 ) Deploying on IPv6 networks. ( BZ#1688217 ) Added an option to select IPv4 or IPv6 based deployment in the web console. ( BZ#1688798 ) fsync in the replication module now uses eager-lock functionality, which improves the performance of small-block (approximately 4k block size) write-heavy workloads by more than 50 percent on Red Hat Hyperconverged Infrastructure for Virtualization 1.8. ( BZ#1836164 ) The web console now supports blacklisting multipath devices. ( BZ#1814120 ) New fencing policies skip_fencing_if_gluster_bricks_up and skip_fencing_if_gluster_quorum_not_met are now added and enabled by default. ( BZ#1775552 ) Red Hat Hyperconverged Infrastructure for Virtualization now ensures that the "performance.strict-o-direct" option in Red Hat Gluster Storage is enabled before creating a storage domain. ( BZ#1807400 ) Red Hat Gluster Storage volume options can now be set for all volumes in the Administrator Portal by using "all" as the volume name. ( BZ#1775586 ) Read-only fields are no longer included in the web console user interface, making the interface simpler and easier to read. ( BZ#1814553 ) | null | https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/upgrading_red_hat_hyperconverged_infrastructure_for_virtualization/major-changes-180 |
Chapter 10. Troubleshooting | Chapter 10. Troubleshooting The OpenTelemetry Collector offers multiple ways to measure its health as well as investigate data ingestion issues. 10.1. Collecting diagnostic data from the command line When submitting a support case, it is helpful to include diagnostic information about your cluster to Red Hat Support. You can use the oc adm must-gather tool to gather diagnostic data for resources of various types, such as OpenTelemetryCollector , Instrumentation , and the created resources like Deployment , Pod , or ConfigMap . The oc adm must-gather tool creates a new pod that collects this data. Procedure From the directory where you want to save the collected data, run the oc adm must-gather command to collect the data: USD oc adm must-gather --image=ghcr.io/open-telemetry/opentelemetry-operator/must-gather -- \ /usr/bin/must-gather --operator-namespace <operator_namespace> 1 1 The default namespace where the Operator is installed is openshift-opentelemetry-operator . Verification Verify that the new directory is created and contains the collected data. 10.2. Getting the OpenTelemetry Collector logs You can get the logs for the OpenTelemetry Collector as follows. Procedure Set the relevant log level in the OpenTelemetryCollector custom resource (CR): config: service: telemetry: logs: level: debug 1 1 Collector's log level. Supported values include info , warn , error , or debug . Defaults to info . Use the oc logs command or the web console to retrieve the logs. 10.3. Exposing the metrics The OpenTelemetry Collector exposes the metrics about the data volumes it has processed. The following metrics are for spans, although similar metrics are exposed for metrics and logs signals: otelcol_receiver_accepted_spans The number of spans successfully pushed into the pipeline. otelcol_receiver_refused_spans The number of spans that could not be pushed into the pipeline. otelcol_exporter_sent_spans The number of spans successfully sent to the destination. otelcol_exporter_enqueue_failed_spans The number of spans failed to be added to the sending queue. The Operator creates a <cr_name>-collector-monitoring telemetry service that you can use to scrape the metrics endpoint. Procedure Enable the telemetry service by adding the following lines in the OpenTelemetryCollector custom resource (CR): # ... config: service: telemetry: metrics: address: ":8888" 1 # ... 1 The address at which the internal collector metrics are exposed. Defaults to :8888 . Retrieve the metrics by running the following command, which uses the port-forwarding Collector pod: USD oc port-forward <collector_pod> In the OpenTelemetryCollector CR, set the enableMetrics field to true to scrape internal metrics: apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector spec: # ... mode: deployment observability: metrics: enableMetrics: true # ... Depending on the deployment mode of the OpenTelemetry Collector, the internal metrics are scraped by using PodMonitors or ServiceMonitors . Note Alternatively, if you do not set the enableMetrics field to true , you can access the metrics endpoint at http://localhost:8888/metrics . On the Observe page in the web console, enable User Workload Monitoring to visualize the scraped metrics. Note Not all processors expose the required metrics. In the web console, go to Observe Dashboards and select the OpenTelemetry Collector dashboard from the drop-down list to view it. 
Tip You can filter the visualized data such as spans or metrics by the Collector instance, namespace, or OpenTelemetry components such as processors, receivers, or exporters. 10.4. Debug Exporter You can configure the Debug Exporter to export the collected data to the standard output. Procedure Configure the OpenTelemetryCollector custom resource as follows: config: exporters: debug: verbosity: detailed service: pipelines: traces: exporters: [debug] metrics: exporters: [debug] logs: exporters: [debug] Use the oc logs command or the web console to export the logs to the standard output. 10.5. Using the Network Observability Operator for troubleshooting You can debug the traffic between your observability components by visualizing it with the Network Observability Operator. Prerequisites You have installed the Network Observability Operator as explained in "Installing the Network Observability Operator". Procedure In the OpenShift Container Platform web console, go to Observe Network Traffic Topology . Select Namespace to filter the workloads by the namespace in which your OpenTelemetry Collector is deployed. Use the network traffic visuals to troubleshoot possible issues. See "Observing the network traffic from the Topology view" for more details. Additional resources Installing the Network Observability Operator Observing the network traffic from the Topology view 10.6. Troubleshooting the instrumentation To troubleshoot the instrumentation, look for any of the following issues: Issues with instrumentation injection into your workload Issues with data generation by the instrumentation libraries 10.6.1. Troubleshooting instrumentation injection into your workload To troubleshoot instrumentation injection, you can perform the following activities: Checking if the Instrumentation object was created Checking if the init-container started Checking if the resources were deployed in the correct order Searching for errors in the Operator logs Double-checking the pod annotations Procedure Run the following command to verify that the Instrumentation object was successfully created: USD oc get instrumentation -n <workload_project> 1 1 The namespace where the instrumentation was created. Run the following command to verify that the opentelemetry-auto-instrumentation init-container successfully started, which is a prerequisite for instrumentation injection into workloads: USD oc get events -n <workload_project> 1 1 The namespace where the instrumentation is injected for workloads. Example output ... Created container opentelemetry-auto-instrumentation ... Started container opentelemetry-auto-instrumentation Verify that the resources were deployed in the correct order for the auto-instrumentation to work correctly. The correct order is to deploy the Instrumentation custom resource (CR) before the application. For information about the Instrumentation CR, see the section "Configuring the instrumentation". Note When the pod starts, the Red Hat build of OpenTelemetry Operator checks the Instrumentation CR for annotations containing instructions for injecting auto-instrumentation. Generally, the Operator then adds an init-container to the application's pod that injects the auto-instrumentation and environment variables into the application's container. If the Instrumentation CR is not available to the Operator when the application is deployed, the Operator is unable to inject the auto-instrumentation. Fixing the order of deployment requires the following steps: Update the instrumentation settings. 
Delete the instrumentation object. Redeploy the application. Run the following command to inspect the Operator logs for instrumentation errors: USD oc logs -l app.kubernetes.io/name=opentelemetry-operator --container manager -n openshift-opentelemetry-operator --follow Troubleshoot pod annotations for the instrumentations for a specific programming language. See the required annotation fields and values in "Configuring the instrumentation". Verify that the application pods that you are instrumenting are labeled with correct annotations and the appropriate auto-instrumentation settings have been applied. Example Example command to get pod annotations for an instrumented Python application USD oc get pods -n <workload_project> -o jsonpath='{range .items[?(@.metadata.annotations["instrumentation.opentelemetry.io/inject-python"]=="true")]}{.metadata.name}{"\n"}{end}' Verify that the annotation applied to the instrumentation object is correct for the programming language that you are instrumenting. If there are multiple instrumentations in the same namespace, specify the name of the Instrumentation object in their annotations. Example If the Instrumentation object is in a different namespace, specify the namespace in the annotation. Example Verify that the OpenTelemetryCollector custom resource specifies the auto-instrumentation annotations under spec.template.metadata.annotations . If the auto-instrumentation annotations are in spec.metadata.annotations instead, move them into spec.template.metadata.annotations . 10.6.2. Troubleshooting telemetry data generation by the instrumentation libraries You can troubleshoot telemetry data generation by the instrumentation libraries by checking the endpoint, looking for errors in your application logs, and verifying that the Collector is receiving the telemetry data. Procedure Verify that the instrumentation is transmitting data to the correct endpoint: USD oc get instrumentation <instrumentation_name> -n <workload_project> -o jsonpath='{.spec.endpoint}' The default endpoint http://localhost:4317 for the Instrumentation object is only applicable to a Collector instance that is deployed as a sidecar in your application pod. If you are using an incorrect endpoint, correct it by editing the Instrumentation object and redeploying your application. Inspect your application logs for error messages that might indicate that the instrumentation is malfunctioning: USD oc logs <application_pod> -n <workload_project> If the application logs contain error messages that indicate that the instrumentation might be malfunctioning, install the OpenTelemetry SDK and libraries locally. Then run your application locally and troubleshoot for issues between the instrumentation libraries and your application without OpenShift Container Platform. Use the Debug Exporter to verify that the telemetry data is reaching the destination OpenTelemetry Collector instance. For more information, see "Debug Exporter". | [
"oc adm must-gather --image=ghcr.io/open-telemetry/opentelemetry-operator/must-gather -- /usr/bin/must-gather --operator-namespace <operator_namespace> 1",
"config: service: telemetry: logs: level: debug 1",
"config: service: telemetry: metrics: address: \":8888\" 1",
"oc port-forward <collector_pod>",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector spec: mode: deployment observability: metrics: enableMetrics: true",
"config: exporters: debug: verbosity: detailed service: pipelines: traces: exporters: [debug] metrics: exporters: [debug] logs: exporters: [debug]",
"oc get instrumentation -n <workload_project> 1",
"oc get events -n <workload_project> 1",
"... Created container opentelemetry-auto-instrumentation ... Started container opentelemetry-auto-instrumentation",
"oc logs -l app.kubernetes.io/name=opentelemetry-operator --container manager -n openshift-opentelemetry-operator --follow",
"instrumentation.opentelemetry.io/inject-python=\"true\"",
"oc get pods -n <workload_project> -o jsonpath='{range .items[?(@.metadata.annotations[\"instrumentation.opentelemetry.io/inject-python\"]==\"true\")]}{.metadata.name}{\"\\n\"}{end}'",
"instrumentation.opentelemetry.io/inject-nodejs: \"<instrumentation_object>\"",
"instrumentation.opentelemetry.io/inject-nodejs: \"<other_namespace>/<instrumentation_object>\"",
"oc get instrumentation <instrumentation_name> -n <workload_project> -o jsonpath='{.spec.endpoint}'",
"oc logs <application_pod> -n <workload_project>"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/red_hat_build_of_opentelemetry/otel-troubleshoot |
Chapter 1. Red Hat OpenShift support for Windows Containers overview | Chapter 1. Red Hat OpenShift support for Windows Containers overview You can add Windows nodes either by creating a compute machine set or by specifying existing Bring-Your-Own-Host (BYOH) Window instances through a configuration map . Note Compute machine sets are not supported for bare metal or provider agnostic clusters. For workloads including both Linux and Windows, OpenShift Container Platform allows you to deploy Windows workloads running on Windows Server containers while also providing traditional Linux workloads hosted on Red Hat Enterprise Linux CoreOS (RHCOS) or Red Hat Enterprise Linux (RHEL). For more information, see getting started with Windows container workloads . You need the WMCO to run Windows workloads in your cluster. The WMCO orchestrates the process of deploying and managing Windows workloads on a cluster. For more information, see how to enable Windows container workloads . You can create a Windows MachineSet object to create infrastructure Windows machine sets and related machines so that you can move supported Windows workloads to the new Windows machines. You can create a Windows MachineSet object on multiple platforms. You can schedule Windows workloads to Windows compute nodes. You can perform Windows Machine Config Operator upgrades to ensure that your Windows nodes have the latest updates. You can remove a Windows node by deleting a specific machine. You can use Bring-Your-Own-Host (BYOH) Windows instances to repurpose Windows Server VMs and bring them to OpenShift Container Platform. BYOH Windows instances benefit users who are looking to mitigate major disruptions in the event that a Windows server goes offline. You can use BYOH Windows instances as nodes on OpenShift Container Platform 4.8 and later versions. You can disable Windows container workloads by performing the following: Uninstalling the Windows Machine Config Operator Deleting the Windows Machine Config Operator namespace | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/windows_container_support_for_openshift/windows-container-overview |
8.5. Managing Organizational Units | 8.5. Managing Organizational Units Administrators can use organizational units (OU) as a container for directory entries. For example, you can use OUs to separate user and group entries. To manage OUs in Directory Server, use the dsidm organizationalunit command. To create an OU, enter: To list the OUs in an entry, enter: To rename an OU, enter: To delete an OU, enter: | [
"dsidm -D \"cn=Directory Manager\" ldap://server.example.com -b \" dc=example,dc=com \" organizationalunit create --ou OU_name",
"dsidm -D \"cn=Directory Manager\" ldap://server.example.com -b \" dc=example,dc=com \" organizationalunit list People",
"dsidm -D \"cn=Directory Manager\" ldap://server.example.com -b \" dc=example,dc=com \" organizationalunit rename old_name new_name",
"dsidm -D \"cn=Directory Manager\" ldap://server.example.com -b \" dc=example,dc=com \" organizationalunit delete OU_name"
]
| https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/manaing-organizational-units |
4.9. Global Transactions | 4.9. Global Transactions Global or client XA transactions allow the JBoss Data Virtualization JDBC API to participate in transactions that are beyond the scope of a single client resource. For this, use the org.teiid.jdbc.TeiidDataSource class for establishing connections. When the data source class is used in the context of a user transaction in an application server, such as JBoss, WebSphere, or Weblogic, the resulting connection will already be associated with the current XA transaction. No additional client JDBC code is necessary to interact with the XA transaction. The following code demonstrates usage of UserTransactions. The following code demonstrates manual usage of XA transactions. With the use of global transactions, multiple XAConnections may participate in the same transaction. It is important to note that the JDBC XAResource isSameRM() method only returns true if connections are made to the same server instance in a cluster. If the JBoss Data Virtualization connections are to different server instances then transactional behavior may not be the same as if they were to the same cluster member. For example, if the client transaction manager uses the same XID for each connection, duplicate XID exceptions may arise from the same physical source accessed through different cluster members. If the client transaction manager uses a different branch identifier for each connection, issues may arise with sources that lock or isolate changes based upon branch identifiers. | [
"UserTransaction ut = context.getUserTransaction(); try { ut.begin(); Datasource oracle = lookup(...) Datasource teiid = lookup(...) Connection c1 = oracle.getConnection(); Connection c2 = teiid.getConnection(); // do something with Oracle connection // do something with Teiid connection c1.close(); c2.close(); ut.commit(); } catch (Exception ex) { ut.rollback(); }",
"XAConnection xaConn = null; XAResource xaRes = null; Connection conn = null; Statement stmt = null; try { xaConn = <XADataSource instance>.getXAConnection(); xaRes = xaConn.getXAResource(); Xid xid = <new Xid instance>; conn = xaConn.getConnection(); stmt = conn.createStatement(); xaRes.start(xid, XAResource.TMNOFLAGS); stmt.executeUpdate(\"insert into a\\u0080¦\"); // other statements on this connection or other resources enlisted in this transaction // xaRes.end(xid, XAResource.TMSUCCESS); if (xaRes.prepare(xid) == XAResource.XA_OK) { xaRes.commit(xid, false); } } catch (XAException e) { xaRes.rollback(xid); } finally { // clean up code // }"
]
| https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_1_client_development/global_transactions1 |
3.10. Kernel | 3.10. Kernel kernel component Intel Xeon E5-XXXX V2 Series Processor running on the C600 chipset is not supported in Red Hat Enterprise Linux 6.3. An "unsupported hardware" message can therefore be reported by the kernel. kernel component The Red Hat Enterprise Linux 6.3 kernels upgraded the mlx4 modules to a later version. If the modules are used together with, for example, the HP InfiniBand Enablement Kit, the behavior is different. Consequently, certain Mellanox cards do not come up with network interfaces on Red Hat Enterprise Linux 6.3. To work around this problem, the mlx7_core module has to be loaded with the port_type_array option and a 2 parameter for each used InfiniBand card. Follow this example to manually load the driver for two cards in the system: The last of the above commands will show the new interfaces. To configure these parameters to be applied by the system when the modules are loaded, run: kernel component When using Chelsio's iSCSI HBAs for an iSCSI root partition, the first boot after install fails. This occurs because Chelsio's iSCSI HBA is not properly detected. To work around this issue, users must add the iscsi_firmware parameter to grub's kernel command line. This will signal to dracut to boot from the iSCSI HBA. kernel component In Red Hat Enterprise Linux 6.3, three module parameters ( num_lro , rss_mask , and rss_xor ) that were supported by older versions of the mlx4_en driver have become obsolete and are no longer used. If you supply these parameters, the Red Hat Enterprise Linux 6.3 driver will ignore them and log a warning. kernel component Due to a race condition, in certain cases, writes to RAID4/5/6 while the array is reconstructing could hang the system. kernel component The installation of Red Hat Enterprise Linux 6.3 i386 may occasionally fail. To work around this issue, add the following parameter to the kernel command line: kernel component If a device reports an error, while it is opened (via the open(2) system call), then the device is closed (via the close(2) system call), and the /dev/disk/by-id link for the device may be removed. When the problem on the device that caused the error is resolved, the by-id link is not re-created. To work around this issue, run the following command: kernel component Platforms with BIOS/UEFI that are unaware of PCI-e SR-IOV capabilities may fail to enable virtual functions kernel component When an HBA that uses the mpt2sas driver is connected to a storage using an SAS switch LSI SAS 6160, the driver may become unresponsive during Controller Fail Drive Fail (CFDF) testing. This is due to faulty firmware that is present on the switch. To fix this issue, use a newer version (14.00.00.00 or later) of firmware for the LSI SAS 6160 switch. kernel component, BZ# 690523 If appropriate SCSI device handlers ( scsi_dh modules) are not available when the storage driver (for example, lpfc ) is first loaded, I/O operations may be issued to SCSI multipath devices that are not ready for those I/O operations. This can result in significant delays during system boot and excessive I/O error messages in the kernel log. Provided the storage driver is loaded before multipathd is started (which is the default behavior), users can work around this issue by making sure the appropriate SCSI device handlers ( scsi_dh modules) are available by specifying one of the following kernel command line parameters which dracut consumes: Note that the order of the listed scsi_dh modules does not matter. 
Specifying one of the above parameters causes the scsi_dh module(s) to load before the storage driver is loaded or multipath is started. kernel component, BZ# 745713 In some cases, Red Hat Enterprise Linux 6 guests running fully-virtualized under Red Hat Enterprise Linux 5 experience a time drift or fail to boot. In other cases, drifting may start after migration of the virtual machine to a host with a different speed. This is due to limitations in the Red Hat Enterprise Linux 5 Xen hypervisor. To work around this, add the nohpet parameter or, alternatively, the clocksource=jiffies parameter to the kernel command line of the guest. Or, if running under Red Hat Enterprise Linux 5.7 or newer, locate the guest configuration file for the guest and add the hpet=0 parameter in it. kernel component On some systems, Xen full-virt guests may print the following message when booting: It is possible to avoid the memory trimming by using the disable_mtrr_trim kernel command line option. kernel component The perf record command becomes unresponsive when specifying a tracepoint event and a hardware event at the same time. kernel component On 64-bit PowerPC, the following command may cause kernel panic: kernel component Applications are increasingly using more than 1024 file descriptors. It is not recommended to increase the default soft limit of file descriptors because it may break applications that use the select() call. However, it is safe to increase the default hard limit; that way, applications requiring a large amount of file descriptors can increase their soft limit without needing root privileges and without any user intervention (see the limits.conf sketch after this section). kernel component, BZ# 770545 In Red Hat Enterprise Linux 6.2 and Red Hat Enterprise Linux 6.3, the default value for sysctl vm.zone_reclaim_mode is now 0 , whereas in Red Hat Enterprise Linux 6.1 it was 1 . kernel component Using ALSA with an HDA Intel sound card and the Conexant CX20585 codec causes sound and recording failures. To work around this issue, add the following line to the /etc/modprobe.d/dist-alsa.conf file: kernel component In network-only use of Brocade Converged Network Adapters (CNAs), switches that are not properly configured to work with Brocade FCoE functionality can cause a continuous linkup/linkdown condition. This causes continuous messages on the host console: To work around this issue, unload the Brocade bfa driver. kernel component The lpfc driver is deprecating the sysfs mbox interface as it is no longer used by the Emulex tools. Reads and writes are now stubbed out and only return the -EPERM (Operation not permitted) symbol. kernel component In Red Hat Enterprise Linux 6, a legacy bug in the PowerEdge Expandable RAID Controller 5 (PERC5) causes the kdump kernel to fail to scan for SCSI devices. It is usually triggered when a large number of I/O operations are pending on the controller in the first kernel before performing a kdump. kernel component, BZ# 679262 In Red Hat Enterprise Linux 6.2 and later, due to security concerns, addresses in /proc/kallsyms and /proc/modules show all zeros when accessed by a non-root user. kernel component Superfluous information is displayed on the console due to a correctable machine check error occurring. This information can be safely ignored by the user. Machine check error reporting can be disabled by using the nomce kernel boot option, which disables machine check error reporting, or the mce=ignore_ce kernel boot option, which disables correctable machine check error reporting.
kernel component The order in which PCI devices are scanned may change from one major Red Hat Enterprise Linux release to another. This may result in device names changing, for example, when upgrading from Red Hat Enterprise Linux 5 to 6. You must confirm that a device you refer to during installation is the intended device. One way to assure the correctness of device names is to, in some configurations, determine the mapping from the controller name to the controller's PCI address in the older release, and then compare this to the mapping in the newer release, to ensure that the device name is as expected. The following is an example from /var/log/messages: If the device name is incorrect, add the pci=bfsort parameter to the kernel command line, and check again. kernel component Enabling CHAP (Challenge-Handshake Authentication Protocol) on an iSCSI target for the be2iscsi driver results in kernel panic. To work around this issue, disable CHAP on the iSCSI target. kernel component Newer VPD (Vital Product Data) blocks can exceed the size the tg3 driver normally handles. As a result, some of the routines that operate on the VPD blocks may fail. For example, the nvram test fails when running the ethtool -t command on BCM5719 and BCM5720 Ethernet Controllers. kernel component Running the ethtool -t command on BCM5720 Ethernet controllers causes a loopback test failure because the tg3 driver does not wait long enough for a link. kernel component The tg3 driver in Red Hat Enterprise Linux 6.2 does not include support for Jumbo frames and TSO (TCP Segmentation Offloading) on BCM5719 Ethernet controllers. As a result, the following error message is returned when attempting to configure, for example, Jumbo frames: kernel component The default interrupt configuration for the Emulex LPFC FC/FCoE driver has changed from INT-X to MSI-X. This is reflected by the lpfc_use_msi module parameter (in /sys/class/scsi_host/host#/lpfc_use_msi ) being set to 2 by default, instead of 0 . Two issues provide motivation for this change: SR-IOV capability only works with the MSI-X interrupt mode, and certain recent platforms only support MSI or MSI-X. However, the change to the LPFC default interrupt mode can bring out host problems where MSI/MSI-X support is not fully functional. Other host problems can exist when running in the INT-X mode. If any of the following symptoms occur after upgrading to, or installing, Red Hat Enterprise Linux 6.2 with an Emulex LPFC adapter in the system, change the value of the lpfc module parameter, lpfc_use_msi , to 0 : The initialization or attachment of the lpfc adapter may fail with mailbox errors. As a result, the lpfc adapter is not configured on the system. The following messages appear in /var/log/messages : While the lpfc adapter is operating, it may fail with mailbox errors, resulting in the inability to access certain devices. The following messages appear in /var/log/messages : Performing a warm reboot causes any subsequent boots to halt or stop because the BIOS is detecting the lpfc adapter. The system BIOS logs the following messages: kernel component The minimum firmware version for NIC adapters managed by netxen_nic is 4.0.550. This includes the boot firmware which is flashed in option ROM on the adapter itself. kernel component, BZ# 683012 High stress on 64-bit IBM POWER series machines prevents kdump from successfully capturing the vmcore . As a result, the second kernel is not loaded, and the system becomes unresponsive.
kernel component Triggering kdump to capture a vmcore through the network using the Intel 82575EB ethernet device in a 32 bit environment causes the networking driver to not function properly in the kdump kernel, and prevent the vmcore from being captured. kernel component Memory Type Range Register (MTRR) setup on some hyperthreaded machines may be incorrect following a suspend/resume cycle. This can cause graphics performance (specifically, scrolling) to slow considerably after a suspend/resume cycle. To work around this issue, disable and then re-enable the hyperthreaded sibling CPUs around suspend/resume, for example: #!/bin/sh # Disable hyper-threading processor cores on suspend and hibernate, re-enable # on resume. # This file goes into /etc/pm/sleep.d/ case USD1 in hibernate|suspend) echo 0 > /sys/devices/system/cpu/cpu1/online echo 0 > /sys/devices/system/cpu/cpu3/online ;; thaw|resume) echo 1 > /sys/devices/system/cpu/cpu1/online echo 1 > /sys/devices/system/cpu/cpu3/online ;; esac kernel component In Red Hat Enterprise Linux 6.2, nmi_watchdog registers with the perf subsystem. Consequently, during boot, the perf subsystem grabs control of the performance counter registers, blocking OProfile from working. To resolve this, either boot with the nmi_watchdog=0 kernel parameter set, or run the following command to disable it at run time: To re-enable nmi-watchdog , use the following command kernel component, BZ# 603911 Due to the way ftrace works when modifying the code during start-up, the NMI watchdog causes too much noise and ftrace can not find a quiet period to instrument the code. Consequently, machines with more than 512 CPUs will encounter issues with the NMI watchdog. Such issues will return error messages similar to BUG: NMI Watchdog detected LOCKUP and have either ftrace_modify_code or ipi_handler in the backtrace. To work around this issue, disable NMI watchdog by setting the nmi_watchdog=0 kernel parameter, or using the following command at run time: kernel component On 64-bit POWER systems the EHEA NIC driver will fail when attempting to dump a vmcore via NFS. To work around this issue, utilize other kdump facilities, for example dumping to the local file system, or dumping over SSH. kernel component, BZ# 587909 A BIOS emulated floppy disk might cause the installation or kernel boot process to hang. To avoid this, disable emulated floppy disk support in the BIOS. kernel component The preferred method to enable nmi_watchdog on 32-bit x86 systems is to use either nmi_watchdog=2 or nmi_watchdog=lapic parameters. The parameter nmi_watchdog=1 is not supported. kernel component The kernel parameter, pci=noioapicquirk , is required when installing the 32-bit variant of Red Hat Enterprise Linux 6 on HP xw9300 workstations. Note that the parameter change is not required when installing the 64-bit variant. | [
"~]# rmmod mlx4_en ~]# rmmod mlx4_core ~]# modprobe mlx4_core port_type_array=2,2 ~]# modprobe mlx4_en ~]# ip a",
"~]# echo 'options mlx4_core port_type_array=2,2' >/etc/modprobe.d/mlx4_core.conf",
"vmalloc=256MB",
"~]# echo 'change' > /sys/class/block/sdX/uevent",
"rdloaddriver=scsi_dh_emc",
"rdloaddriver=scsi_dh_rdac,scsi_dh_hp_sw",
"rdloaddriver=scsi_dh_emc,scsi_dh_rdac,scsi_dh_alua",
"WARNING: BIOS bug: CPU MTRRs don't cover all of memory, losing <number>MB of RAM",
"~]# ./perf record -agT -e sched:sched_switch -F 100 -- sleep 3",
"options snd-hda-intel model=thinkpad",
"bfa xxxx:xx:xx.x: Base port (WWN = xx:xx:xx:xx:xx:xx:xx:xx) lost fabric connectivity",
"kernel: cciss0: <0x3230> at PCI 0000:1f:00.0 IRQ 71 using DAC ... kernel: cciss1: <0x3230> at PCI 0000:02:00.0 IRQ 75 using DAC",
"SIOCSIFMTU: Invalid argument",
"lpfc 0000:04:08.0: 0:0:0443 Adapter failed to set maximum DMA length mbxStatus x0 lpfc 0000:04:08.0: 0:0446 Adapter failed to init (255), mbxCmd x9 CFG_RING, mbxStatus x0, ring 0 lpfc 0000:04:08.0: 0:1477 Failed to set up hba ACPI: PCI interrupt for device 0000:04:08.0 disabled",
"lpfc 0000:0d:00.0: 0:0310 Mailbox command x5 timeout Data: x0 x700 xffff81039ddd0a00 lpfc 0000:0d:00.0: 0:0345 Resetting board due to mailbox timeout lpfc 0000:0d:00.0: 0:(0):2530 Mailbox command x23 cannot issue Data: xd00 x2",
"Installing Emulex BIOS ... Bringing the Link up, Please wait Bringing the Link up, Please wait",
"#!/bin/sh Disable hyper-threading processor cores on suspend and hibernate, re-enable on resume. This file goes into /etc/pm/sleep.d/ case USD1 in hibernate|suspend) echo 0 > /sys/devices/system/cpu/cpu1/online echo 0 > /sys/devices/system/cpu/cpu3/online ;; thaw|resume) echo 1 > /sys/devices/system/cpu/cpu1/online echo 1 > /sys/devices/system/cpu/cpu3/online ;; esac",
"echo 0 > /proc/sys/kernel/nmi_watchdog",
"echo 1 > /proc/sys/kernel/nmi_watchdog",
"echo 0 > /proc/sys/kernel/nmi_watchdog"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/kernel_issues |
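As a hedged illustration of the file descriptor guidance in the kernel notes above (raise only the hard limit system-wide so applications can raise their own soft limit), the following sketch uses /etc/security/limits.conf; the numeric values are assumptions chosen for the example, not Red Hat recommendations.

# /etc/security/limits.conf -- raise only the hard limit for all users
*    hard    nofile    131072

# An unprivileged application (or its startup script) can then raise its own
# soft limit up to that hard limit without root privileges:
ulimit -n 8192

Changes to limits.conf take effect for new login sessions, and a hard limit lowered from within a shell cannot be raised again in that session, so verify the effective values with ulimit -Hn and ulimit -Sn before relying on them.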
Chapter 19. Using the mount Command | Chapter 19. Using the mount Command On Linux, UNIX, and similar operating systems, file systems on different partitions and removable devices (CDs, DVDs, or USB flash drives for example) can be attached to a certain point (the mount point ) in the directory tree, and then detached again. To attach or detach a file system, use the mount or umount command respectively. This chapter describes the basic use of these commands, as well as some advanced topics, such as moving a mount point or creating shared subtrees. 19.1. Listing Currently Mounted File Systems To display all currently attached file systems, use the following command with no additional arguments: This command displays the list of known mount points. Each line provides important information about the device name, the file system type, the directory in which it is mounted, and relevant mount options in the following form: device on directory type type ( options ) The findmnt utility, which allows users to list mounted file systems in a tree-like form, is also available from Red Hat Enterprise Linux 6.1. To display all currently attached file systems, run the findmnt command with no additional arguments: 19.1.1. Specifying the File System Type By default, the output of the mount command includes various virtual file systems such as sysfs and tmpfs . To display only the devices with a certain file system type, provide the -t option: Similarly, to display only the devices with a certain file system using the findmnt command: For a list of common file system types, see Table 19.1, "Common File System Types" . For an example usage, see Example 19.1, "Listing Currently Mounted ext4 File Systems" . Example 19.1. Listing Currently Mounted ext4 File Systems Usually, both / and /boot partitions are formatted to use ext4 . To display only the mount points that use this file system, use the following command: To list such mount points using the findmnt command, type: | [
"mount",
"findmnt",
"mount -t type",
"findmnt -t type",
"mount -t ext4 /dev/sda2 on / type ext4 (rw) /dev/sda1 on /boot type ext4 (rw)",
"findmnt -t ext4 TARGET SOURCE FSTYPE OPTIONS / /dev/sda2 ext4 rw,realtime,seclabel,barrier=1,data=ordered /boot /dev/sda1 ext4 rw,realtime,seclabel,barrier=1,data=ordered"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/ch-mount-command |
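To complement the listing examples above, the sketch below shows a full attach/verify/detach cycle; the device name /dev/sdb1 and the mount point /mnt/data are assumptions for illustration only.

# Attach the file system on /dev/sdb1, confirm it, then detach it again
mkdir -p /mnt/data
mount /dev/sdb1 /mnt/data
findmnt /mnt/data
umount /mnt/data

If the file system type is not detected automatically, pass it explicitly with the -t option described earlier, for example mount -t ext4 /dev/sdb1 /mnt/data.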
4.7. Displaying LVM Information with the lvm Command | 4.7. Displaying LVM Information with the lvm Command The lvm command provides several built-in options that you can use to display information about LVM support and configuration. lvm devtypes Displays the recognized built-in block device types (Red Hat Enterprise Linux release 6.6 and later). lvm formats Displays recognized metadata formats. lvm help Displays LVM help text. lvm segtypes Displays recognized logical volume segment types. lvm tags Displays any tags defined on this host. For information on LVM object tags, see Appendix D, LVM Object Tags . lvm version Displays the current version information. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/lvmdisplaycommand |
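When troubleshooting metadata or segment-type questions, it can help to run the display options above together and compare the output against what your configuration requires; this is a minimal sketch using only the subcommands documented in this section.

# Check what this LVM build supports before debugging metadata or segment issues
lvm version
lvm formats
lvm segtypes
lvm devtypes   # available in Red Hat Enterprise Linux 6.6 and later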
Operations Guide | Operations Guide Red Hat Ceph Storage 8 Operational tasks for Red Hat Ceph Storage Red Hat Ceph Storage Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/operations_guide/index |
Introduction to JBoss EAP | Introduction to JBoss EAP Red Hat JBoss Enterprise Application Platform 7.4 Descriptions of general Red Hat JBoss Enterprise Application Platform concepts, including its subsystems and operating modes. Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/introduction_to_jboss_eap/index |
Chapter 13. Bean Validator | Chapter 13. Bean Validator Only producer is supported The Validator component performs bean validation of the message body using the Java Bean Validation API (). Camel uses the reference implementation, which is Hibernate Validator . Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-bean-validator</artifactId> <version>{CamelSBVersion}</version> <!-- use the same version as your Camel core version --> </dependency> 13.1. URI format Where label is an arbitrary text value describing the endpoint. You can append query options to the URI in the following format, ?option=value&option=value&... 13.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 13.2.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 13.2.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 13.3. Component Options The Bean Validator component supports 8 options, which are listed below. Name Description Default Type ignoreXmlConfiguration (producer) Whether to ignore data from the META-INF/validation.xml file. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean constraintValidatorFactory (advanced) To use a custom ConstraintValidatorFactory. ConstraintValidatorFactory messageInterpolator (advanced) To use a custom MessageInterpolator. MessageInterpolator traversableResolver (advanced) To use a custom TraversableResolver. TraversableResolver validationProviderResolver (advanced) To use a a custom ValidationProviderResolver. ValidationProviderResolver validatorFactory (advanced) Autowired To use a custom ValidatorFactory. ValidatorFactory 13.4. Endpoint Options The Bean Validator endpoint is configured using URI syntax: with the following path and query parameters: 13.4.1. Path Parameters (1 parameters) Name Description Default Type label (producer) Required Where label is an arbitrary text value describing the endpoint. String 13.4.2. Query Parameters (8 parameters) Name Description Default Type group (producer) To use a custom validation group. javax.validation.groups.Default String ignoreXmlConfiguration (producer) Whether to ignore data from the META-INF/validation.xml file. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean constraintValidatorFactory (advanced) To use a custom ConstraintValidatorFactory. ConstraintValidatorFactory messageInterpolator (advanced) To use a custom MessageInterpolator. MessageInterpolator traversableResolver (advanced) To use a custom TraversableResolver. TraversableResolver validationProviderResolver (advanced) To use a a custom ValidationProviderResolver. ValidationProviderResolver validatorFactory (advanced) To use a custom ValidatorFactory. ValidatorFactory 13.5. OSGi deployment To use Hibernate Validator in the OSGi environment use dedicated ValidationProviderResolver implementation, just as org.apache.camel.component.bean.validator.HibernateValidationProviderResolver . The snippet below demonstrates this approach. You can also use HibernateValidationProviderResolver . Using HibernateValidationProviderResolver from("direct:test"). to("bean-validator://ValidationProviderResolverTest?validationProviderResolver=#myValidationProviderResolver"); <bean id="myValidationProviderResolver" class="org.apache.camel.component.bean.validator.HibernateValidationProviderResolver"/> If no custom ValidationProviderResolver is defined and the validator component has been deployed into the OSGi environment, the HibernateValidationProviderResolver will be automatically used. 13.6. 
Example Assume we have a Java bean with the following annotations Car.java public class Car { @NotNull private String manufacturer; @NotNull @Size(min = 5, max = 14, groups = OptionalChecks.class) private String licensePlate; // getter and setter } and an interface definition for our custom validation group OptionalChecks.java public interface OptionalChecks { } With the following Camel route, only the @NotNull constraints on the attributes manufacturer and licensePlate will be validated (Camel uses the default group javax.validation.groups.Default ). from("direct:start") .to("bean-validator://x") .to("mock:end") If you want to check the constraints from the group OptionalChecks , you have to define the route like this from("direct:start") .to("bean-validator://x?group=OptionalChecks") .to("mock:end") If you want to check the constraints from both groups, you have to define a new interface first AllChecks.java @GroupSequence({Default.class, OptionalChecks.class}) public interface AllChecks { } and then your route definition should look like this from("direct:start") .to("bean-validator://x?group=AllChecks") .to("mock:end") And if you have to provide your own message interpolator, traversable resolver and constraint validator factory, you have to write a route like this <bean id="myMessageInterpolator" class="my.ConstraintValidatorFactory" /> <bean id="myTraversableResolver" class="my.TraversableResolver" /> <bean id="myConstraintValidatorFactory" class="my.ConstraintValidatorFactory" /> from("direct:start") .to("bean-validator://x?group=AllChecks&messageInterpolator=#myMessageInterpolator &traversableResolver=#myTraversableResolver&constraintValidatorFactory=#myConstraintValidatorFactory") .to("mock:end") It's also possible to describe your constraints as XML and not as Java annotations.
In this case, you have to provide the file META-INF/validation.xml which could looks like this validation.xml <validation-config xmlns="http://jboss.org/xml/ns/javax/validation/configuration" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://jboss.org/xml/ns/javax/validation/configuration"> <default-provider>org.hibernate.validator.HibernateValidator</default-provider> <message-interpolator>org.hibernate.validator.engine.ResourceBundleMessageInterpolator</message-interpolator> <traversable-resolver>org.hibernate.validator.engine.resolver.DefaultTraversableResolver</traversable-resolver> <constraint-validator-factory>org.hibernate.validator.engine.ConstraintValidatorFactoryImpl</constraint-validator-factory> <constraint-mapping>/constraints-car.xml</constraint-mapping> </validation-config> and the constraints-car.xml file constraints-car.xml <constraint-mappings xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://jboss.org/xml/ns/javax/validation/mapping validation-mapping-1.0.xsd" xmlns="http://jboss.org/xml/ns/javax/validation/mapping"> <default-package>org.apache.camel.component.bean.validator</default-package> <bean class="CarWithoutAnnotations" ignore-annotations="true"> <field name="manufacturer"> <constraint annotation="javax.validation.constraints.NotNull" /> </field> <field name="licensePlate"> <constraint annotation="javax.validation.constraints.NotNull" /> <constraint annotation="javax.validation.constraints.Size"> <groups> <value>org.apache.camel.component.bean.validator.OptionalChecks</value> </groups> <element name="min">5</element> <element name="max">14</element> </constraint> </field> </bean> </constraint-mappings> Here is the XML syntax for the example route definition for OrderedChecks . Note that the body should include an instance of a class to validate. <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd"> <camelContext id="camel" xmlns="http://camel.apache.org/schema/spring"> <route> <from uri="direct:start"/> <to uri="bean-validator://x?group=org.apache.camel.component.bean.validator.OrderedChecks"/> </route> </camelContext> </beans> 13.7. Spring Boot Auto-Configuration When using bean-validator with Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-bean-validator-starter</artifactId> </dependency> The component supports 9 options, which are listed below. Name Description Default Type camel.component.bean-validator.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.bean-validator.constraint-validator-factory To use a custom ConstraintValidatorFactory. The option is a javax.validation.ConstraintValidatorFactory type. ConstraintValidatorFactory camel.component.bean-validator.enabled Whether to enable auto configuration of the bean-validator component. 
This is enabled by default. Boolean camel.component.bean-validator.ignore-xml-configuration Whether to ignore data from the META-INF/validation.xml file. false Boolean camel.component.bean-validator.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.bean-validator.message-interpolator To use a custom MessageInterpolator. The option is a javax.validation.MessageInterpolator type. MessageInterpolator camel.component.bean-validator.traversable-resolver To use a custom TraversableResolver. The option is a javax.validation.TraversableResolver type. TraversableResolver camel.component.bean-validator.validation-provider-resolver To use a a custom ValidationProviderResolver. The option is a javax.validation.ValidationProviderResolver type. ValidationProviderResolver camel.component.bean-validator.validator-factory To use a custom ValidatorFactory. The option is a javax.validation.ValidatorFactory type. ValidatorFactory | [
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-bean-validator</artifactId> <version>{CamelSBVersion}</version> <!-- use the same version as your Camel core version --> </dependency>",
"bean-validator:label[?options]",
"bean-validator:label",
"from(\"direct:test\"). to(\"bean-validator://ValidationProviderResolverTest?validationProviderResolver=#myValidationProviderResolver\");",
"<bean id=\"myValidationProviderResolver\" class=\"org.apache.camel.component.bean.validator.HibernateValidationProviderResolver\"/>",
"public class Car { @NotNull private String manufacturer; @NotNull @Size(min = 5, max = 14, groups = OptionalChecks.class) private String licensePlate; // getter and setter }",
"public interface OptionalChecks { }",
"from(\"direct:start\") .to(\"bean-validator://x\") .to(\"mock:end\")",
"from(\"direct:start\") .to(\"bean-validator://x?group=OptionalChecks\") .to(\"mock:end\")",
"@GroupSequence({Default.class, OptionalChecks.class}) public interface AllChecks { }",
"from(\"direct:start\") .to(\"bean-validator://x?group=AllChecks\") .to(\"mock:end\")",
"<bean id=\"myMessageInterpolator\" class=\"my.ConstraintValidatorFactory\" /> <bean id=\"myTraversableResolver\" class=\"my.TraversableResolver\" /> <bean id=\"myConstraintValidatorFactory\" class=\"my.ConstraintValidatorFactory\" />",
"from(\"direct:start\") .to(\"bean-validator://x?group=AllChecks&messageInterpolator=#myMessageInterpolator &traversableResolver=#myTraversableResolver&constraintValidatorFactory=#myConstraintValidatorFactory\") .to(\"mock:end\")",
"<validation-config xmlns=\"http://jboss.org/xml/ns/javax/validation/configuration\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://jboss.org/xml/ns/javax/validation/configuration\"> <default-provider>org.hibernate.validator.HibernateValidator</default-provider> <message-interpolator>org.hibernate.validator.engine.ResourceBundleMessageInterpolator</message-interpolator> <traversable-resolver>org.hibernate.validator.engine.resolver.DefaultTraversableResolver</traversable-resolver> <constraint-validator-factory>org.hibernate.validator.engine.ConstraintValidatorFactoryImpl</constraint-validator-factory> <constraint-mapping>/constraints-car.xml</constraint-mapping> </validation-config>",
"<constraint-mappings xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://jboss.org/xml/ns/javax/validation/mapping validation-mapping-1.0.xsd\" xmlns=\"http://jboss.org/xml/ns/javax/validation/mapping\"> <default-package>org.apache.camel.component.bean.validator</default-package> <bean class=\"CarWithoutAnnotations\" ignore-annotations=\"true\"> <field name=\"manufacturer\"> <constraint annotation=\"javax.validation.constraints.NotNull\" /> </field> <field name=\"licensePlate\"> <constraint annotation=\"javax.validation.constraints.NotNull\" /> <constraint annotation=\"javax.validation.constraints.Size\"> <groups> <value>org.apache.camel.component.bean.validator.OptionalChecks</value> </groups> <element name=\"min\">5</element> <element name=\"max\">14</element> </constraint> </field> </bean> </constraint-mappings>",
"<beans xmlns=\"http://www.springframework.org/schema/beans\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd\"> <camelContext id=\"camel\" xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:start\"/> <to uri=\"bean-validator://x?group=org.apache.camel.component.bean.validator.OrderedChecks\"/> </route> </camelContext> </beans>",
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-bean-validator-starter</artifactId> </dependency>"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_for_spring_boot/3.20/html/camel_spring_boot_reference/csb-camel-bean-validator-component-starter |
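For Spring Boot users, the auto-configuration options listed above can also be set directly in application.properties; the sketch below uses only option names taken from the table, and the chosen values are illustrative assumptions rather than recommended defaults.

# application.properties -- minimal sketch of bean-validator component tuning
camel.component.bean-validator.enabled=true
camel.component.bean-validator.ignore-xml-configuration=true
camel.component.bean-validator.lazy-start-producer=false

Object-typed options such as validator-factory refer to beans in the registry, so they are usually easier to wire up in Java or XML configuration as shown in the earlier examples.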
Chapter 5. address | Chapter 5. address This chapter describes the commands under the address command. 5.1. address scope create Create a new Address Scope Usage: Table 5.1. Positional Arguments Value Summary <name> New address scope name Table 5.2. Optional Arguments Value Summary -h, --help Show this help message and exit --ip-version {4,6} Ip version (default is 4) --project <project> Owner's project (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --share Share the address scope between projects --no-share Do not share the address scope between projects (default) Table 5.3. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 5.4. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 5.5. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 5.6. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 5.2. address scope delete Delete address scope(s) Usage: Table 5.7. Positional Arguments Value Summary <address-scope> Address scope(s) to delete (name or id) Table 5.8. Optional Arguments Value Summary -h, --help Show this help message and exit 5.3. address scope list List address scopes Usage: Table 5.9. Optional Arguments Value Summary -h, --help Show this help message and exit --name <name> List only address scopes of given name in output --ip-version <ip-version> List address scopes of given ip version networks (4 or 6) --project <project> List address scopes according to their project (name or ID) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --share List address scopes shared between projects --no-share List address scopes not shared between projects Table 5.10. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 5.11. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 5.12. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 5.13. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 5.4. address scope set Set address scope properties Usage: Table 5.14. 
Positional Arguments Value Summary <address-scope> Address scope to modify (name or id) Table 5.15. Optional Arguments Value Summary -h, --help Show this help message and exit --name <name> Set address scope name --share Share the address scope between projects --no-share Do not share the address scope between projects 5.5. address scope show Display address scope details Usage: Table 5.16. Positional Arguments Value Summary <address-scope> Address scope to display (name or id) Table 5.17. Optional Arguments Value Summary -h, --help Show this help message and exit Table 5.18. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 5.19. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 5.20. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 5.21. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack address scope create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--ip-version {4,6}] [--project <project>] [--project-domain <project-domain>] [--share | --no-share] <name>",
"openstack address scope delete [-h] <address-scope> [<address-scope> ...]",
"openstack address scope list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--name <name>] [--ip-version <ip-version>] [--project <project>] [--project-domain <project-domain>] [--share | --no-share]",
"openstack address scope set [-h] [--name <name>] [--share | --no-share] <address-scope>",
"openstack address scope show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <address-scope>"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/command_line_interface_reference/address |
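Putting the subcommands above together, a typical workflow creates a scope, verifies it, and later updates or removes it; the scope name scope-v6 is an assumption chosen for this example.

# Create a shared IPv6 address scope, confirm it, then clean it up
openstack address scope create --ip-version 6 --share scope-v6
openstack address scope list --ip-version 6
openstack address scope show scope-v6
openstack address scope set --no-share scope-v6
openstack address scope delete scope-v6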
Chapter 18. Disabling Breakpoints in a Running Context | Chapter 18. Disabling Breakpoints in a Running Context Overview You can disable and re-enable breakpoints in a running routing context in the Breakpoints view. When a breakpoint is disabled, the button causes the debugger to skip over it during the debugging session. Disabling and enabling breakpoints in Breakpoints view The Breakpoints view opens with all set breakpoints enabled. To disable a breakpoint, clear its check box. For each breakpoint you disable, the Console view displays an INFO level log entry noting that it has been disabled (for example, Removing breakpoint log2 ). Likewise, for each breakpoint you re-enable, the Console view displays an INFO level log entry noting that it has been enabled (for example, Adding breakpoint log2 ). Note To re-enable a disabled breakpoint, click its check box. The Console view displays an INFO level log entry noting that the breakpoint has been added to the selected node. | null | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/tooling_user_guide/disableBreakpoints |
Chapter 12. Build configuration resources | Chapter 12. Build configuration resources Use the following procedure to configure build settings. 12.1. Build controller configuration parameters The build.config.openshift.io/cluster resource offers the following configuration parameters. Parameter Description Build Holds cluster-wide information on how to handle builds. The canonical, and only valid name is cluster . spec : Holds user-settable values for the build controller configuration. buildDefaults Controls the default information for builds. defaultProxy : Contains the default proxy settings for all build operations, including image pull or push and source download. You can override values by setting the HTTP_PROXY , HTTPS_PROXY , and NO_PROXY environment variables in the BuildConfig strategy. gitProxy : Contains the proxy settings for Git operations only. If set, this overrides any proxy settings for all Git commands, such as git clone . Values that are not set here are inherited from DefaultProxy. env : A set of default environment variables that are applied to the build if the specified variables do not exist on the build. imageLabels : A list of labels that are applied to the resulting image. You can override a default label by providing a label with the same name in the BuildConfig . resources : Defines resource requirements to execute the build. ImageLabel name : Defines the name of the label. It must have non-zero length. buildOverrides Controls override settings for builds. imageLabels : A list of labels that are applied to the resulting image. If you provided a label in the BuildConfig with the same name as one in this table, your label will be overwritten. nodeSelector : A selector which must be true for the build pod to fit on a node. tolerations : A list of tolerations that overrides any existing tolerations set on a build pod. BuildList items : Standard object's metadata. 12.2. Configuring build settings You can configure build settings by editing the build.config.openshift.io/cluster resource. Procedure Edit the build.config.openshift.io/cluster resource by entering the following command: USD oc edit build.config.openshift.io/cluster The following is an example build.config.openshift.io/cluster resource: apiVersion: config.openshift.io/v1 kind: Build 1 metadata: annotations: release.openshift.io/create-only: "true" creationTimestamp: "2019-05-17T13:44:26Z" generation: 2 name: cluster resourceVersion: "107233" selfLink: /apis/config.openshift.io/v1/builds/cluster uid: e2e9cc14-78a9-11e9-b92b-06d6c7da38dc spec: buildDefaults: 2 defaultProxy: 3 httpProxy: http://proxy.com httpsProxy: https://proxy.com noProxy: internal.com env: 4 - name: envkey value: envvalue gitProxy: 5 httpProxy: http://gitproxy.com httpsProxy: https://gitproxy.com noProxy: internalgit.com imageLabels: 6 - name: labelkey value: labelvalue resources: 7 limits: cpu: 100m memory: 50Mi requests: cpu: 10m memory: 10Mi buildOverrides: 8 imageLabels: 9 - name: labelkey value: labelvalue nodeSelector: 10 selectorkey: selectorvalue tolerations: 11 - effect: NoSchedule key: node-role.kubernetes.io/builds operator: Exists 1 Build : Holds cluster-wide information on how to handle builds. The canonical, and only valid name is cluster . 2 buildDefaults : Controls the default information for builds. 3 defaultProxy : Contains the default proxy settings for all build operations, including image pull or push and source download. 
4 env : A set of default environment variables that are applied to the build if the specified variables do not exist on the build. 5 gitProxy : Contains the proxy settings for Git operations only. If set, this overrides any Proxy settings for all Git commands, such as git clone . 6 imageLabels : A list of labels that are applied to the resulting image. You can override a default label by providing a label with the same name in the BuildConfig . 7 resources : Defines resource requirements to execute the build. 8 buildOverrides : Controls override settings for builds. 9 imageLabels : A list of labels that are applied to the resulting image. If you provided a label in the BuildConfig with the same name as one in this table, your label will be overwritten. 10 nodeSelector : A selector which must be true for the build pod to fit on a node. 11 tolerations : A list of tolerations that overrides any existing tolerations set on a build pod. | [
"oc edit build.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Build 1 metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 2 name: cluster resourceVersion: \"107233\" selfLink: /apis/config.openshift.io/v1/builds/cluster uid: e2e9cc14-78a9-11e9-b92b-06d6c7da38dc spec: buildDefaults: 2 defaultProxy: 3 httpProxy: http://proxy.com httpsProxy: https://proxy.com noProxy: internal.com env: 4 - name: envkey value: envvalue gitProxy: 5 httpProxy: http://gitproxy.com httpsProxy: https://gitproxy.com noProxy: internalgit.com imageLabels: 6 - name: labelkey value: labelvalue resources: 7 limits: cpu: 100m memory: 50Mi requests: cpu: 10m memory: 10Mi buildOverrides: 8 imageLabels: 9 - name: labelkey value: labelvalue nodeSelector: 10 selectorkey: selectorvalue tolerations: 11 - effect: NoSchedule key: node-role.kubernetes.io/builds operator: Exists"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/builds_using_buildconfig/build-configuration |
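Before editing the resource as described above, it can help to inspect the current settings, and small changes can also be applied non-interactively; the proxy URL below is a placeholder assumption, not a required value.

# Review the current cluster build configuration
oc get build.config.openshift.io/cluster -o yaml

# Apply a single default, for example an HTTP proxy for builds, without opening an editor
oc patch build.config.openshift.io/cluster --type=merge \
  -p '{"spec":{"buildDefaults":{"defaultProxy":{"httpProxy":"http://proxy.example.com"}}}}'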
Overview of Red Hat Enterprise Linux for SAP Solutions Subscription | Overview of Red Hat Enterprise Linux for SAP Solutions Subscription Red Hat Enterprise Linux for SAP Solutions 8 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/8/html/overview_of_red_hat_enterprise_linux_for_sap_solutions_subscription/index |
Chapter 2. Understanding authentication | Chapter 2. Understanding authentication For users to interact with OpenShift Container Platform, they must first authenticate to the cluster. The authentication layer identifies the user associated with requests to the OpenShift Container Platform API. The authorization layer then uses information about the requesting user to determine if the request is allowed. As an administrator, you can configure authentication for OpenShift Container Platform. 2.1. Users A user in OpenShift Container Platform is an entity that can make requests to the OpenShift Container Platform API. An OpenShift Container Platform User object represents an actor which can be granted permissions in the system by adding roles to them or to their groups. Typically, this represents the account of a developer or administrator that is interacting with OpenShift Container Platform. Several types of users can exist: User type Description Regular users This is the way most interactive OpenShift Container Platform users are represented. Regular users are created automatically in the system upon first login or can be created via the API. Regular users are represented with the User object. Examples: joe alice System users Many of these are created automatically when the infrastructure is defined, mainly for the purpose of enabling the infrastructure to interact with the API securely. They include a cluster administrator (with access to everything), a per-node user, users for use by routers and registries, and various others. Finally, there is an anonymous system user that is used by default for unauthenticated requests. Examples: system:admin system:openshift-registry system:node:node1.example.com Service accounts These are special system users associated with projects; some are created automatically when the project is first created, while project administrators can create more for the purpose of defining access to the contents of each project. Service accounts are represented with the ServiceAccount object. Examples: system:serviceaccount:default:deployer system:serviceaccount:foo:builder Each user must authenticate in some way to access OpenShift Container Platform. API requests with no authentication or invalid authentication are authenticated as requests by the anonymous system user. After authentication, policy determines what the user is authorized to do. 2.2. Groups A user can be assigned to one or more groups , each of which represent a certain set of users. Groups are useful when managing authorization policies to grant permissions to multiple users at once, for example allowing access to objects within a project, versus granting them to users individually. In addition to explicitly defined groups, there are also system groups, or virtual groups , that are automatically provisioned by the cluster. The following default virtual groups are most important: Virtual group Description system:authenticated Automatically associated with all authenticated users. system:authenticated:oauth Automatically associated with all users authenticated with an OAuth access token. system:unauthenticated Automatically associated with all unauthenticated users. 2.3. API authentication Requests to the OpenShift Container Platform API are authenticated using the following methods: OAuth access tokens Obtained from the OpenShift Container Platform OAuth server using the <namespace_route> /oauth/authorize and <namespace_route> /oauth/token endpoints. Sent as an Authorization: Bearer... header. 
Sent as a websocket subprotocol header in the form base64url.bearer.authorization.k8s.io.<base64url-encoded-token> for websocket requests. X.509 client certificates Requires an HTTPS connection to the API server. Verified by the API server against a trusted certificate authority bundle. The API server creates and distributes certificates to controllers to authenticate themselves. Any request with an invalid access token or an invalid certificate is rejected by the authentication layer with a 401 error. If no access token or certificate is presented, the authentication layer assigns the system:anonymous virtual user and the system:unauthenticated virtual group to the request. This allows the authorization layer to determine which requests, if any, an anonymous user is allowed to make. 2.3.1. OpenShift Container Platform OAuth server The OpenShift Container Platform master includes a built-in OAuth server. Users obtain OAuth access tokens to authenticate themselves to the API. When a person requests a new OAuth token, the OAuth server uses the configured identity provider to determine the identity of the person making the request. It then determines what user that identity maps to, creates an access token for that user, and returns the token for use. 2.3.1.1. OAuth token requests Every request for an OAuth token must specify the OAuth client that will receive and use the token. The following OAuth clients are automatically created when starting the OpenShift Container Platform API: OAuth client Usage openshift-browser-client Requests tokens at <namespace_route>/oauth/token/request with a user-agent that can handle interactive logins. [1] openshift-challenging-client Requests tokens with a user-agent that can handle WWW-Authenticate challenges. <namespace_route> refers to the namespace route. This is found by running the following command: USD oc get route oauth-openshift -n openshift-authentication -o json | jq .spec.host All requests for OAuth tokens involve a request to <namespace_route>/oauth/authorize . Most authentication integrations place an authenticating proxy in front of this endpoint, or configure OpenShift Container Platform to validate credentials against a backing identity provider. Requests to <namespace_route>/oauth/authorize can come from user-agents that cannot display interactive login pages, such as the CLI. Therefore, OpenShift Container Platform supports authenticating using a WWW-Authenticate challenge in addition to interactive login flows. If an authenticating proxy is placed in front of the <namespace_route>/oauth/authorize endpoint, it sends unauthenticated, non-browser user-agents WWW-Authenticate challenges rather than displaying an interactive login page or redirecting to an interactive login flow. Note To prevent cross-site request forgery (CSRF) attacks against browser clients, Basic authentication challenges are only sent if an X-CSRF-Token header is present on the request. Clients that expect to receive Basic WWW-Authenticate challenges must set this header to a non-empty value. If the authenticating proxy cannot support WWW-Authenticate challenges, or if OpenShift Container Platform is configured to use an identity provider that does not support WWW-Authenticate challenges, you must use a browser to manually obtain a token from <namespace_route>/oauth/token/request . 2.3.1.2. API impersonation You can configure a request to the OpenShift Container Platform API to act as though it originated from another user.
For more information, see User impersonation in the Kubernetes documentation. 2.3.1.3. Authentication metrics for Prometheus OpenShift Container Platform captures the following Prometheus system metrics during authentication attempts: openshift_auth_basic_password_count counts the number of oc login user name and password attempts. openshift_auth_basic_password_count_result counts the number of oc login user name and password attempts by result, success or error . openshift_auth_form_password_count counts the number of web console login attempts. openshift_auth_form_password_count_result counts the number of web console login attempts by result, success or error . openshift_auth_password_total counts the total number of oc login and web console login attempts. | [
"oc get route oauth-openshift -n openshift-authentication -o json | jq .spec.host"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/authentication_and_authorization/understanding-authentication |
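As a small illustration of the bearer-token flow described above, an existing CLI session's token can be reused against the API directly; the API server URL is a placeholder, and the endpoint shown simply returns the user the request is authenticated as.

# Print the OAuth access token for the current oc session
oc whoami -t

# Send it as a bearer token; this request asks the API which user it is authenticated as
TOKEN=$(oc whoami -t)
curl -H "Authorization: Bearer $TOKEN" \
  https://api.example.com:6443/apis/user.openshift.io/v1/users/~

Add --cacert (or, for throwaway testing only, -k) as appropriate for your cluster's certificates.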
Chapter 8. Preparing a remote installation by using VNC | Chapter 8. Preparing a remote installation by using VNC 8.1. Overview The graphical user interface is the recommended method of installing RHEL when you boot the system from a CD, DVD, or USB flash drive, or from a network using PXE. However, many enterprise systems, for example, IBM Power Systems and 64-bit IBM Z, are located in remote data center environments that are run autonomously and are not connected to a display, keyboard, and mouse. These systems are often referred to as headless systems and they are typically controlled over a network connection. The RHEL installation program includes a Virtual Network Computing (VNC) installation that runs the graphical installation on the target machine, but control of the graphical installation is handled by another system on the network. The RHEL installation program offers two VNC installation modes: Direct and Connect . Once a connection is established, the two modes do not differ. The mode you select depends on your environment. Direct mode In Direct mode, the RHEL installation program is configured to start on the target system and wait for a VNC viewer that is installed on another system before proceeding. As part of the Direct mode installation, the IP address and port are displayed on the target system. You can use the VNC viewer to connect to the target system remotely using the IP address and port, and complete the graphical installation. Connect mode In Connect mode, the VNC viewer is started on a remote system in listening mode. The VNC viewer waits for an incoming connection from the target system on a specified port. When the RHEL installation program starts on the target system, the system host name and port number are provided by using a boot option or a Kickstart command. The installation program then establishes a connection with the listening VNC viewer using the specified system host name and port number. To use Connect mode, the system with the listening VNC viewer must be able to accept incoming network connections. 8.2. Considerations By default, the installation program has a VNC server included. Consider the following items when performing a remote RHEL installation using VNC: VNC client application: A VNC client application is required to perform both a VNC Direct and Connect installation. VNC client applications are available in the repositories of most Linux distributions, and free VNC client applications are also available for other operating systems such as Windows. The following VNC client applications are available in RHEL: tigervnc is independent of your desktop environment and is installed as part of the tigervnc package. vinagre is part of the GNOME desktop environment and is installed as part of the vinagre package. Network and firewall: If the target system is not allowed inbound connections by a firewall, then you must use Connect mode or disable the firewall. Disabling a firewall can have security implications. If the system that is running the VNC viewer is not allowed incoming connections by a firewall, then you must use Direct mode, or disable the firewall. Disabling a firewall can have security implications. See the Security hardening document for more information about configuring the firewall. Custom Boot Options: You must specify custom boot options to start a VNC installation and the installation instructions might differ depending on your system architecture. 
VNC in Kickstart installations: You can use VNC-specific commands in Kickstart installations. Using only the vnc command runs a RHEL installation in Direct mode. Additional options are available to set up an installation using Connect mode. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/interactively_installing_rhel_over_the_network/preparing-a-remote-installation-by-using-vnc_rhel-installer |
Scaling storage | Scaling storage Red Hat OpenShift Data Foundation 4.17 Instructions for scaling operations in OpenShift Data Foundation Red Hat Storage Documentation Team Abstract This document explains scaling options for Red Hat OpenShift Data Foundation. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug . Chapter 1. Introduction to scaling storage Red Hat OpenShift Data Foundation is a highly scalable storage system. OpenShift Data Foundation allows you to scale by adding the disks in the multiple of three, or three or any number depending upon the deployment type. For internal (dynamic provisioning) deployment mode, you can increase the capacity by adding 3 disks at a time. For internal-attached (Local Storage Operator based) mode, you can deploy with less than 3 failure domains. With flexible scale deployment enabled, you can scale up by adding any number of disks. For deployment with 3 failure domains, you will be able to scale up by adding disks in the multiple of 3. For scaling your storage in external mode, see Red Hat Ceph Storage documentation . Note You can use a maximum of nine storage devices per node. The high number of storage devices will lead to a higher recovery time during the loss of a node. This recommendation ensures that nodes stay below the cloud provider dynamic storage device attachment limits, and limits the recovery time after node failure with local storage devices. While scaling, you must ensure that there are enough CPU and Memory resources as per scaling requirement. Supported storage classes by default gp2-csi on AWS thin on VMware managed_premium on Microsoft Azure 1.1. Supported Deployments for Red Hat OpenShift Data Foundation User-provisioned infrastructure: Amazon Web Services (AWS) VMware Bare metal IBM Power IBM Z or IBM(R) LinuxONE Installer-provisioned infrastructure: Amazon Web Services (AWS) Microsoft Azure VMware Bare metal Chapter 2. Requirements for scaling storage Before you proceed to scale the storage nodes, refer to the following sections to understand the node requirements for your specific Red Hat OpenShift Data Foundation instance: Platform requirements Resource requirements Storage device requirements Dynamic storage devices Local storage devices Capacity planning Important Always ensure that you have plenty of storage capacity. If storage ever fills completely, it is not possible to add capacity or delete or migrate content away from the storage to free up space completely. Full storage is very difficult to recover. Capacity alerts are issued when cluster storage capacity reaches 75% (near-full) and 85% (full) of total capacity. Always address capacity warnings promptly, and review your storage regularly to ensure that you do not run out of storage space. 
If you do run out of storage space completely, contact Red Hat Customer Support . Chapter 3. Scaling storage capacity of AWS OpenShift Data Foundation cluster To scale the storage capacity of your configured Red Hat OpenShift Data Foundation worker nodes on an AWS cluster, you can increase the capacity by adding three disks at a time. Three disks are needed because OpenShift Data Foundation uses a replica count of 3 to maintain high availability. So the amount of storage consumed is three times the usable space. Note Usable space might vary when encryption is enabled or replica 2 pools are being used. 3.1. Scaling up storage capacity on a cluster To increase the storage capacity in a dynamically created storage cluster on a user-provisioned infrastructure, you can add storage capacity and performance to your configured Red Hat OpenShift Data Foundation worker nodes. Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. The disk should be of the same size and type as used during initial deployment. Procedure Log in to the OpenShift Web Console. Click Operators -> Installed Operators . Click OpenShift Data Foundation Operator. Click the Storage Systems tab. Click the Action Menu (...) on the far right of the storage system name to extend the options menu. Select Add Capacity from the options menu. Select the Storage Class . Choose the storage class that you want to use to provision new storage devices. Click Add . To check the status, navigate to Storage -> Data Foundation and verify that the Storage System in the Status card has a green tick. Verification steps Verify the Raw Capacity card. In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Block and File tab, check the Raw Capacity card. Note that the capacity increases based on your selections. Note The raw capacity does not take replication into account and shows the full capacity. Verify that the new object storage devices (OSDs) and their corresponding new Persistent Volume Claims (PVCs) are created. To view the state of the newly created OSDs: Click Workloads -> Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. To view the state of the PVCs: Click Storage -> Persistent Volume Claims from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For example: Example output: For each of the nodes identified in the previous step, do the following: Create a debug pod and open a chroot environment for the selected hosts. <node-name> Is the name of the node. Check for the crypt keyword beside the ocs-deviceset names. Important Cluster reduction is supported only with the Red Hat Support Team's assistance. 3.2. Scaling out storage capacity on an AWS cluster OpenShift Data Foundation is highly scalable.
It can be scaled out by adding new nodes with the required storage and enough hardware resources in terms of CPU and RAM. In practice, there is no limit on the number of nodes that can be added, but from a support perspective, 2000 nodes is the limit for OpenShift Data Foundation. Scaling out storage capacity can be broken down into two steps: Adding a new node Scaling up the storage capacity Note OpenShift Data Foundation does not support heterogeneous OSD/Disk sizes. 3.2.1. Adding a node You can add nodes to increase the storage capacity when existing worker nodes are already running at their maximum supported OSDs or when there are not enough resources to add new OSDs on the existing nodes. It is always recommended to add nodes in multiples of three, each of them in different failure domains. While it is recommended to add nodes in multiples of three, you still have the flexibility to add one node at a time in a flexible scaling deployment. Refer to the Knowledgebase article Verify if flexible scaling is enabled . Note OpenShift Data Foundation does not support heterogeneous disk sizes and types. The new nodes to be added should have disks of the same type and size as those used during the OpenShift Data Foundation deployment. 3.2.1.1. Adding a node to an installer-provisioned infrastructure Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Navigate to Compute -> Machine Sets . On the machine set where you want to add nodes, select Edit Machine Count . Add the number of nodes, and click Save . Click Compute -> Nodes and confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node. For the new node, click Action menu (...) -> Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . Note It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. In case of bare metal installer-provisioned infrastructure deployment, you must expand the cluster first. For instructions, see Expanding the cluster . Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads -> Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 3.2.1.2. Adding a node to a user-provisioned infrastructure Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Depending on the type of infrastructure, perform the following steps: Get a new machine with the required infrastructure. See Platform requirements . Create a new OpenShift Container Platform worker node using the new machine. Check for certificate signing requests (CSRs) that are in Pending state. Approve all the required CSRs for the new node. <Certificate_Name> Is the name of the CSR. Click Compute -> Nodes , confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From User interface For the new node, click Action Menu (...) -> Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From Command line interface Apply the OpenShift Data Foundation label to the new node. <new_node_name> Is the name of the new node.
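For example, assuming a hypothetical new node named worker-3.example.com, the label command would look like this:
oc label node worker-3.example.com cluster.ocs.openshift.io/openshift-storage=""
The empty value is expected; the presence of the label alone marks the node as eligible for OpenShift Data Foundation workloads.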
Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads -> Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 3.2.2. Scaling up storage capacity To scale up storage capacity, see Scaling up storage capacity on a cluster . Chapter 4. Scaling storage of bare metal OpenShift Data Foundation cluster To scale the storage capacity of your configured Red Hat OpenShift Data Foundation worker nodes on your bare metal cluster, you can increase the capacity by adding three disks at a time. Three disks are needed since OpenShift Data Foundation uses a replica count of 3 to maintain the high availability. So the amount of storage consumed is three times the usable space. Note Usable space might vary when encryption is enabled or replica 2 pools are being used. 4.1. Scaling up a cluster created using local storage devices To scale up an OpenShift Data Foundation cluster which was created using local storage devices, you need to add a new disk to the storage node. The new disks size must be of the same size as the disks used during the deployment because OpenShift Data Foundation does not support heterogeneous disks/OSDs. For deployments having three failure domains, you can scale up the storage by adding disks in the multiples of three, with the same number of disks coming from nodes in each of the failure domains. For example, if we scale by adding six disks, two disks are taken from nodes in each of the three failure domains. If the number of disks is not in multiples of three, it will only consume the disk to the maximum in the multiple of three while the remaining disks remain unused. For deployments having less than three failure domains, there is a flexibility to add any number of disks. Make sure to verify that flexible scaling is enabled. For information, refer to the Knowledgebase article Verify if flexible scaling is enabled . Note Flexible scaling features get enabled at the time of deployment and cannot be enabled or disabled later on. Prerequisites Administrative privilege to the OpenShift Container Platform Console. A running OpenShift Data Foundation Storage Cluster. Make sure that the disks to be used for scaling are attached to the storage node Make sure that LocalVolumeDiscovery and LocalVolumeSet objects are created. Procedure To add capacity, you can either use a storage class that you provisioned during the deployment or any other storage class that matches the filter. In the OpenShift Web Console, click Operators -> Installed Operators . Click OpenShift Data Foundation Operator. Click the Storage Systems tab. Click the Action menu (...) to the visible list to extend the options menu. Select Add Capacity from the options menu. Select the Storage Class for which you added disks or the new storage class depending on your requirement. Available Capacity displayed is based on the local disks available in storage class. Click Add . To check the status, navigate to Storage -> Data Foundation and verify that the Storage System in the Status card has a green tick. Verification steps Verify the Raw Capacity card. In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Block and File tab, check the Raw Capacity card. 
Note that the capacity increases based on your selections. Note The raw capacity does not take replication into account and shows the full capacity. Verify that the new OSDs and their corresponding new Persistent Volume Claims (PVCs) are created. To view the state of the newly created OSDs: Click Workloads -> Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. To view the state of the PVCs: Click Storage -> Persistent Volume Claims from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For example: Example output: For each of the nodes identified in the previous step, do the following: Create a debug pod and open a chroot environment for the selected host(s). <node-name> Is the name of the node. Check for the crypt keyword beside the ocs-deviceset names. Important Cluster reduction is supported only with the Red Hat Support Team's assistance. 4.2. Scaling out storage capacity on a bare metal cluster OpenShift Data Foundation is highly scalable. It can be scaled out by adding new nodes with the required storage and enough hardware resources in terms of CPU and RAM. There is no limit on the number of nodes which can be added. However, from the technical support perspective, 2000 nodes is the limit for OpenShift Data Foundation. Scaling out storage capacity can be broken down into two steps: Adding a new node Scaling up the storage capacity Note OpenShift Data Foundation does not support heterogeneous OSD/Disk sizes. 4.2.1. Adding a node You can add nodes to increase the storage capacity when existing worker nodes are already running at their maximum supported OSDs or when there are not enough resources to add new OSDs on the existing nodes. It is always recommended to add nodes in multiples of three, each of them in different failure domains. While it is recommended to add nodes in multiples of three, you still have the flexibility to add one node at a time in a flexible scaling deployment. Refer to the Knowledgebase article Verify if flexible scaling is enabled . Note OpenShift Data Foundation does not support heterogeneous disk sizes and types. The new nodes to be added should have disks of the same type and size as those used during the OpenShift Data Foundation deployment. 4.2.1.1. Adding a node to an installer-provisioned infrastructure Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Navigate to Compute -> Machine Sets . On the machine set where you want to add nodes, select Edit Machine Count . Add the number of nodes, and click Save . Click Compute -> Nodes and confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node. For the new node, click Action menu (...) -> Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . Note It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them.
In case of bare metal installer-provisioned infrastructure deployment, you must expand the cluster first. For instructions, see Expanding the cluster . Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads -> Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 4.2.1.2. Adding a node using a local storage device You can add nodes to increase the storage capacity when existing worker nodes are already running at their maximum supported OSDs or when there are not enough resources to add new OSDs on the existing nodes. Add nodes in the multiple of 3, each of them in different failure domains. Though it is recommended to add nodes in multiples of 3 nodes, you have the flexibility to add one node at a time in flexible scaling deployment. See Knowledgebase article Verify if flexible scaling is enabled Note OpenShift Data Foundation does not support heterogeneous disk size and types. The new nodes to be added should have the disk of the same type and size which was used during initial OpenShift Data Foundation deployment. Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Depending on the type of infrastructure, perform the following steps: Get a new machine with the required infrastructure. See Platform requirements . Create a new OpenShift Container Platform worker node using the new machine. Check for certificate signing requests (CSRs) that are in Pending state. Approve all the required CSRs for the new node. <Certificate_Name> Is the name of the CSR. Click Compute -> Nodes , confirm if the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From User interface For the new node, click Action Menu (...) -> Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From Command line interface Apply the OpenShift Data Foundation label to the new node. <new_node_name> Is the name of the new node. Click Operators -> Installed Operators from the OpenShift Web Console. From the Project drop-down list, make sure to select the project where the Local Storage Operator is installed. Click Local Storage . Click the Local Volume Discovery tab. Beside the LocalVolumeDiscovery , click Action menu (...) -> Edit Local Volume Discovery . In the YAML, add the hostname of the new node in the values field under the node selector. Click Save . Click the Local Volume Sets tab. Beside the LocalVolumeSet , click Action menu (...) -> Edit Local Volume Set . In the YAML, add the hostname of the new node in the values field under the node selector . Click Save . Note It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads -> Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 4.2.2. Scaling up storage capacity To scale up storage capacity, see Scaling up storage by adding capacity . Chapter 5. Scaling storage of VMware OpenShift Data Foundation cluster 5.1. 
Scaling up storage on a VMware cluster To increase the storage capacity in a dynamically created storage cluster on a VMware user-provisioned infrastructure, you can add storage capacity and performance to your configured Red Hat OpenShift Data Foundation worker nodes. Prerequisites Administrative privilege to the OpenShift Container Platform Console. A running OpenShift Data Foundation Storage Cluster. Make sure that the disk is of the same size and type as the disk used during initial deployment. Procedure Log in to the OpenShift Web Console. Click Operators -> Installed Operators . Click OpenShift Data Foundation Operator. Click the Storage Systems tab. Click the Action Menu (...) on the far right of the storage system name to extend the options menu. Select Add Capacity from the options menu. Select the Storage Class . Choose the storage class which you wish to use to provision new storage devices. Click Add . To check the status, navigate to Storage -> Data Foundation and verify that Storage System in the Status card has a green tick. Verification steps Verify the Raw Capacity card. In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Block and File tab, check the Raw Capacity card. Note that the capacity increases based on your selections. Note The raw capacity does not take replication into account and shows the full capacity. Verify that the new OSDs and their corresponding new Persistent Volume Claims (PVCs) are created. To view the state of the newly created OSDs: Click Workloads -> Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. To view the state of the PVCs: Click Storage -> Persistent Volume Claims from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For example: Example output: For each of the nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the selected hosts. <node-name> Is the name of the node. Check for the crypt keyword beside the ocs-deviceset names. Important Cluster reduction is supported only with the Red Hat Support Team's assistance. 5.2. Scaling up a cluster created using local storage devices To scale up an OpenShift Data Foundation cluster which was created using local storage devices, you need to add a new disk to the storage node. The new disks size must be of the same size as the disks used during the deployment because OpenShift Data Foundation does not support heterogeneous disks/OSDs. For deployments having three failure domains, you can scale up the storage by adding disks in the multiples of three, with the same number of disks coming from nodes in each of the failure domains. For example, if we scale by adding six disks, two disks are taken from nodes in each of the three failure domains. If the number of disks is not in multiples of three, it will only consume the disk to the maximum in the multiple of three while the remaining disks remain unused. 
For deployments having less than three failure domains, there is a flexibility to add any number of disks. Make sure to verify that flexible scaling is enabled. For information, refer to the Knowledgebase article Verify if flexible scaling is enabled . Note Flexible scaling features get enabled at the time of deployment and cannot be enabled or disabled later on. Prerequisites Administrative privilege to the OpenShift Container Platform Console. A running OpenShift Data Foundation Storage Cluster. Make sure that the disks to be used for scaling are attached to the storage node Make sure that LocalVolumeDiscovery and LocalVolumeSet objects are created. Procedure To add capacity, you can either use a storage class that you provisioned during the deployment or any other storage class that matches the filter. In the OpenShift Web Console, click Operators -> Installed Operators . Click OpenShift Data Foundation Operator. Click the Storage Systems tab. Click the Action menu (...) to the visible list to extend the options menu. Select Add Capacity from the options menu. Select the Storage Class for which you added disks or the new storage class depending on your requirement. Available Capacity displayed is based on the local disks available in storage class. Click Add . To check the status, navigate to Storage -> Data Foundation and verify that the Storage System in the Status card has a green tick. Verification steps Verify the Raw Capacity card. In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Block and File tab, check the Raw Capacity card. Note that the capacity increases based on your selections. Note The raw capacity does not take replication into account and shows the full capacity. Verify that the new OSDs and their corresponding new Persistent Volume Claims (PVCs) are created. To view the state of the newly created OSDs: Click Workloads -> Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. To view the state of the PVCs: Click Storage -> Persistent Volume Claims from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For example: Example output: For each of the nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the selected host(s). <node-name> Is the name of the node. Check for the crypt keyword beside the ocs-deviceset names. Important Cluster reduction is supported only with the Red Hat Support Team's assistance. 5.3. Scaling out storage capacity on a VMware cluster 5.3.1. Adding a node to an installer-provisioned infrastructure Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Navigate to Compute -> Machine Sets . On the machine set where you want to add nodes, select Edit Machine Count . Add the amount of nodes, and click Save . 
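If you prefer to make the same change from the command line, you can scale the machine set directly. This is a sketch rather than part of the official procedure; <machine-set-name> and the replica count are placeholders to replace with values from your cluster:
oc scale machineset <machine-set-name> -n openshift-machine-api --replicas=4
Running oc get machineset <machine-set-name> -n openshift-machine-api afterwards confirms the updated replica count once the change is applied.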
Click Compute -> Nodes and confirm if the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node. For the new node, click Action menu (...) -> Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . Note It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. In case of bare metal installer-provisioned infrastructure deployment, you must expand the cluster first. For instructions, see Expanding the cluster . Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads -> Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 5.3.2. Adding a node to an user-provisioned infrastructure Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Depending on the type of infrastructure, perform the following steps: Get a new machine with the required infrastructure. See Platform requirements . Create a new OpenShift Container Platform worker node using the new machine. Check for certificate signing requests (CSRs) that are in Pending state. Approve all the required CSRs for the new node. <Certificate_Name> Is the name of the CSR. Click Compute -> Nodes , confirm if the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From User interface For the new node, click Action Menu (...) -> Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From Command line interface Apply the OpenShift Data Foundation label to the new node. <new_node_name> Is the name of the new node. Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads -> Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 5.3.3. Adding a node using a local storage device You can add nodes to increase the storage capacity when existing worker nodes are already running at their maximum supported OSDs or when there are not enough resources to add new OSDs on the existing nodes. Add nodes in the multiple of 3, each of them in different failure domains. Though it is recommended to add nodes in multiples of 3 nodes, you have the flexibility to add one node at a time in flexible scaling deployment. See Knowledgebase article Verify if flexible scaling is enabled Note OpenShift Data Foundation does not support heterogeneous disk size and types. The new nodes to be added should have the disk of the same type and size which was used during initial OpenShift Data Foundation deployment. Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Depending on the type of infrastructure, perform the following steps: Get a new machine with the required infrastructure. See Platform requirements . Create a new OpenShift Container Platform worker node using the new machine. Check for certificate signing requests (CSRs) that are in Pending state. Approve all the required CSRs for the new node. <Certificate_Name> Is the name of the CSR. 
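If several CSRs are pending for the new node, approving them one at a time can be tedious. As an optional shortcut, not part of the official procedure, you can approve all pending CSRs in one pass:
oc get csr | grep Pending | awk '{print $1}' | xargs oc adm certificate approve
Run oc get csr again afterwards to confirm that no requests remain in Pending state; a second round of CSRs often appears shortly after the first is approved and must be approved as well.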
Click Compute -> Nodes , confirm if the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From User interface For the new node, click Action Menu (...) -> Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From Command line interface Apply the OpenShift Data Foundation label to the new node. <new_node_name> Is the name of the new node. Click Operators -> Installed Operators from the OpenShift Web Console. From the Project drop-down list, make sure to select the project where the Local Storage Operator is installed. Click Local Storage . Click the Local Volume Discovery tab. Beside the LocalVolumeDiscovery , click Action menu (...) -> Edit Local Volume Discovery . In the YAML, add the hostname of the new node in the values field under the node selector. Click Save . Click the Local Volume Sets tab. Beside the LocalVolumeSet , click Action menu (...) -> Edit Local Volume Set . In the YAML, add the hostname of the new node in the values field under the node selector . Click Save . Note It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads -> Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 5.3.4. Scaling up storage capacity To scale up storage capacity: For dynamic storage devices, see Scaling up storage capacity on a cluster . For local storage devices, see Scaling up a cluster created using local storage devices Chapter 6. Scaling storage of Microsoft Azure OpenShift Data Foundation cluster To scale the storage capacity of your configured Red Hat OpenShift Data Foundation worker nodes on Microsoft Azure cluster, you can increase the capacity by adding three disks at a time. Three disks are needed since OpenShift Data Foundation uses a replica count of 3 to maintain the high availability. So the amount of storage consumed is three times the usable space. Note Usable space might vary when encryption is enabled or replica 2 pools are being used. 6.1. Scaling up storage capacity on a cluster To increase the storage capacity in a dynamically created storage cluster on an user-provisioned infrastructure, you can add storage capacity and performance to your configured Red Hat OpenShift Data Foundation worker nodes. Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. The disk should be of the same size and type as used during initial deployment. Procedure Log in to the OpenShift Web Console. Click Operators -> Installed Operators . Click OpenShift Data Foundation Operator. Click the Storage Systems tab. Click the Action Menu (...) on the far right of the storage system name to extend the options menu. Select Add Capacity from the options menu. Select the Storage Class . Choose the storage class which you wish to use to provision new storage devices. Click Add . To check the status, navigate to Storage -> Data Foundation and verify that the Storage System in the Status card has a green tick. Verification steps Verify the Raw Capacity card. In the OpenShift Web Console, click Storage -> Data Foundation . 
In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Block and File tab, check the Raw Capacity card. Note that the capacity increases based on your selections. Note The raw capacity does not take replication into account and shows the full capacity. Verify that the new object storage devices (OSDs) and their corresponding new Persistent Volume Claims (PVCs) are created. To view the state of the newly created OSDs: Click Workloads -> Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. To view the state of the PVCs: Click Storage -> Persistent Volume Claims from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For example: Example output: For each of the nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the selected hosts. <node-name> Is the name of the node. Check for the crypt keyword beside the ocs-deviceset names. Important Cluster reduction is supported only with the Red Hat Support Team's assistance. 6.2. Scaling out storage capacity on a Microsoft Azure cluster OpenShift Data Foundation is highly scalable. It can be scaled out by adding new nodes with required storage and enough hardware resources in terms of CPU and RAM. Practically there is no limit on the number of nodes which can be added but from the support perspective 2000 nodes is the limit for OpenShift Data Foundation. Scaling out storage capacity can be broken down into two steps Adding new node Scaling up the storage capacity Note OpenShift Data Foundation does not support heterogeneous OSD/Disk sizes. 6.2.1. Adding a node to an installer-provisioned infrastructure Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Navigate to Compute -> Machine Sets . On the machine set where you want to add nodes, select Edit Machine Count . Add the amount of nodes, and click Save . Click Compute -> Nodes and confirm if the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node. For the new node, click Action menu (...) -> Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . Note It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. In case of bare metal installer-provisioned infrastructure deployment, you must expand the cluster first. For instructions, see Expanding the cluster . Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads -> Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 6.2.2. Scaling up storage capacity To scale up storage capacity, see Scaling up storage capacity on a cluster . Chapter 7. 
Scaling storage capacity of GCP OpenShift Data Foundation cluster To scale the storage capacity of your configured Red Hat OpenShift Data Foundation worker nodes on GCP cluster, you can increase the capacity by adding three disks at a time. Three disks are needed since OpenShift Data Foundation uses a replica count of 3 to maintain the high availability. So the amount of storage consumed is three times the usable space. Note Usable space might vary when encryption is enabled or replica 2 pools are being used. 7.1. Scaling up storage capacity on a cluster To increase the storage capacity in a dynamically created storage cluster on an user-provisioned infrastructure, you can add storage capacity and performance to your configured Red Hat OpenShift Data Foundation worker nodes. Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. The disk should be of the same size and type as used during initial deployment. Procedure Log in to the OpenShift Web Console. Click Operators -> Installed Operators . Click OpenShift Data Foundation Operator. Click the Storage Systems tab. Click the Action Menu (...) on the far right of the storage system name to extend the options menu. Select Add Capacity from the options menu. Select the Storage Class . Choose the storage class which you wish to use to provision new storage devices. Click Add . To check the status, navigate to Storage -> Data Foundation and verify that the Storage System in the Status card has a green tick. Verification steps Verify the Raw Capacity card. In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Block and File tab, check the Raw Capacity card. Note that the capacity increases based on your selections. Note The raw capacity does not take replication into account and shows the full capacity. Verify that the new object storage devices (OSDs) and their corresponding new Persistent Volume Claims (PVCs) are created. To view the state of the newly created OSDs: Click Workloads -> Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. To view the state of the PVCs: Click Storage -> Persistent Volume Claims from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For example: Example output: For each of the nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the selected hosts. <node-name> Is the name of the node. Check for the crypt keyword beside the ocs-deviceset names. Important Cluster reduction is supported only with the Red Hat Support Team's assistance. 7.2. Scaling out storage capacity on a GCP cluster OpenShift Data Foundation is highly scalable. It can be scaled out by adding new nodes with required storage and enough hardware resources in terms of CPU and RAM. 
Practically there is no limit on the number of nodes which can be added but from the support perspective 2000 nodes is the limit for OpenShift Data Foundation. Scaling out storage capacity can be broken down into two steps Adding new node Scaling up the storage capacity Note OpenShift Data Foundation does not support heterogeneous OSD/Disk sizes. 7.2.1. Adding a node You can add nodes to increase the storage capacity when existing worker nodes are already running at their maximum supported OSDs or there are not enough resources to add new OSDs on the existing nodes. It is always recommended to add nodes in the multiple of three, each of them in different failure domains. While it is recommended to add nodes in the multiple of three, you still have the flexibility to add one node at a time in the flexible scaling deployment. Refer to the Knowledgebase article Verify if flexible scaling is enabled . Note OpenShift Data Foundation does not support heterogeneous disk size and types. The new nodes to be added should have the disk of the same type and size which was used during OpenShift Data Foundation deployment. 7.2.1.1. Adding a node to an installer-provisioned infrastructure Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Navigate to Compute -> Machine Sets . On the machine set where you want to add nodes, select Edit Machine Count . Add the amount of nodes, and click Save . Click Compute -> Nodes and confirm if the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node. For the new node, click Action menu (...) -> Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . Note It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. In case of bare metal installer-provisioned infrastructure deployment, you must expand the cluster first. For instructions, see Expanding the cluster . Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads -> Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 7.2.2. Scaling up storage capacity To scale up storage capacity, see Scaling up storage capacity on a cluster . Chapter 8. Scaling storage of IBM Z or IBM LinuxONE OpenShift Data Foundation cluster 8.1. Scaling up storage by adding capacity to your OpenShift Data Foundation nodes on IBM Z or IBM LinuxONE infrastructure You can add storage capacity and performance to your configured Red Hat OpenShift Data Foundation worker nodes. Note Flexible scaling features get enabled at the time of deployment and can not be enabled or disabled later on. Prerequisites A running OpenShift Data Foundation Platform. Administrative privileges on the OpenShift Web Console. To scale using a storage class other than the one provisioned during deployment, first define an additional storage class. See Creating storage classes and pools for details. Procedure Add additional hardware resources with zFCP disks. List all the disks. Example output: A SCSI disk is represented as a zfcp-lun with the structure <device-id>:<wwpn>:<lun-id> in the ID section. The first disk is used for the operating system. The device id for the new disk can be the same. Append a new SCSI disk. 
Note The device ID for the new disk must be the same as the disk to be replaced. The new disk is identified with its WWPN and LUN ID. List all the FCP devices to verify the new disk is configured. Navigate to the OpenShift Web Console. Click Operators on the left navigation bar. Select Installed Operators . In the window, click OpenShift Data Foundation Operator. In the top navigation bar, scroll right and click Storage Systems tab. Click the Action menu (...) to the visible list to extend the options menu. Select Add Capacity from the options menu. The Raw Capacity field shows the size set during storage class creation. The total amount of storage consumed is three times this amount, because OpenShift Data Foundation uses a replica count of 3. Click Add . To check the status, navigate to Storage -> Data Foundation and verify that Storage System in the Status card has a green tick. Verification steps Verify the Raw Capacity card. In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Block and File tab, check the Raw Capacity card. Note that the capacity increases based on your selections. Note The raw capacity does not take replication into account and shows the full capacity. Verify that the new OSDs and their corresponding new Persistent Volume Claims (PVCs) are created. To view the state of the newly created OSDs: Click Workloads -> Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. To view the state of the PVCs: Click Storage -> Persistent Volume Claims from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For example: Example output: For each of the nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the selected host(s). <node-name> Is the name of the node. Check for the crypt keyword beside the ocs-deviceset names. Important Cluster reduction is supported only with the Red Hat Support Team's assistance. 8.2. Scaling out storage capacity on a IBM Z or IBM LinuxONE cluster 8.2.1. Adding a node using a local storage device You can add nodes to increase the storage capacity when existing worker nodes are already running at their maximum supported OSDs or when there are not enough resources to add new OSDs on the existing nodes. Add nodes in the multiple of 3, each of them in different failure domains. Though it is recommended to add nodes in multiples of 3 nodes, you have the flexibility to add one node at a time in flexible scaling deployment. See Knowledgebase article Verify if flexible scaling is enabled Note OpenShift Data Foundation does not support heterogeneous disk size and types. The new nodes to be added should have the disk of the same type and size which was used during initial OpenShift Data Foundation deployment. Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. 
Procedure Depending on the type of infrastructure, perform the following steps: Get a new machine with the required infrastructure. See Platform requirements . Create a new OpenShift Container Platform worker node using the new machine. Check for certificate signing requests (CSRs) that are in Pending state. Approve all the required CSRs for the new node. <Certificate_Name> Is the name of the CSR. Click Compute -> Nodes , confirm if the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From User interface For the new node, click Action Menu (...) -> Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From Command line interface Apply the OpenShift Data Foundation label to the new node. <new_node_name> Is the name of the new node. Click Operators -> Installed Operators from the OpenShift Web Console. From the Project drop-down list, make sure to select the project where the Local Storage Operator is installed. Click Local Storage . Click the Local Volume Discovery tab. Beside the LocalVolumeDiscovery , click Action menu (...) -> Edit Local Volume Discovery . In the YAML, add the hostname of the new node in the values field under the node selector. Click Save . Click the Local Volume Sets tab. Beside the LocalVolumeSet , click Action menu (...) -> Edit Local Volume Set . In the YAML, add the hostname of the new node in the values field under the node selector . Click Save . Note It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads -> Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 8.2.2. Scaling up storage capacity To scale up storage capacity, see Scaling up storage capacity on a cluster . Chapter 9. Scaling storage of IBM Power OpenShift Data Foundation cluster To scale the storage capacity of your configured Red Hat OpenShift Data Foundation worker nodes on IBM Power cluster, you can increase the capacity by adding three disks at a time. Three disks are needed since OpenShift Data Foundation uses a replica count of 3 to maintain the high availability. So the amount of storage consumed is three times the usable space. Note Usable space might vary when encryption is enabled or replica 2 pools are being used. 9.1. Scaling up storage by adding capacity to your OpenShift Data Foundation nodes on IBM Power infrastructure using local storage devices In order to scale up an OpenShift Data Foundation cluster which was created using local storage devices, a new disk needs to be added to the storage node. It is recommended to have the new disks of the same size as used earlier during the deployment as OpenShift Data Foundation does not support heterogeneous disks/OSD's. You can add storage capacity (additional storage devices) to your configured local storage based OpenShift Data Foundation worker nodes on IBM Power infrastructures. Note Flexible scaling features get enabled at the time of deployment and can not be enabled or disabled later on. Prerequisites You must be logged into the OpenShift Container Platform cluster. You must have installed the local storage operator. 
Use the following procedure: Installing Local Storage Operator on IBM Power You must have three OpenShift Container Platform worker nodes with the same storage type and size attached to each node (for example, 0.5TB SSD) as the original OpenShift Data Foundation StorageCluster was created with. Procedure To add storage capacity to OpenShift Container Platform nodes with OpenShift Data Foundation installed, you need to Find the available devices that you want to add, that is, a minimum of one device per worker node. You can follow the procedure for finding available storage devices in the respective deployment guide. Note Make sure you perform this process for all the existing nodes (minimum of 3) for which you want to add storage. Add the additional disks to the LocalVolume custom resource (CR). Example output: Make sure to save the changes after editing the CR. Example output: You can see in this CR that new devices are added. sdx Display the newly created Persistent Volumes (PVs) with the storageclass name used in the localVolume CR. Example output: Navigate to the OpenShift Web Console. Click Operators on the left navigation bar. Select Installed Operators . In the window, click OpenShift Data Foundation Operator. In the top navigation bar, scroll right and click Storage System tab. Click the Action menu (...) to the visible list to extend the options menu. Select Add Capacity from the options menu. From this dialog box, set the Storage Class name to the name used in the localVolume CR. Available Capacity displayed is based on the local disks available in storage class. Click Add . To check the status, navigate to Storage -> Data Foundation and verify that the Storage System in the Status card has a green tick. Verification steps Verify the available Capacity. In the OpenShift Web Console, click Storage -> Data Foundation . Click the Storage Systems tab and then click on ocs-storagecluster-storagesystem . Navigate to Overview -> Block and File tab, then check the Raw Capacity card. Note that the capacity increases based on your selections. Note The raw capacity does not take replication into account and shows the full capacity. Verify that the new OSDs and their corresponding new Persistent Volume Claims (PVCs) are created. To view the state of the newly created OSDs: Click Workloads -> Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. To view the state of the PVCs: Click Storage -> Persistent Volume Claims from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For example: Example output: For each of the nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the selected host(s). <node-name> Is the name of the node. Check for the crypt keyword beside the ocs-deviceset names. Important Cluster reduction is supported only with the Red Hat Support Team's assistance. 9.2. Scaling out storage capacity on a IBM Power cluster OpenShift Data Foundation is highly scalable. 
It can be scaled out by adding new nodes with required storage and enough hardware resources in terms of CPU and RAM. Practically there is no limit on the number of nodes which can be added but from the support perspective 2000 nodes is the limit for OpenShift Data Foundation. Scaling out storage capacity can be broken down into two steps: Adding new node Scaling up the storage capacity Note OpenShift Data Foundation does not support heterogeneous OSD/Disk sizes. 9.2.1. Adding a node using a local storage device on IBM Power You can add nodes to increase the storage capacity when existing worker nodes are already running at their maximum supported OSDs or when there are not enough resources to add new OSDs on the existing nodes. Add nodes in the multiple of 3, each of them in different failure domains. Though it is recommended to add nodes in multiples of 3 nodes, you have the flexibility to add one node at a time in flexible scaling deployment. See Knowledgebase article Verify if flexible scaling is enabled Note OpenShift Data Foundation does not support heterogeneous disk size and types. The new nodes to be added should have the disk of the same type and size which was used during initial OpenShift Data Foundation deployment. Prerequisites You must be logged into the OpenShift Container Platform cluster. You must have three OpenShift Container Platform worker nodes with the same storage type and size attached to each node (for example, 2TB SSD drive) as the original OpenShift Data Foundation StorageCluster was created with. Procedure Get a new IBM Power machine with the required infrastructure. See Platform requirements . Create a new OpenShift Container Platform node using the new IBM Power machine. Check for certificate signing requests (CSRs) that are in Pending state. Approve all the required CSRs for the new node. <Certificate_Name> Is the name of the CSR. Click Compute -> Nodes , confirm if the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From User interface For the new node, click Action Menu (...) -> Edit Labels . Add cluster.ocs.openshift.io/openshift-storage and click Save . From Command line interface Apply the OpenShift Data Foundation label to the new node. <new_node_name> Is the name of the new node. Click Operators -> Installed Operators from the OpenShift Web Console. From the Project drop-down list, make sure to select the project where the Local Storage Operator is installed. Click Local Storage . Click the Local Volume tab. Beside the LocalVolume , click Action menu (...) -> Edit Local Volume . In the YAML, add the hostname of the new node in the values field under the node selector . Figure 9.1. YAML showing the addition of new hostnames Click Save . Note It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads -> Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 9.2.2. Scaling up storage capacity To scale up storage capacity, see Scaling up storage capacity on a cluster . | [
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/ <OSD-pod-name>",
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm",
"NODE compute-1",
"oc debug node/ <node-name>",
"chroot /host",
"lsblk",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get csr",
"oc adm certificate approve <Certificate_Name>",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/ <OSD-pod-name>",
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm",
"NODE compute-1",
"oc debug node/ <node-name>",
"chroot /host",
"lsblk",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get csr",
"oc adm certificate approve <Certificate_Name>",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/ <OSD-pod-name>",
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm",
"NODE compute-1",
"oc debug node/ <node-name>",
"chroot /host",
"lsblk",
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/ <OSD-pod-name>",
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm",
"NODE compute-1",
"oc debug node/ <node-name>",
"chroot /host",
"lsblk",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get csr",
"oc adm certificate approve <Certificate_Name>",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get csr",
"oc adm certificate approve <Certificate_Name>",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/ <OSD-pod-name>",
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm",
"NODE compute-1",
"oc debug node/ <node-name>",
"chroot /host",
"lsblk",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/ <OSD-pod-name>",
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm",
"NODE compute-1",
"oc debug node/ <node-name>",
"chroot /host",
"lsblk",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"lszdev",
"TYPE ID ON PERS NAMES zfcp-host 0.0.8204 yes yes zfcp-lun 0.0.8204:0x102107630b1b5060:0x4001402900000000 yes no sda sg0 zfcp-lun 0.0.8204:0x500407630c0b50a4:0x3002b03000000000 yes yes sdb sg1 qeth 0.0.bdd0:0.0.bdd1:0.0.bdd2 yes no encbdd0 generic-ccw 0.0.0009 yes no",
"chzdev -e 0.0.8204:0x400506630b1b50a4:0x3001301a00000000",
"lszdev zfcp-lun TYPE ID ON PERS NAMES zfcp-lun 0.0.8204:0x102107630b1b5060:0x4001402900000000 yes no sda sg0 zfcp-lun 0.0.8204:0x500507630b1b50a4:0x4001302a00000000 yes yes sdb sg1 zfcp-lun 0.0.8204:0x400506630b1b50a4:0x3001301a00000000 yes yes sdc sg2",
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/ <OSD-pod-name>",
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm",
"NODE compute-1",
"oc debug node/ <node-name>",
"chroot /host",
"lsblk",
"oc get csr",
"oc adm certificate approve <Certificate_Name>",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc edit -n openshift-local-storage localvolume localblock",
"spec: logLevel: Normal managementState: Managed nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - worker-0 - worker-1 - worker-2 storageClassDevices: - devicePaths: - /dev/sda - /dev/sdx # newly added device storageClassName: localblock volumeMode: Block",
"localvolume.local.storage.openshift.io/localblock edited",
"oc get pv | grep localblock | grep Available",
"local-pv-a04ffd8 500Gi RWO Delete Available localblock 24s local-pv-a0ca996b 500Gi RWO Delete Available localblock 23s local-pv-c171754a 500Gi RWO Delete Available localblock 23s",
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/ <OSD-pod-name>",
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm",
"NODE compute-1",
"oc debug node/ <node-name>",
"chroot /host",
"lsblk",
"oc get csr",
"oc adm certificate approve <Certificate_Name>",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html-single/scaling_storage/proc_scaling-up-storage-by-adding-capacity-to-your-openshift-data-foundation-nodes-on-ibmz-infrastructure_ibmz |
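As referenced in the node addition procedure (section 9.2.1), the command-line portion of that workflow can be summarized as the sketch below. It assumes a hypothetical new node named worker-3 and a pending CSR named csr-abc12; substitute the names reported by your own cluster, and repeat the approval for every pending CSR.

# List certificate signing requests and identify the ones in Pending state
oc get csr

# Approve each pending CSR for the new node
oc adm certificate approve csr-abc12

# Confirm that the new node has reached the Ready state
oc get node worker-3

# Apply the OpenShift Data Foundation label to the new node
oc label node worker-3 cluster.ocs.openshift.io/openshift-storage=""

# Verify that the labeled node appears in the output
oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1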
Chapter 7. roxctl CLI command reference | Chapter 7. roxctl CLI command reference 7.1. roxctl Display the available commands and optional parameters for roxctl CLI. You must have an account with administrator privileges to use these commands. Usage USD roxctl [command] [flags] Table 7.1. Available commands Command Description central Commands related to the Central service. cluster Commands related to a cluster. collector Commands related to the Collector service. completion Generate shell completion scripts. declarative-config Manage declarative configuration. deployment Commands related to deployments. helm Commands related to Red Hat Advanced Cluster Security for Kubernetes (RHACS) Helm Charts. image Commands that you can run on a specific image. netpol Commands related to network policies. scanner Commands related to the Scanner service. sensor Deploy RHACS services in secured clusters. version Display the current roxctl version. 7.1.1. roxctl command options The roxctl command supports the following options: Option Description --ca string Specify a custom CA certificate file path for secure connections. Alternatively, you can specify the file path by using the ROX_CA_CERT_FILE environment variable. --direct-grpc Set --direct-grpc for improved connection performance. Alternatively, by setting the ROX_DIRECT_GRPC_CLIENT environment variable to true , you can enable direct gRPC . The default value is false . -e , --endpoint string Set the endpoint for the service to contact. Alternatively, you can set the endpoint by using the ROX_ENDPOINT environment variable. The default value is localhost:8443 . --force-http1 Force the use of HTTP/1 for all connections. Alternatively, by setting the ROX_CLIENT_FORCE_HTTP1 environment variable to true , you can force the use of HTTP/1. The default value is false . --insecure Enable insecure connection options. Alternatively, by setting the ROX_INSECURE_CLIENT environment variable to true , you can enable insecure connection options. The default value is false . --insecure-skip-tls-verify Skip the TLS certificate validation. Alternatively, by setting the ROX_INSECURE_CLIENT_SKIP_TLS_VERIFY environment variable to true , you can skip the TLS certificate validation. The default value is false . --no-color Disable the color output. Alternatively, by setting the ROX_NO_COLOR environment variable to true , you can disable the color output. The default value is false . -p , --password string Specify the password for basic authentication. Alternatively, you can set the password by using the ROX_ADMIN_PASSWORD environment variable. --plaintext Use an unencrypted connection. Alternatively, by setting the ROX_PLAINTEXT environment variable to true , you can enable an unencrypted connection. The default value is false . -s , --server-name string Set the TLS server name to use for SNI. Alternatively, you can set the server name by using the ROX_SERVER_NAME environment variable. --token-file string Use the API token provided in the specified file for authentication. Alternatively, you can set the token by using the ROX_API_TOKEN environment variable. 7.2. roxctl central Commands related to the Central service. Usage USD roxctl central [command] [flags] Table 7.2. Available commands Command Description backup Create a backup of the Red Hat Advanced Cluster Security for Kubernetes (RHACS) database and the certificates. cert Download the certificate chain for the Central service. 
crs Generate a cluster registration secret (CRS) that allows communication between Central and secured clusters for the initial setup, to retrieve a list of CRSes, or to revoke a CRS. db Control the database operations. debug Debug the Central service. generate Generate the required YAML configuration files containing the orchestrator objects for the deployment of Central. init-bundles Initialize bundles for Central. login Log in to the Central instance to obtain a token. userpki Manage the user certificate authorization providers. whoami Display information about the current user and their authentication method. 7.2.1. roxctl central command options inherited from the parent command The roxctl central command supports the following options inherited from the parent roxctl command: Option Description --ca string Specify a custom CA certificate file path for secure connections. Alternatively, you can specify the file path by using the ROX_CA_CERT_FILE environment variable. --direct-grpc Set --direct-grpc for improved connection performance. Alternatively, by setting the ROX_DIRECT_GRPC_CLIENT environment variable to true , you can enable direct gRPC . The default value is false . -e , --endpoint string Set the endpoint for the service to contact. Alternatively, you can set the endpoint by using the ROX_ENDPOINT environment variable. The default value is localhost:8443 . --force-http1 Force the use of HTTP/1 for all connections. Alternatively, by setting the ROX_CLIENT_FORCE_HTTP1 environment variable to true , you can force the use of HTTP/1. The default value is false . --insecure Enable insecure connection options. Alternatively, by setting the ROX_INSECURE_CLIENT environment variable to true , you can enable insecure connection options. The default value is false . --insecure-skip-tls-verify Skip the TLS certificate validation. Alternatively, by setting the ROX_INSECURE_CLIENT_SKIP_TLS_VERIFY environment variable to true , you can skip the TLS certificate validation. The default value is false . --no-color Disable the color output. Alternatively, by setting the ROX_NO_COLOR environment variable to true , you can disable the color output. The default value is false . -p , --password string Specify the password for basic authentication. Alternatively, you can set the password by using the ROX_ADMIN_PASSWORD environment variable. --plaintext Use an unencrypted connection. Alternatively, by setting the ROX_PLAINTEXT environment variable to true , you can enable an unencrypted connection. The default value is false . -s , --server-name string Set the TLS server name to use for SNI. Alternatively, you can set the server name by using the ROX_SERVER_NAME environment variable. --token-file string Use the API token provided in the specified file for authentication. Alternatively, you can set the token by using the ROX_API_TOKEN environment variable. Note These options are applicable to all the sub-commands of the roxctl central command. 7.2.2. roxctl central backup Create a backup of the RHACS database and certificates. Usage USD roxctl central backup [flags] Table 7.3. Options Option Description --certs-only Specify to only back up the certificates. When using an external database, this option is used to generate a backup bundle with certificates. The default value is false . --output string Specify where you want to save the backup. The behavior depends on the specified path: If the path is a file path, the backup is written to the file and overwrites it if it already exists. 
The directory must exist. If the path is a directory, the backup is saved in this directory under the file name that the server specifies. If this argument is omitted, the backup is saved in the current working directory under the file name that the server specifies. -t , --timeout duration Specify the timeout for API requests. It represents the maximum duration of a request. The default value is 1h0m0s . 7.2.3. roxctl central cert Download the certificate chain for the Central service. Usage USD roxctl central cert [flags] Table 7.4. Options Option Description --output string Specify the file name to which you want to save the PEM certificate. Use - to produce output to the standard output stream ( stdout ). The default value is - . --retry-timeout duration Specify the timeout after which API requests are retried. A value of zero means that the entire request duration is waited for without retrying. The default value is 20s . -t , --timeout duration Specify the timeout for API requests representing the maximum duration of a request. The default value is 1m0s . 7.2.4. roxctl central crs Manage cluster registration secrets that allow communication between Central and secured clusters for the initial setup. Important Cluster registration secrets is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Usage USD roxctl central crs [command] [flags] For available flags, see "roxctl central command options inherited from the parent command". 7.2.4.1. roxctl central crs generate Generate a cluster registration secret (CRS) that allows communication between Central and secured clusters for the initial setup. Usage USD roxctl central crs generate <crs name> [flags] For available flags, see "roxctl central command options inherited from the parent command". 7.2.4.2. roxctl central crs list Generate a list of previously generated cluster registration secrets that allow communication between Central and secured clusters for the initial setup. Usage USD roxctl central crs list [flags] For available flags, see "roxctl central command options inherited from the parent command". 7.2.4.3. roxctl central crs revoke Revoke a cluster registration secret (CRS) that allows communication between Central and secured clusters for the initial setup. Usage USD roxctl central crs revoke <CRS unique identifier or name> [flags] For available flags, see "roxctl central command options inherited from the parent command". 7.2.5. roxctl central login Login to the Central instance to obtain a token. Usage USD roxctl central login [flags] Table 7.5. Options Option Description -t , --timeout duration Specify the timeout for API requests representing the maximum duration of a request. The default value is 5m0s . 7.2.6. roxctl central whoami Display information about the current user and their authentication method. Usage USD roxctl central whoami [flags] Table 7.6. Options Option Description --retry-timeout duration Specify the timeout after which API requests are retried. A value of zero means that the entire request duration is waited for without retrying. 
The default value is 20s . -t , --timeout duration Specify the timeout for API requests representing the maximum duration of a request. The default value is 1m0s . 7.2.7. roxctl central db Control the database operations. Usage USD roxctl central db [flags] Table 7.7. Options Option Description -t , --timeout duration Specify the timeout for API requests representing the maximum duration of a request. The default value is 1h0m0s . 7.2.7.1. roxctl central db restore Restore the RHACS database from a backup. Usage USD roxctl central db restore <file> [flags] 1 1 For <file> , specify the database backup file that you want to restore. Table 7.8. Options Option Description -f , --force If set to true , the restoration is performed without confirmation. The default value is false . --interrupt If set to true , it interrupts the running restore process to allow it to continue. The default value is false . 7.2.7.2. roxctl central db generate Generate a Central database bundle. Usage USD roxctl central db generate [flags] Table 7.9. Options Option Description --debug If set to true , templates are read from the local file system. The default value is false . --debug-path string Specify the path to the Helm templates in your local file system. For more details, run the roxctl central db generate command. --enable-pod-security-policies If set to true , PodSecurityPolicy resources are created. The default value is true . 7.2.7.3. roxctl central db generate k8s Generate Kubernetes YAML files for deploying Central's database components. Usage USD roxctl central db generate k8s [flags] Table 7.10. Options Option Description --central-db-image string Specify the Central database image that you want to use. If not specified, a default value corresponding to the --image-defaults is used. --image-defaults string Specify the default settings for container images. It controls the repositories from which the images are downloaded, the image names and the format of the tags. The default value is development_build . --output-dir output directory Specify the directory to which you want to save the deployment bundle. The default value is central-db-bundle . 7.2.7.4. roxctl central db restore cancel Cancel the ongoing Central database restore process. Usage USD roxctl central db restore cancel [flags] Table 7.11. Options Option Description f , --force If set to true , proceed with the cancellation without confirmation. The default value is false . 7.2.7.5. roxctl central db restore status Display information about the ongoing database restore process. Usage USD roxctl central db restore status [flags] 7.2.7.6. roxctl central db generate k8s pvc Generate Kubernetes YAML files for persistent volume claims (PVCs) in Central. Usage USD roxctl central db generate k8s pvc [flags] Table 7.12. Options Option Description --name string Specify the external volume name for the Central database. The default value is central-db . --size uint32 Specify the external volume size in gigabytes for the Central database. The default value is 100 . --storage-class string Specify the storage class name for the Central database. This is optional if you have a default storage class configured. 7.2.7.7. roxctl central db generate openshift Generate an OpenShift YAML manifest for deploying a Central database instance on a Red Hat OpenShift cluster. Usage USD roxctl central db generate openshift [flags] Table 7.13. Options Option Description --central-db-image string Specify the Central database image that you want to use. 
If not specified, a default value corresponding to the --image-defaults is used. --image-defaults string Specify the default settings for container images. It controls the repositories from which the images are downloaded, the image names and the format of the tags. The default value is development_build . --openshift-version int Specify the Red Hat OpenShift major version 3 or 4 for the deployment. The default value is 3 . --output-dir output-directory Specify the directory to which you want to save the deployment bundle. The default value is central-db-bundle . 7.2.7.8. roxctl central db generate k8s hostpath Generate a Kubernetes YAML manifest for a database deployment with a hostpath volume type in Central. Usage USD roxctl central db generate k8s hostpath [flags] Table 7.14. Options Option Description --hostpath string Specify the path on the host. The default value is /var/lib/stackrox-central-db . --node-selector-key string Specify the node selector key. Valid values include kubernetes.io and hostname . --node-selector-value string Specify the node selector value. 7.2.7.9. roxctl central db generate openshift pvc Generate an OpenShift YAML manifest for a database deployment with a persistent volume claim (PVC) in Central. Usage USD roxctl central db generate openshift pvc [flags] Table 7.15. Options Option Description --name string Specify the external volume name for the Central database. The default value is central-db . --size uint32 Specify the external volume size in gigabytes for the Central database. The default value is 100 . --storage-class string Specify the storage class name for the Central database. This is optional if you have a default storage class configured. 7.2.7.10. roxctl central db generate openshift hostpath Add a hostpath external volume to the Central database. Usage USD roxctl central db generate openshift hostpath [flags] Table 7.16. Options Option Description --hostpath string Specify the path on the host. The default value is /var/lib/stackrox-central-db . --node-selector-key string Specify the node selector key. Valid values include kubernetes.io and hostname . --node-selector-value string Specify the node selector value. 7.2.8. roxctl central debug Debug the Central service. Usage USD roxctl central debug [flags] 7.2.8.1. roxctl central debug db Control the debugging of the database. Usage USD roxctl central debug db [flags] Table 7.17. Options Option Description -t , --timeout duration Specify the timeout for API requests representing the maximum duration of a request. The default value is 1m0s . 7.2.8.2. roxctl central debug log Retrieve the current log level. Usage USD roxctl central debug log [flags] Table 7.18. Options Option Description -l , --level string Specify the log level to which you want to set the modules. Valid values include Debug , Info , Warn , Error , Panic , and Fatal . -m , --modules strings Specify the modules to which you want to apply the command. --retry-timeout duration Specify the timeout after which API requests are retried. A value of zero means that the entire request duration is waited for without retrying. The default value is 20s . -t , --timeout duration Specify the timeout for API requests, which is the maximum duration of a request. The default value is 1m0s . 7.2.8.3. roxctl central debug dump Download a bundle containing the debug information for Central. Usage USD roxctl central debug dump [flags] Table 7.19. Options Option Description --logs If set to true , logs are included in the Central dump. 
The default value is false . --output-dir string Specify the output directory for the bundle content. The default value is an automatically generated directory name within the current directory. -t , --timeout duration Specify the timeout for API requests, which is the maximum duration of a request. The default value is 5m0s . 7.2.8.4. roxctl central debug db stats Control the statistics of the Central database. Usage USD roxctl central debug db stats [flags] 7.2.8.5. roxctl central debug authz-trace Enable or disable authorization tracing in Central for debugging purposes. Usage USD roxctl central debug authz-trace [flags] Table 7.20. Options Option Description -t , --timeout duration Specify the timeout for API requests representing the maximum duration of a request. The default value is 20m0s . 7.2.8.6. roxctl central debug db stats reset Reset the statistics of the Central database. Usage USD roxctl central debug db stats reset [flags] 7.2.8.7. roxctl central debug download-diagnostics Download a bundle containing a snapshot of diagnostic information about the platform. Usage USD roxctl central debug download-diagnostics [flags] Table 7.21. Options Option Description --clusters strings Specify a comma-separated list of the Sensor clusters from which you want to collect the logs. --output-dir string Specify the output directory in which you want to save the diagnostic bundle. --since string Specify the timestamp from which you want to collect the logs from the Sensor clusters. -t , --timeout duration Specify the timeout for API requests, which specifies the maximum duration of a request. The default value is 5m0s . 7.2.9. roxctl central generate Generate the required YAML configuration files that contain the orchestrator objects to deploy Central. Usage USD roxctl central generate [flags] Table 7.22. Options Option Description --backup-bundle string Specify the path to the backup bundle from which you want to restore the keys and certificates. --debug If set to true , templates are read from the local file system. The default value is false . --debug-path string Specify the path to Helm templates on your local file system. For more details, run the roxctl central generate --help command. --default-tls-certfile Specify the PEM certificate bundle file that you want to use as the default. --default-tls-keyfile Specify the PEM private key file that you want to use as the default. --enable-pod-security-policies If set to true , PodSecurityPolicy resources are created. The default value is true . -p , --password string Specify the administrator password. The default value is automatically generated. --plaintext-endpoints string Specify the ports or endpoints you want to use for unencrypted exposure as a comma-separated list. 7.2.9.1. roxctl central generate k8s Generate the required YAML configuration files to deploy Central into a Kubernetes cluster. Usage USD roxctl central generate k8s [flags] Table 7.23. Options Option Description --central-db-image string Specify the Central database image you want to use. If not specified, a default value corresponding to the --image-defaults is used. --declarative-config-config-maps strings Specify a list of configuration maps that you want to add as declarative configuration mounts in Central. --declarative-config-secrets strings Specify a list of secrets that you want to add as declarative configuration mounts in Central. --enable-telemetry Specify whether you want to enable telemetry. The default value is false . 
--image-defaults string Specify the default settings for container images. The specified settings control the repositories from which the images are downloaded, the image names and the format of the tags. The default value is development_build. --istio-support version Generate deployment files that support the specified Istio version. Valid values include 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, and 1.7. --lb-type load balancer type Specify the method of exposing Central. Valid values include lb, np and none. The default value is none. -i, --main-image string Specify the main image that you want to use. If not specified, a default value corresponding to the --image-defaults is used. --offline Specify whether you want to run RHACS in offline mode, avoiding a connection to the Internet. The default value is false. --output-dir output directory Specify the directory to which you want to save the deployment bundle. The default value is central-bundle. --output-format output format Specify the deployment tool that you want to use. Valid values include kubectl, helm, and helm-values. The default value is kubectl. --scanner-db-image string Specify the Scanner database image that you want to use. If not specified, a default value corresponding to the --image-defaults is used. --scanner-image string Specify the Scanner image that you want to use. If not specified, a default value corresponding to the --image-defaults is used. 7.2.9.2. roxctl central generate k8s pvc Generate Kubernetes YAML files for persistent volume claims (PVCs) in Central. Usage USD roxctl central generate k8s pvc [flags] Table 7.24. Options Option Description --db-name string Specify the external volume name for the Central database. The default value is central-db. --db-size uint32 Specify the external volume size in gigabytes for the Central database. The default value is 100. --db-storage-class string Specify the storage class name for the Central database. This is optional if you have a default storage class configured. 7.2.9.3. roxctl central generate openshift Generate the required YAML configuration files to deploy Central in a Red Hat OpenShift cluster. Usage USD roxctl central generate openshift [flags] Table 7.25. Options Option Description --central-db-image string Specify the Central database image that you want to use. If not specified, a default value corresponding to the --image-defaults is used. --declarative-config-config-maps strings Specify a list of configuration maps that you want to add as declarative configuration mounts in Central. --declarative-config-secrets strings Specify a list of secrets that you want to add as declarative configuration mounts in Central. --enable-telemetry Specify whether you want to enable telemetry. The default value is false. --image-defaults string Specify the default settings for container images. It controls the repositories from which the images are downloaded, the image names and the format of the tags. The default value is development_build. --istio-support version Generate deployment files that support the specified Istio version. Valid values include 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, and 1.7. --lb-type load balancer type Specify the method of exposing Central. Valid values include route, lb, np and none. The default value is none. -i, --main-image string Specify the main image that you want to use. If not specified, a default value corresponding to --image-defaults is used.
--offline Specify whether you want to run RHACS in offline mode, avoiding a connection to the Internet. The default value is false . --openshift-monitoring false|true|auto[=true] Specify integration with Red Hat OpenShift 4 monitoring. The default value is auto . --openshift-version int Specify the Red Hat OpenShift major version 3 or 4 for the deployment. --output-dir output directory Specify the directory to which you want to save the deployment bundle. The default value is central-bundle . --output-format output format Specify the deployment tool that you want to use. Valid values include kubectl , helm and helm-values . The default value is kubectl . --scanner-db-image string Specify the Scanner database image that you want to use. If not specified, a default value corresponding to the --image-defaults is used. --scanner-image string Specify the Scanner image that you want to use. If not specified, a default value corresponding to --image-defaults is used. 7.2.9.4. roxctl central generate interactive Generate interactive resources in Central. Usage USD roxctl central generate interactive [flags] 7.2.9.5. roxctl central generate k8s hostpath Generate a Kubernetes YAML manifest for deploying a Central instance by using the hostpath volume type. Usage USD roxctl central generate k8s hostpath [flags] Table 7.26. Options Option Description --db-hostpath string Specify the path on the host for the Central database. The default value is /var/lib/stackrox-central . --db-node-selector-key string Specify the node selector key for the Central database. Valid values include kubernetes.io and hostname . --db-node-selector-value string Specify the node selector value for the Central database. 7.2.9.6. roxctl central generate openshift pvc Generate a OpenShift YAML manifest for deploying a persistent volume claim (PVC) in Central. Usage USD roxctl central generate openshift pvc [flags] Table 7.27. Options Option Description --db-name string Specify the external volume name for the Central database. The default value is central-db . --db-size uint32 Specify the external volume size in gigabytes for the Central database. The default value is 100 . --db-storage-class string Specify the storage class name for the Central database. This is optional if you have a default storage class configured. 7.2.9.7. roxctl central generate openshift hostpath Add a hostpath external volume to the deployment definition in Red Hat OpenShift. Usage USD roxctl central generate openshift hostpath [flags] Table 7.28. Options Option Description --db-hostpath string Specify the path on the host for the Central database. The default value is /var/lib/stackrox-central . --db-node-selector-key string Specify the node selector key. Valid values include kubernetes.io and hostname for the Central database. --db-node-selector-value string Specify the node selector value for the Central database. 7.2.10. roxctl central init-bundles Initialize bundles in Central. Usage USD roxctl central init-bundles [flag] Table 7.29. Options Option Description --retry-timeout duration Specify the timeout after which API requests are retried. A value of 0s means that the entire request duration is waited for without retrying. The default value is 20s . -t , --timeout duration Specify the timeout for API requests representing the maximum duration of a request. The default value is 1m0s . 7.2.10.1. roxctl central init-bundles list List the available initialization bundles in Central. Usage USD roxctl central init-bundles list [flags] 7.2.10.2. 
roxctl central init-bundles revoke Revoke one or more cluster initialization bundles in Central. Usage USD roxctl central init-bundles revoke <init_bundle_ID or name> [<init_bundle_ID or name> ...] [flags] 1 1 For <init_bundle_ID or name> , specify the ID or the name of the initialization bundle that you want to revoke. You can provide multiple IDs or names separated by using spaces. 7.2.10.3. roxctl central init-bundles fetch-ca Fetch the certificate authority (CA) bundle from Central. Usage USD roxctl central init-bundles fetch-ca [flags] Table 7.30. Options Option Description --output string Specify the file that you want to use for storing the CA configuration. 7.2.10.4. roxctl central init-bundles generate Generate a new cluster initialization bundle. Usage USD roxctl central init-bundles generate <init_bundle_name> [flags] 1 1 For <init_bundle_name> , specify the name for the initialization bundle you want to generate. Table 7.31. Options Option Description --output string Specify the file you want to use for storing the newly generated initialization bundle in the Helm configuration form. Use - to produce output to the standard output stream ( stdout ). --output-secrets string Specify the file that you want to use for storing the newly generated initialization bundle in Kubernetes secret form. Use - to produce output to the standard output stream ( stdout ). 7.2.11. roxctl central userpki Manage the user certificate authorization providers. Usage USD roxctl central userpki [flags] 7.2.11.1. roxctl central userpki list Display all the user certificate authentication providers. Usage USD roxctl central userpki list [flags] Table 7.32. Options Option Description -j , --json Enable the JSON output. The default value is false . --retry-timeout duration Specify the timeout after which API requests are retried. A value of zero means that the entire request duration is waited for without retrying. The default value is 20s . -t , --timeout duration Specify the timeout for API requests representing the maximum duration of a request. The default value is 1m0s . 7.2.11.2. roxctl central userpki create Create a new user certificate authentication provider. Usage USD roxctl central userpki create name [flags] Table 7.33. Options Option Description -c , --cert strings Specify the PEM files of the root CA certificates. You can specify several certificate files. --retry-timeout duration Specify the timeout after which API requests are retried. A value of zero means that the entire request duration is waited for without retrying. The default value is 20s . -r , --role string Specify the minimum access role for users of this provider. -t , --timeout duration Specify the timeout for API requests representing the maximum duration of a request. The default value is 1m0s . 7.2.11.3. roxctl central userpki delete Delete a user certificate authentication provider. Usage USD roxctl central userpki delete id|name [flags] Table 7.34. Options Option Description -f , --force If set to true , proceed with the deletion without confirmation. The default value is false . --retry-timeout duration Specify the timeout after which API requests are retried. A value of zero means that the entire request duration is waited for without retrying. The default value is 20s . -t , --timeout duration Specify the timeout for API requests representing the maximum duration of a request. The default value is 1m0s . 7.3. roxctl cluster Commands related to a cluster. Usage USD roxctl cluster [command] [flags] Table 7.35. 
Available commands Command Description delete Remove Sensor from Central. Table 7.36. Options Option Description --retry-timeout duration Set the retry timeout for API requests. A value of zero means the full request duration is awaited without retry. The default value is 20s . -t , --timeout duration Set the timeout for API requests representing the maximum duration of a request. The default value is 1m0s . 7.3.1. roxctl cluster command options inherited from the parent command The roxctl cluster command supports the following options inherited from the parent roxctl command: Option Description --ca string Specify a custom CA certificate file path for secure connections. Alternatively, you can specify the file path by using the ROX_CA_CERT_FILE environment variable. --direct-grpc Set --direct-grpc for improved connection performance. Alternatively, by setting the ROX_DIRECT_GRPC_CLIENT environment variable to true , you can enable direct gRPC . The default value is false . -e , --endpoint string Set the endpoint for the service to contact. Alternatively, you can set the endpoint by using the ROX_ENDPOINT environment variable. The default value is localhost:8443 . --force-http1 Force the use of HTTP/1 for all connections. Alternatively, by setting the ROX_CLIENT_FORCE_HTTP1 environment variable to true , you can force the use of HTTP/1. The default value is false . --insecure Enable insecure connection options. Alternatively, by setting the ROX_INSECURE_CLIENT environment variable to true , you can enable insecure connection options. The default value is false . --insecure-skip-tls-verify Skip the TLS certificate validation. Alternatively, by setting the ROX_INSECURE_CLIENT_SKIP_TLS_VERIFY environment variable to true , you can skip the TLS certificate validation. The default value is false . --no-color Disable the color output. Alternatively, by setting the ROX_NO_COLOR environment variable to true , you can disable the color output. The default value is false . -p , --password string Specify the password for basic authentication. Alternatively, you can set the password by using the ROX_ADMIN_PASSWORD environment variable. --plaintext Use an unencrypted connection. Alternatively, by setting the ROX_PLAINTEXT environment variable to true , you can enable an unencrypted connection. The default value is false . -s , --server-name string Set the TLS server name to use for SNI. Alternatively, you can set the server name by using the ROX_SERVER_NAME environment variable. --token-file string Use the API token provided in the specified file for authentication. Alternatively, you can set the token by using the ROX_API_TOKEN environment variable. Note These options are applicable to all the sub-commands of the roxctl cluster command. 7.3.2. roxctl cluster delete Remove Sensor from Central. Usage USD roxctl cluster delete [flags] Table 7.37. Options Option Description --name string Specify the cluster name to delete. 7.4. roxctl collector Commands related to the Collector service. Usage USD roxctl collector [command] [flags] Table 7.38. Available commands Command Description support-packages Upload support packages for Collector. 7.4.1. roxctl collector command options inherited from the parent command The roxctl collector command supports the following options inherited from the parent roxctl command: Option Description --ca string Specify a custom CA certificate file path for secure connections. Alternatively, you can specify the file path by using the ROX_CA_CERT_FILE environment variable. 
--direct-grpc Set --direct-grpc for improved connection performance. Alternatively, by setting the ROX_DIRECT_GRPC_CLIENT environment variable to true , you can enable direct gRPC . The default value is false . -e , --endpoint string Set the endpoint for the service to contact. Alternatively, you can set the endpoint by using the ROX_ENDPOINT environment variable. The default value is localhost:8443 . --force-http1 Force the use of HTTP/1 for all connections. Alternatively, by setting the ROX_CLIENT_FORCE_HTTP1 environment variable to true , you can force the use of HTTP/1. The default value is false . --insecure Enable insecure connection options. Alternatively, by setting the ROX_INSECURE_CLIENT environment variable to true , you can enable insecure connection options. The default value is false . --insecure-skip-tls-verify Skip the TLS certificate validation. Alternatively, by setting the ROX_INSECURE_CLIENT_SKIP_TLS_VERIFY environment variable to true , you can skip the TLS certificate validation. The default value is false . --no-color Disable the color output. Alternatively, by setting the ROX_NO_COLOR environment variable to true , you can disable the color output. The default value is false . -p , --password string Specify the password for basic authentication. Alternatively, you can set the password by using the ROX_ADMIN_PASSWORD environment variable. --plaintext Use an unencrypted connection. Alternatively, by setting the ROX_PLAINTEXT environment variable to true , you can enable an unencrypted connection. The default value is false . -s , --server-name string Set the TLS server name to use for SNI. Alternatively, you can set the server name by using the ROX_SERVER_NAME environment variable. --token-file string Use the API token provided in the specified file for authentication. Alternatively, you can set the token by using the ROX_API_TOKEN environment variable. Note These options are applicable to all the sub-commands of the roxctl collector command. 7.4.2. roxctl collector support-packages Upload support packages for Collector. Note Support packages are deprecated and have no effect on secured clusters running version 4.5 or later. Support package uploads only affect secured clusters on version 4.4 and earlier. Usage USD roxctl collector support-packages [flags] 7.4.2.1. roxctl collector support-packages upload Upload files from a Collector support package to Central. Usage USD roxctl collector support-packages upload [flags] Table 7.39. Options Option Description --overwrite Specify whether you want to overwrite existing but different files. The default value is false . --retry-timeout duration Set the timeout after which API requests are retried. A value of zero means that the entire request duration is waited for without retrying. The default value is 20s . -t , --timeout duration Set the timeout for API requests. This option represents the maximum duration of a request. The default value is 1m0s . 7.5. roxctl completion Generate shell completion scripts. Usage USD roxctl completion [bash|zsh|fish|powershell] Table 7.40. Supported shell types Shell type Description bash Generate a completion script for the Bash shell. zsh Generate a completion script for the Zsh shell. fish Generate a completion script for the Fish shell. powershell Generate a completion script for the PowerShell shell. 7.5.1. 
roxctl completion command options inherited from the parent command The roxctl completion command supports the following options inherited from the parent roxctl command: Option Description --ca string Specify a custom CA certificate file path for secure connections. Alternatively, you can specify the file path by using the ROX_CA_CERT_FILE environment variable. --direct-grpc Set --direct-grpc for improved connection performance. Alternatively, by setting the ROX_DIRECT_GRPC_CLIENT environment variable to true , you can enable direct gRPC . The default value is false . -e , --endpoint string Set the endpoint for the service to contact. Alternatively, you can set the endpoint by using the ROX_ENDPOINT environment variable. The default value is localhost:8443 . --force-http1 Force the use of HTTP/1 for all connections. Alternatively, by setting the ROX_CLIENT_FORCE_HTTP1 environment variable to true , you can force the use of HTTP/1. The default value is false . --insecure Enable insecure connection options. Alternatively, by setting the ROX_INSECURE_CLIENT environment variable to true , you can enable insecure connection options. The default value is false . --insecure-skip-tls-verify Skip the TLS certificate validation. Alternatively, by setting the ROX_INSECURE_CLIENT_SKIP_TLS_VERIFY environment variable to true , you can skip the TLS certificate validation. The default value is false . --no-color Disable the color output. Alternatively, by setting the ROX_NO_COLOR environment variable to true , you can disable the color output. The default value is false . -p , --password string Specify the password for basic authentication. Alternatively, you can set the password by using the ROX_ADMIN_PASSWORD environment variable. --plaintext Use an unencrypted connection. Alternatively, by setting the ROX_PLAINTEXT environment variable to true , you can enable an unencrypted connection. The default value is false . -s , --server-name string Set the TLS server name to use for SNI. Alternatively, you can set the server name by using the ROX_SERVER_NAME environment variable. --token-file string Use the API token provided in the specified file for authentication. Alternatively, you can set the token by using the ROX_API_TOKEN environment variable. 7.6. roxctl declarative-config Manage the declarative configuration. Usage USD roxctl declarative-config [command] [flags] Table 7.41. Available commands Command Description create Create declarative configurations. lint Lint an existing declarative configuration YAML file. 7.6.1. roxctl declarative-config command options inherited from the parent command The roxctl declarative-config command supports the following options inherited from the parent roxctl command: Option Description --ca string Specify a custom CA certificate file path for secure connections. Alternatively, you can specify the file path by using the ROX_CA_CERT_FILE environment variable. --direct-grpc Set --direct-grpc for improved connection performance. Alternatively, by setting the ROX_DIRECT_GRPC_CLIENT environment variable to true , you can enable direct gRPC . The default value is false . -e , --endpoint string Set the endpoint for the service to contact. Alternatively, you can set the endpoint by using the ROX_ENDPOINT environment variable. The default value is localhost:8443 . --force-http1 Force the use of HTTP/1 for all connections. Alternatively, by setting the ROX_CLIENT_FORCE_HTTP1 environment variable to true , you can force the use of HTTP/1. The default value is false . 
--insecure Enable insecure connection options. Alternatively, by setting the ROX_INSECURE_CLIENT environment variable to true , you can enable insecure connection options. The default value is false . --insecure-skip-tls-verify Skip the TLS certificate validation. Alternatively, by setting the ROX_INSECURE_CLIENT_SKIP_TLS_VERIFY environment variable to true , you can skip the TLS certificate validation. The default value is false . --no-color Disable the color output. Alternatively, by setting the ROX_NO_COLOR environment variable to true , you can disable the color output. The default value is false . -p , --password string Specify the password for basic authentication. Alternatively, you can set the password by using the ROX_ADMIN_PASSWORD environment variable. --plaintext Use an unencrypted connection. Alternatively, by setting the ROX_PLAINTEXT environment variable to true , you can enable an unencrypted connection. The default value is false . -s , --server-name string Set the TLS server name to use for SNI. Alternatively, you can set the server name by using the ROX_SERVER_NAME environment variable. --token-file string Use the API token provided in the specified file for authentication. Alternatively, you can set the token by using the ROX_API_TOKEN environment variable. Note These options are applicable to all the sub-commands of the roxctl declarative-config command. 7.6.2. roxctl declarative-config lint Lint an existing declarative configuration YAML file. Usage USD roxctl declarative-config lint [flags] Table 7.42. Options Option Description --config-map string Read the declarative configuration from the --config-map string . If not specified, the configuration is read from the YAML file specified by using the --file flag. -f , --file string File containing the declarative configuration in YAML format. --namespace string Read the declarative configuration from the --namespace string of the configuration map. If not specified, the namespace specified in the current Kubernetes configuration context is used. --secret string Read the declarative configuration from the specified --secret string . If not specified, the configuration is read from the YAML file specified by using the --file flag. 7.6.3. roxctl declarative-config create Create declarative configurations. Usage USD roxctl declarative-config create [flags] Table 7.43. Options Option Description --config-map string Write the declarative configuration YAML in the configuration map. If not specified and the --secret flag is also not specified, the generated YAML is printed in the standard output format. --namespace string Required if you want to write the declarative configuration YAML to a configuration map or secret. If not specified, the default namespace in the current Kubernetes configuration is used. --secret string Write the declarative configuration YAML in the Secret. You must use secrets for sensitive data. If not specified and the --config-map flag is also not specified, the generated YAML is printed in the standard output format. 7.6.3.1. roxctl declarative-config create role Create a declarative configuration for a role. Usage USD roxctl declarative-config create role [flags] Table 7.44. Options Option Description --access-scope string By providing the name, you can specify the referenced access scope. --description string Set a description for the role. --name string Specify the name of the role. --permission-set string By providing the name, you can specify the referenced permission set. 7.6.3.2. 
roxctl declarative-config create notifier Create a declarative configuration for a notifier. Usage USD roxctl declarative-config create notifier [flags] Table 7.45. Options Option Description --name string Specify the name of the notifier. 7.6.3.3. roxctl declarative-config create access-scope Create a declarative configuration for an access scope. Usage USD roxctl declarative-config create access-scope [flags] Table 7.46. Options Option Description --cluster-label-selector requirement Specify the criteria for creating a label selector based on the cluster's labels. The key-value pairs represent requirements, and you can use this flag multiple times to create a combination of requirements. The default value is [ [ ] ] . For more details, run the roxctl declarative-config create access-scope --help command. --description string Set a description for the access scope. --included included-object Specify a list of clusters and their namespaces that you want to include in the access scope. The default value is [null] . --name string Specify the name of the access scope. --namespace-label-selector requirement Specify the criteria for creating a label selector based on the namespace's labels. Similar to the cluster-label-selector, you can use this flag multiple times for the combination of requirements. For more details, run the roxctl declarative-config create access-scope --help command. 7.6.3.4. roxctl declarative-config create auth-provider Create a declarative configuration for an authentication provider. Usage USD roxctl declarative-config create auth-provider [flags] Table 7.47. Options Option Description --extra-ui-endpoints strings Specify additional user interface (UI) endpoints from which the authentication provider is used. The expected format is <endpoint>:<port> . --groups-key strings Set the keys of the groups that you want to add within the authentication provider. The tuples of key, value and role should have the same length. For more details, run the roxctl declarative-config create auth-provider --help command. --groups-role strings Set the role of the groups that you want to add within the authentication provider. The tuples of key, value and role should have the same length. For more details, run the roxctl declarative-config create auth-provider --help command. --groups-value strings Set the values of the groups that you want to add within the authentication provider. The tuples of key, value and role should have the same length. For more details, run the roxctl declarative-config create auth-provider --help command. --minimum-access-role string Set the minimum access role of the authentication provider. You can leave this field empty if you do not want to configure the minimum access role by using the declarative configuration. --name string Specify the name of the authentication provider. --required-attributes stringToString Set a list of attributes that the authentication provider must return during authentication. The default value is [] . --ui-endpoint string Set the UI endpoint from which the authentication provider is used. This is usually the public endpoint where RHACS is available. The expected format is <endpoint>:<port> . 7.6.3.5. roxctl declarative-config create permission-set Create a declarative configuration for a permission set. Usage USD roxctl declarative-config create permission-set [flags] Table 7.48. Options Option Description --description string Set the description of the permission set. --name string Specify the name of the permission set. 
--resource-with-access stringToString Set a list of resources with their respective access levels. The default value is [] . For more details, run the roxctl declarative-config create permission-set --help command. 7.6.3.6. roxctl declarative-config create notifier splunk Create a declarative configuration for a splunk notifier. Usage USD roxctl declarative-config create notifier splunk [flags] Table 7.49. Options Option Description --audit-logging Enable audit logging. The default value is false . --source-types stringToString Specify Splunk source types as comma-separated key=value pairs. The default value is [] . --splunk-endpoint string Specify the Splunk HTTP endpoint. This is a mandatory option. --splunk-skip-tls-verify Use an insecure connection to Splunk. The default value is false . --splunk-token string Specify the Splunk HTTP token. This is a mandatory option. --truncate int Specify the Splunk truncate limit. The default value is 10000 . 7.6.3.7. roxctl declarative-config create notifier generic Create a declarative configuration for a generic notifier. Usage USD roxctl declarative-config create notifier generic [flags] Table 7.50. Options Option Description --audit-logging Enable audit logging. The default value is false . --extra-fields stringToString Specify additional fields as comma-separated key=value pairs. The default value is [] . --headers stringToString Specify headers as comma-separated key=value pairs. The default value is [] . --webhook-cacert-file string Specify the file name of the endpoint CA certificate in PEM format. --webhook-endpoint string Specify the URL of the webhook endpoint. --webhook-password string Specify the password for basic authentication of the webhook endpoint. No authentication if not specified. Requires --webhook-username . --webhook-skip-tls-verify Skip webhook TLS verification. The default value is false . --webhook-username string Specify the username for basic authentication of the webhook endpoint. No authentication occurs if not specified. Requires --webhook-password . 7.6.3.8. roxctl declarative-config create auth-provider iap Create a declarative configuration for an authentication provider with the identity-aware proxy (IAP) identifier. Usage USD roxctl declarative-config create auth-provider iap [flags] Table 7.51. Options Option Description --audience string Specify the target group that you want to validate. 7.6.3.9. roxctl declarative-config create auth-provider oidc Create a declarative configuration for an OpenID Connect (OIDC) authentication provider. Usage USD roxctl declarative-config create auth-provider oidc [flags] Table 7.52. Options Option Description --claim-mappings stringToString Specify a list of non-standard claims from the identity provider (IdP) token that you want to include in the authentication provider's rules. The default value is [] . --client-id string Specify the client ID of the OIDC client. --client-secret string Specify the client secret of the OIDC client. --disable-offline-access Disable the request for the offline_access from the OIDC IdP. You need to use this option if the OIDC IdP limits the number of sessions with the offline_access scope. The default value is false . --issuer string Specify the issuer of the OIDC client. --mode string Specify the callback mode that you want to use. Valid values include auto , post , query and fragment . The default value is auto . 7.6.3.10. roxctl declarative-config create auth-provider saml Create a declarative configuration for a SAML authentication provider. 
Usage USD roxctl declarative-config create auth-provider saml [flags] Table 7.53. Options Option Description --idp-cert string Specify the file containing the SAML identity provider (IdP) certificate in PEM format. --idp-issuer string Specify the issuer of the IdP. --metadata-url string Specify the metadata URL of the service provider. --name-id-format string Specify the format of the name ID. --sp-issuer string Specify the issuer of the service provider. --sso-url string Specify the URL of the IdP for single sign-on (SSO). 7.6.3.11. roxctl declarative-config create auth-provider userpki Create a declarative configuration for an user PKI authentication provider. Usage USD roxctl declarative-config create auth-provider userpki [flags] Table 7.54. Options Option Description --ca-file string Specify the file containing the certification authorities in PEM format. 7.6.3.12. roxctl declarative-config create auth-provider openshift-auth Create a declarative configuration for an OpenShift Container Platform OAuth authentication provider. Usage USD roxctl declarative-config create auth-provider openshift-auth [flags] 7.7. roxctl deployment Commands related to deployments. Usage USD roxctl deployment [command] [flags] Table 7.55. Available commands Command Description check Check the deployments for violations of the deployment time policy. Table 7.56. Options Option Description -t , --timeout duration Set the timeout for API requests. This option represents the maximum duration of a request. The default value is 10m0s . 7.7.1. roxctl deployment command options inherited from the parent command The roxctl deployment command supports the following options inherited from the parent roxctl command: Option Description --ca string Specify a custom CA certificate file path for secure connections. Alternatively, you can specify the file path by using the ROX_CA_CERT_FILE environment variable. --direct-grpc Set --direct-grpc for improved connection performance. Alternatively, by setting the ROX_DIRECT_GRPC_CLIENT environment variable to true , you can enable direct gRPC . The default value is false . -e , --endpoint string Set the endpoint for the service to contact. Alternatively, you can set the endpoint by using the ROX_ENDPOINT environment variable. The default value is localhost:8443 . --force-http1 Force the use of HTTP/1 for all connections. Alternatively, by setting the ROX_CLIENT_FORCE_HTTP1 environment variable to true , you can force the use of HTTP/1. The default value is false . --insecure Enable insecure connection options. Alternatively, by setting the ROX_INSECURE_CLIENT environment variable to true , you can enable insecure connection options. The default value is false . --insecure-skip-tls-verify Skip the TLS certificate validation. Alternatively, by setting the ROX_INSECURE_CLIENT_SKIP_TLS_VERIFY environment variable to true , you can skip the TLS certificate validation. The default value is false . --no-color Disable the color output. Alternatively, by setting the ROX_NO_COLOR environment variable to true , you can disable the color output. The default value is false . -p , --password string Specify the password for basic authentication. Alternatively, you can set the password by using the ROX_ADMIN_PASSWORD environment variable. --plaintext Use an unencrypted connection. Alternatively, by setting the ROX_PLAINTEXT environment variable to true , you can enable an unencrypted connection. The default value is false . -s , --server-name string Set the TLS server name to use for SNI. 
Alternatively, you can set the server name by using the ROX_SERVER_NAME environment variable. --token-file string Use the API token provided in the specified file for authentication. Alternatively, you can set the token by using the ROX_API_TOKEN environment variable. Note These options are applicable to all the sub-commands of the roxctl deployment command. 7.7.2. roxctl deployment check Check deployments for violations of the deployment time policy. Usage USD roxctl deployment check [flags] Table 7.57. Options Option Description -c , --categories strings Define the policy categories that you want to execute. By default, all policy categories are executed. --cluster string Set the cluster name or ID that you want to use as the context for the evaluation to enable extended deployments with cluster-specific information. --compact-output Print the JSON output in compact form. The default value is false . -f , --file stringArray Specify the YAML files to send to Central for policy evaluation. --force Bypass the Central cache for images and force a new pull from Scanner. The default value is false . --headers strings Define headers that you want to print in the tabular output. The default values include POLICY , SEVERITY , BREAKS DEPLOY , DEPLOYMENT , DESCRIPTION , VIOLATION , and REMEDIATION . --headers-as-comments Print headers as comments in the CSV tabular output. The default value is false . --junit-suite-name string Set the name of the JUnit test suite. The default value is deployment-check . --merge-output Merge duplicate cells in the tabular output. The default value is false . -n , --namespace string Specify a namespace to enhance deployments with context information such as network policies, RBACs and services for deployments that do not have a namespace in their specification. The namespace defined in the specification is not changed. The default value is default . --no-header Do not print headers for a tabular output. The default value is false . -o , --output string Choose the output format. Output formats include json , junit , sarif , table , and csv . The default value is table . -r , --retries int Set the number of retries before exiting as an error. The default value is 3 . -d , --retry-delay int Set the time to wait between retries in seconds. The default value is 3 . --row-jsonpath-expressions string Define the JSON path expressions to create a row from the JSON object. For more details, run the roxctl deployment check --help command. 7.8. roxctl helm Commands related to Red Hat Advanced Cluster Security for Kubernetes (RHACS) Helm Charts. Usage USD roxctl helm [command] [flags] Table 7.58. Available commands Command Description derive-local-values Derive local Helm values from the cluster configuration. output Output a Helm chart. 7.8.1. roxctl helm command options inherited from the parent command The roxctl helm command supports the following options inherited from the parent roxctl command: Option Description --ca string Specify a custom CA certificate file path for secure connections. Alternatively, you can specify the file path by using the ROX_CA_CERT_FILE environment variable. --direct-grpc Set --direct-grpc for improved connection performance. Alternatively, by setting the ROX_DIRECT_GRPC_CLIENT environment variable to true , you can enable direct gRPC . The default value is false . -e , --endpoint string Set the endpoint for the service to contact. Alternatively, you can set the endpoint by using the ROX_ENDPOINT environment variable. 
The default value is localhost:8443 . --force-http1 Force the use of HTTP/1 for all connections. Alternatively, by setting the ROX_CLIENT_FORCE_HTTP1 environment variable to true , you can force the use of HTTP/1. The default value is false . --insecure Enable insecure connection options. Alternatively, by setting the ROX_INSECURE_CLIENT environment variable to true , you can enable insecure connection options. The default value is false . --insecure-skip-tls-verify Skip the TLS certificate validation. Alternatively, by setting the ROX_INSECURE_CLIENT_SKIP_TLS_VERIFY environment variable to true , you can skip the TLS certificate validation. The default value is false . --no-color Disable the color output. Alternatively, by setting the ROX_NO_COLOR environment variable to true , you can disable the color output. The default value is false . -p , --password string Specify the password for basic authentication. Alternatively, you can set the password by using the ROX_ADMIN_PASSWORD environment variable. --plaintext Use an unencrypted connection. Alternatively, by setting the ROX_PLAINTEXT environment variable to true , you can enable an unencrypted connection. The default value is false . -s , --server-name string Set the TLS server name to use for SNI. Alternatively, you can set the server name by using the ROX_SERVER_NAME environment variable. --token-file string Use the API token provided in the specified file for authentication. Alternatively, you can set the token by using the ROX_API_TOKEN environment variable. Note These options are applicable to all the sub-commands of the roxctl helm command. 7.8.2. roxctl helm output Output a Helm chart. Usage USD roxctl helm output <central_services or secured_cluster_services> [flags] 1 1 For <central_services or secured_cluster_services> , specify the path to either the central services or the secured cluster services to generate a Helm chart output. Table 7.59. Options Option Description --debug Read templates from the local filesystem. The default value is false . --debug-path string Specify the path to the Helm templates on your local filesystem. For more details, run the roxctl helm output --help command. --image-defaults string Set the default container image settings. Image settings include development_build , stackrox.io , rhacs , and opensource . It influences repositories for image downloads, image names, and tag formats. The default value is development_build . --output-dir string Define the path to the output directory for the Helm chart. The default path is ./stackrox-<chart name>-chart . --remove Remove the output directory if it already exists. The default value is false . 7.8.3. roxctl helm derive-local-values Derive local Helm values from the cluster configuration. Usage USD roxctl helm derive-local-values --output <path> \ 1 <central_services> [flags] 2 1 For the <path> , specify the path where you want to save the generated local values file. 2 For the <central_services> , specify the path to the central services configuration file. Table 7.60. Options Option Description --input string Specify the path to the file or directory containing the YAML input. --output string Define the path to the output file. --output-dir string Define the path to the output directory. --retry-timeout duration Set the timeout after which API requests are retried. The timeout value indicates that the entire request duration is waited for without retrying. The default value is 20s . 
-t , --timeout duration Set the timeout for API requests representing the maximum duration of a request. The default value is 1m0s . 7.9. roxctl image Commands that you can run on a specific image. Usage USD roxctl image [command] [flags] Table 7.61. Available commands Command Description check Check images for build time policy violations, and report them. sbom Generate an SPDX 2.3 SBOM from an image scan. You must have write permissions for the Image resource. scan Scan the specified image, and return the scan results. Table 7.62. Options -t , --timeout duration Set the timeout for API requests representing the maximum duration of a request. The default value is 10m0s . 7.9.1. roxctl image command options inherited from the parent command The roxctl image command supports the following options inherited from the parent roxctl command: Option Description --ca string Specify a custom CA certificate file path for secure connections. Alternatively, you can specify the file path by using the ROX_CA_CERT_FILE environment variable. --direct-grpc Set --direct-grpc for improved connection performance. Alternatively, by setting the ROX_DIRECT_GRPC_CLIENT environment variable to true , you can enable direct gRPC . The default value is false . -e , --endpoint string Set the endpoint for the service to contact. Alternatively, you can set the endpoint by using the ROX_ENDPOINT environment variable. The default value is localhost:8443 . --force-http1 Force the use of HTTP/1 for all connections. Alternatively, by setting the ROX_CLIENT_FORCE_HTTP1 environment variable to true , you can force the use of HTTP/1. The default value is false . --insecure Enable insecure connection options. Alternatively, by setting the ROX_INSECURE_CLIENT environment variable to true , you can enable insecure connection options. The default value is false . --insecure-skip-tls-verify Skip the TLS certificate validation. Alternatively, by setting the ROX_INSECURE_CLIENT_SKIP_TLS_VERIFY environment variable to true , you can skip the TLS certificate validation. The default value is false . --no-color Disable the color output. Alternatively, by setting the ROX_NO_COLOR environment variable to true , you can disable the color output. The default value is false . -p , --password string Specify the password for basic authentication. Alternatively, you can set the password by using the ROX_ADMIN_PASSWORD environment variable. --plaintext Use an unencrypted connection. Alternatively, by setting the ROX_PLAINTEXT environment variable to true , you can enable an unencrypted connection. The default value is false . -s , --server-name string Set the TLS server name to use for SNI. Alternatively, you can set the server name by using the ROX_SERVER_NAME environment variable. --token-file string Use the API token provided in the specified file for authentication. Alternatively, you can set the token by using the ROX_API_TOKEN environment variable. Note These options are applicable to all the sub-commands of the roxctl image command. 7.9.2. roxctl image sbom Generate an SPDX 2.3 SBOM from an image scan. You must have write permissions for the Image resource. Usage USD roxctl image sbom [flags] Table 7.63. Options Option Description -f, --force Bypass Central's cache for the image and force a new pull from the scanner. The default is false . -d, --retry-delay integer Sets the time to wait between retries in seconds. The default is 3. -i, --image string Image name and reference, for example, nginx:latest or nginx@sha256:... . 
-r, --retries integer Sets the number of times that Scanner V4 should retry before exiting with an error. The default is 3. 7.9.3. roxctl image scan Scan the specified image, and return the scan results. Usage USD roxctl image scan [flags] Table 7.64. Options Option Description --cluster string Specify the cluster name or ID to which you want to delegate the image scan. --compact-output Print JSON output in a compact format. The default value is false . --fail Fail if vulnerabilities have been found. The default value is false . -f , --force Ignore Central's cache and force a fresh re-pull from Scanner. The default value is false . --headers strings Specify the headers to print in a tabular output. The default values include COMPONENT , VERSION , CVE , SEVERITY , and LINK . --headers-as-comments Print headers as comments in a CSV tabular output. The default value is false . -i , --image string Specify the image name and reference to scan. For example, nginx:latest or nginx@sha256:... . -a , --include-snoozed Include snoozed and unsnoozed CVEs in the scan results. The default value is false . --merge-output Merge duplicate cells in a tabular output. The default value is true . --no-header Do not print headers for a tabular output. The default value is false . -o , --output string Specify the output format. Output formats include table , csv , json , and sarif . -r , --retries int Specify the number of retries before exiting as an error. The default value is 3 . -d , --retry-delay int Set the time to wait between retries in seconds. The default value is 3 . --row-jsonpath-expressions string Specify JSON path expressions to create a row from the JSON object. For more details, run the roxctl image scan --help command. --severity strings List of severities to include in the output. Use this to filter for specific severities. The default values include LOW , MODERATE , IMPORTANT , and CRITICAL . 7.9.4. roxctl image check Check images for build time policy violations, and report them. Usage USD roxctl image check [flags] Table 7.65. Options Option Description -c , --categories strings List of the policy categories that you want to execute. By default, all the policy categories are used. --cluster string Define the cluster name or ID that you want to use as the context for evaluation. --compact-output Print JSON output in a compact format. The default value is false . -f , --force Bypass the Central cache for the image and force a new pull from the Scanner. The default value is false . --headers strings Define headers to print in a tabular output. The default values include POLICY , SEVERITY , BREAKS BUILD , DESCRIPTION , VIOLATION , and REMEDIATION . --headers-as-comments Print headers as comments in a CSV tabular output. The default value is false . -i , --image string Specify the image name and reference. For example, nginx:latest or nginx@sha256:... ) . --junit-suite-name string Set the name of the JUnit test suite. Default value is image-check . --merge-output Merge duplicate cells in a tabular output. The default value is false . --no-header Do not print headers for a tabular output. The default value is false . -o , --output string Choose the output format. Output formats include junit , sarif , table , csv , and json . The default value is table . -r , --retries int Set the number of retries before exiting as an error. The default value is 3 . -d , --retry-delay int Set the time to wait between retries in seconds. The default value is 3 . 
--row-jsonpath-expressions string Create a row from the JSON object by using JSON path expression. For more details, run the roxctl image check --help command. --send-notifications Define whether you want to send notifications in the event of violations. The default value is false . 7.10. roxctl netpol Commands related to the network policies. Usage USD roxctl netpol [command] [flags] Table 7.66. Available commands Command Description connectivity Connectivity analysis of the network policy resources. generate Recommend network policies based on the deployment information. 7.10.1. roxctl netpol command options inherited from the parent command The roxctl netpol command supports the following options inherited from the parent roxctl command: Option Description --ca string Specify a custom CA certificate file path for secure connections. Alternatively, you can specify the file path by using the ROX_CA_CERT_FILE environment variable. --direct-grpc Set --direct-grpc for improved connection performance. Alternatively, by setting the ROX_DIRECT_GRPC_CLIENT environment variable to true , you can enable direct gRPC . The default value is false . -e , --endpoint string Set the endpoint for the service to contact. Alternatively, you can set the endpoint by using the ROX_ENDPOINT environment variable. The default value is localhost:8443 . --force-http1 Force the use of HTTP/1 for all connections. Alternatively, by setting the ROX_CLIENT_FORCE_HTTP1 environment variable to true , you can force the use of HTTP/1. The default value is false . --insecure Enable insecure connection options. Alternatively, by setting the ROX_INSECURE_CLIENT environment variable to true , you can enable insecure connection options. The default value is false . --insecure-skip-tls-verify Skip the TLS certificate validation. Alternatively, by setting the ROX_INSECURE_CLIENT_SKIP_TLS_VERIFY environment variable to true , you can skip the TLS certificate validation. The default value is false . --no-color Disable the color output. Alternatively, by setting the ROX_NO_COLOR environment variable to true , you can disable the color output. The default value is false . -p , --password string Specify the password for basic authentication. Alternatively, you can set the password by using the ROX_ADMIN_PASSWORD environment variable. --plaintext Use an unencrypted connection. Alternatively, by setting the ROX_PLAINTEXT environment variable to true , you can enable an unencrypted connection. The default value is false . -s , --server-name string Set the TLS server name to use for SNI. Alternatively, you can set the server name by using the ROX_SERVER_NAME environment variable. --token-file string Use the API token provided in the specified file for authentication. Alternatively, you can set the token by using the ROX_API_TOKEN environment variable. Note These options are applicable to all the sub-commands of the roxctl netpol command. 7.10.2. roxctl netpol generate Recommend network policies based on the deployment information. Usage USD roxctl netpol generate <folder_path> [flags] 1 1 For <folder_path> , specify the path to the directory containing your Kubernetes deployment and service configuration files. Table 7.67. Options Option Description --dnsport <int_or_string> Specify the DNS port or a named port that you want to use in the egress rules of synthesized network policies. For example: roxctl netpol generate --dnsport 5353 <other_options> roxctl netpol generate --dnsport foo-dns <other_options> . 
--fail Fail on the first encountered error. The default value is false . -d , --output-dir string Save generated policies into the target folder. -f , --output-file string Save and merge generated policies into a single YAML file. --remove Remove the output path if it already exists. The default value is false . --strict Treat warnings as errors. The default value is false . Note If you do not specify a port, the roxctl netpol generate command uses port 53 for DNS connections. If you are using OpenShift Container Platform, you might need to change the port when generating network policies by using the roxctl CLI. If you do not change the port, OpenShift Container Platform uses port 5353 and assigns the name dns for this port automatically. You can use the --dnsport option to override the default DNS port. For example: roxctl netpol generate --dnsport 5353 <other_options> roxctl netpol generate --dnsport dns <other_options> . 7.10.3. roxctl netpol connectivity Commands related to the connectivity analysis of the network policy resources. Usage USD roxctl netpol connectivity [flags] 7.10.3.1. roxctl netpol connectivity map Analyze connectivity based on the network policies and other resources. Usage USD roxctl netpol connectivity map <folder_path> [flags] 1 1 For <folder_path> , specify the path to the directory containing your Kubernetes deployment and service configuration files. Table 7.68. Options Option Description --exposure Enhance the analysis of permitted connectivity by using exposure analysis. The default value is false . --fail Fail on the first encountered error. The default value is false . --focus-workload string Focus on connections of the specified workload name in the output. -f , --output-file string Save the connections list output into a specific file. -o , --output-format string Configure the connections list in a specific format. Supported formats include txt , json , md , dot , and csv . The default value is txt . --remove Remove the output path if it already exists. The default value is false . --save-to-file Define whether you want to save the output of the connection list in the default file. The default value is false . --strict Treat warnings as errors. The default value is false . 7.10.3.2. roxctl netpol connectivity diff Report connectivity differences based on two network policy directories and YAML manifests with workload resources. Usage USD roxctl netpol connectivity diff [flags] Table 7.69. Options Option Description --dir1 string Specify the first directory path of the input resources. This value is mandatory. --dir2 string Specify the second directory path of the input resources that you want to compare with the first directory path. This value is mandatory. --fail Fail on the first encountered error. The default value is false . -f , --output-file string Save the output of the connectivity difference command into a specific file. -o , --output-format string Configure the output of the connectivity difference command in a specific format. Supported formats include txt , md , and csv . The default value is txt . --remove Remove the output path if it already exists. The default value is false . --save-to-file Define whether you want to store the output of the connectivity differences in the default file. The default value is false . --strict Treat warnings as errors. The default value is false . 7.11. roxctl scanner Commands related to the StackRox Scanner and Scanner V4 services. Usage USD roxctl scanner [command] [flags] Table 7.70.
Available commands Command Description download-db Download the offline vulnerability database for StackRox Scanner and Scanner V4. generate Generate the required YAML configuration files to deploy the StackRox Scanner and Scanner V4. upload-db Upload a vulnerability database for the StackRox Scanner and Scanner V4. 7.11.1. roxctl scanner command options inherited from the parent command The roxctl scanner command supports the following options inherited from the parent roxctl command: Option Description --ca string Specify a custom CA certificate file path for secure connections. Alternatively, you can specify the file path by using the ROX_CA_CERT_FILE environment variable. --direct-grpc Set --direct-grpc for improved connection performance. Alternatively, by setting the ROX_DIRECT_GRPC_CLIENT environment variable to true , you can enable direct gRPC . The default value is false . -e , --endpoint string Set the endpoint for the service to contact. Alternatively, you can set the endpoint by using the ROX_ENDPOINT environment variable. The default value is localhost:8443 . --force-http1 Force the use of HTTP/1 for all connections. Alternatively, by setting the ROX_CLIENT_FORCE_HTTP1 environment variable to true , you can force the use of HTTP/1. The default value is false . --insecure Enable insecure connection options. Alternatively, by setting the ROX_INSECURE_CLIENT environment variable to true , you can enable insecure connection options. The default value is false . --insecure-skip-tls-verify Skip the TLS certificate validation. Alternatively, by setting the ROX_INSECURE_CLIENT_SKIP_TLS_VERIFY environment variable to true , you can skip the TLS certificate validation. The default value is false . --no-color Disable the color output. Alternatively, by setting the ROX_NO_COLOR environment variable to true , you can disable the color output. The default value is false . -p , --password string Specify the password for basic authentication. Alternatively, you can set the password by using the ROX_ADMIN_PASSWORD environment variable. --plaintext Use an unencrypted connection. Alternatively, by setting the ROX_PLAINTEXT environment variable to true , you can enable an unencrypted connection. The default value is false . -s , --server-name string Set the TLS server name to use for SNI. Alternatively, you can set the server name by using the ROX_SERVER_NAME environment variable. --token-file string Use the API token provided in the specified file for authentication. Alternatively, you can set the token by using the ROX_API_TOKEN environment variable. Note These options are applicable to all the sub-commands of the roxctl scanner command. 7.11.2. roxctl scanner generate Generate the required YAML configuration files to deploy Scanner. Usage USD roxctl scanner generate [flags] Table 7.71. Options Option Description --cluster-type cluster type Specify the type of cluster on which you want to run Scanner. Cluster types include k8s and openshift . The default value is k8s . --enable-pod-security-policies Create PodSecurityPolicy resources. The default value is true . --istio-support string Generate deployment files that support the specified Istio version. Valid versions include 1.0 , 1.1 , 1.2 , 1.3 , 1.4 , 1.5 , 1.6 , and 1.7 . --output-dir string Specify the output directory for the Scanner bundle. Leave blank to use the default value. --retry-timeout duration Set the timeout after which API requests are retried. 
A value of zero means that the entire request duration is waited for without retrying. The default value is 20s . --scanner-image string Specify the Scanner image that you want to use. Leave blank to use the server default. -t , --timeout duration Set the timeout for API requests representing the maximum duration of a request. The default value is 1m0s . 7.11.3. roxctl scanner upload-db Upload a vulnerability database for Scanner. Usage USD roxctl scanner upload-db [flags] Table 7.72. Options Option Description --scanner-db-file string Specify the file containing the dumped Scanner definitions DB. -t , --timeout duration Set the timeout for API requests representing the maximum duration of a request. The default value is 10m0s . 7.11.4. roxctl scanner download-db Download the offline vulnerability database for StackRox Scanner or Scanner V4. This command downloads version-specific offline vulnerability bundles. The system contacts Central to determine the version if one is not specified. If communication fails, the download defaults to the version embedded within roxctl . By default, it will attempt to download the database for the determined version and less-specific variants. For example, if version 4.4.1-extra is specified, downloads will be attempted for the following version variants: 4.4.1-extra 4.4.1 4.4 Usage USD roxctl scanner download-db [flags] Table 7.73. Options Option Description --force Force overwriting the output file if it already exists. The default value is false . --scanner-db-file string Output file to save the vulnerability database to. The default value is the name and path of the remote file that is downloaded. --skip-central Do not contact Central when detecting the version. The default value is false . --skip-variants Do not attempt to process variants of the determined version. The default value is false . -t , --timeout duration Set the timeout for API requests representing the maximum duration of a request. The default value is 10m0s . --version string Download a specific version or version variant of the vulnerability database. By default, the version is automatically detected. 7.12. roxctl sensor Deploy Red Hat Advanced Cluster Security for Kubernetes (RHACS) services in secured clusters. Usage USD roxctl sensor [command] [flags] Table 7.74. Available commands Command Description generate Generate files to deploy RHACS services in secured clusters. generate-certs Download a YAML file with renewed certificates for Sensor, Collector, and Admission controller. get-bundle Download a bundle with the files to deploy RHACS services in a cluster. Table 7.75. Options Option Description --retry-timeout duration Set the timeout after which API requests are retried. A value of zero means that the entire request duration is waited for without retrying. The default value is 20s . -t , --timeout duration Set the timeout for API requests representing the maximum duration of a request. The default value is 1m0s . 7.12.1. roxctl sensor command options inherited from the parent command The roxctl sensor command supports the following options inherited from the parent roxctl command: Option Description --ca string Specify a custom CA certificate file path for secure connections. Alternatively, you can specify the file path by using the ROX_CA_CERT_FILE environment variable. --direct-grpc Set --direct-grpc for improved connection performance. Alternatively, by setting the ROX_DIRECT_GRPC_CLIENT environment variable to true , you can enable direct gRPC . 
The default value is false . -e , --endpoint string Set the endpoint for the service to contact. Alternatively, you can set the endpoint by using the ROX_ENDPOINT environment variable. The default value is localhost:8443 . --force-http1 Force the use of HTTP/1 for all connections. Alternatively, by setting the ROX_CLIENT_FORCE_HTTP1 environment variable to true , you can force the use of HTTP/1. The default value is false . --insecure Enable insecure connection options. Alternatively, by setting the ROX_INSECURE_CLIENT environment variable to true , you can enable insecure connection options. The default value is false . --insecure-skip-tls-verify Skip the TLS certificate validation. Alternatively, by setting the ROX_INSECURE_CLIENT_SKIP_TLS_VERIFY environment variable to true , you can skip the TLS certificate validation. The default value is false . --no-color Disable the color output. Alternatively, by setting the ROX_NO_COLOR environment variable to true , you can disable the color output. The default value is false . -p , --password string Specify the password for basic authentication. Alternatively, you can set the password by using the ROX_ADMIN_PASSWORD environment variable. --plaintext Use an unencrypted connection. Alternatively, by setting the ROX_PLAINTEXT environment variable to true , you can enable an unencrypted connection. The default value is false . -s , --server-name string Set the TLS server name to use for SNI. Alternatively, you can set the server name by using the ROX_SERVER_NAME environment variable. --token-file string Use the API token provided in the specified file for authentication. Alternatively, you can set the token by using the ROX_API_TOKEN environment variable. Note These options are applicable to all the sub-commands of the roxctl sensor command. 7.12.2. roxctl sensor generate Generate files to deploy RHACS services in secured clusters. Usage USD roxctl sensor generate [flags] Table 7.76. Options Option Description --admission-controller-disable-bypass Disable the bypass annotations for the admission controller. The default value is false . --admission-controller-enforce-on-creates Dynamic enable for enforcing on object creation in the admission controller. The default value is false . --admission-controller-enforce-on-updates Enable dynamic enforcement of object updates in the admission controller. The default value is false . --admission-controller-listen-on-creates Configure the admission controller webhook to listen to deployment creation. The default value is false . --admission-controller-listen-on-updates Configure the admission controller webhook to listen to deployment updates. The default value is false . --admission-controller-scan-inline Get scans inline when using the admission controller. The default value is false . --admission-controller-timeout int32 Set the timeout in seconds for the admission controller. The default value is 3 . --central string Set the endpoint to which you want to connect Sensor. The default value is central.stackrox:443 . --collection-method collection method Specify the collection method that you want to use for runtime support. Collection methods include none , default , ebpf and core_bpf . The default value is default . --collector-image-repository string Set the image repository that you want to use to deploy Collector. If not specified, a default value corresponding to the effective --main-image repository value is derived. 
--continue-if-exists Continue with downloading the sensor bundle even if the cluster already exists. The default value is false . --create-upgrader-sa Decide whether to create the upgrader service account with cluster-admin privileges to facilitate automated sensor upgrades. The default value is true . --disable-tolerations Disable tolerations for tainted nodes. The default value is false . --enable-pod-security-policies Create PodSecurityPolicy resources. The default value is true . --istio-support string Generate deployment files that support the specified Istio version. Valid versions include 1.0 , 1.1 , 1.2 , 1.3 , 1.4 , 1.5 , 1.6 , 1.7 . --main-image-repository string Specify the image repository that you want to use to deploy Sensor. If not specified, a default value is used. --name string Set the cluster name to identify the cluster. --output-dir string Set the output directory for the bundle contents. The default value is an automatically generated directory name inside the current directory. --slim-collector string[="true"] Use Collector-slim in the deployment bundle. Valid values include auto , true , and false . The default value is auto . -t , --timeout duration Set the timeout for API requests representing the maximum duration of a request. The default value is 5m0s . 7.12.2.1. roxctl sensor generate k8s Generate the required files to deploy RHACS services in a Kubernetes cluster. Usage USD roxctl sensor generate k8s [flags] Table 7.77. Options Option Description --admission-controller-listen-on-events Enable admission controller webhook to listen to Kubernetes events. The default value is true . 7.12.2.2. roxctl sensor generate openshift Generate the required files to deploy RHACS services in a Red Hat OpenShift cluster. Usage USD roxctl sensor generate openshift [flags] Table 7.78. Options Option Description `--admission-controller-listen-on-events false true auto[=true]` Enable or disable the admission controller webhook to listen to Kubernetes events . The default value is auto . `--disable-audit-logs false true auto[=true]` Enable or disable audit log collection for runtime detection. The default value is auto . --openshift-version int Specify the Red Hat OpenShift major version for which you want to generate the deployment files. 7.12.3. roxctl sensor get-bundle Download a bundle with the files to deploy RHACS services into a cluster. Usage USD roxctl sensor get-bundle <cluster_details> [flags] 1 1 For <cluster_details> , specify the cluster name or ID. Table 7.79. Options Option Description --create-upgrader-sa Specify whether to create the upgrader service account with cluster-admin privileges for automated Sensor upgrades. The default value is true . --istio-support string Generate deployment files that support the specified Istio version. Valid versions include 1.0 , 1.1 , 1.2 , 1.3 , 1.4 , 1.5 , 1.6 , and 1.7 . --output-dir string Specify the output directory for the bundle contents. The default value is an automatically generated directory name inside the current directory. --slim-collector string[="true"] Use Collector-slim in the deployment bundle. Valid values include auto , true and false . The default value is auto . -t , --timeout duration Set the timeout for API requests representing the maximum duration of a request. The default value is 5m0s . 7.12.4. roxctl sensor generate-certs Download a YAML file with renewed certificates for Sensor, Collector, and Admission controller. 
Usage USD roxctl sensor generate-certs <cluster_details> [flags] 1 1 For <cluster_details> , specify the cluster name or ID. Table 7.80. Options Option Description --output-dir string Specify the output directory for the YAML file. The default value is . . 7.13. roxctl version Display the current roxctl version. Usage USD roxctl version [flags] 7.13.1. roxctl version command options The roxctl version command supports the following option: Option Description --json Display the extended version information as JSON. The default value is false . 7.13.2. roxctl version command options inherited from the parent command The roxctl version command supports the following options inherited from the parent roxctl command: Option Description --ca string Specify a custom CA certificate file path for secure connections. Alternatively, you can specify the file path by using the ROX_CA_CERT_FILE environment variable. --direct-grpc Set --direct-grpc for improved connection performance. Alternatively, by setting the ROX_DIRECT_GRPC_CLIENT environment variable to true , you can enable direct gRPC . The default value is false . -e , --endpoint string Set the endpoint for the service to contact. Alternatively, you can set the endpoint by using the ROX_ENDPOINT environment variable. The default value is localhost:8443 . --force-http1 Force the use of HTTP/1 for all connections. Alternatively, by setting the ROX_CLIENT_FORCE_HTTP1 environment variable to true , you can force the use of HTTP/1. The default value is false . --insecure Enable insecure connection options. Alternatively, by setting the ROX_INSECURE_CLIENT environment variable to true , you can enable insecure connection options. The default value is false . --insecure-skip-tls-verify Skip the TLS certificate validation. Alternatively, by setting the ROX_INSECURE_CLIENT_SKIP_TLS_VERIFY environment variable to true , you can skip the TLS certificate validation. The default value is false . --no-color Disable the color output. Alternatively, by setting the ROX_NO_COLOR environment variable to true , you can disable the color output. The default value is false . -p , --password string Specify the password for basic authentication. Alternatively, you can set the password by using the ROX_ADMIN_PASSWORD environment variable. --plaintext Use an unencrypted connection. Alternatively, by setting the ROX_PLAINTEXT environment variable to true , you can enable an unencrypted connection. The default value is false . -s , --server-name string Set the TLS server name to use for SNI. Alternatively, you can set the server name by using the ROX_SERVER_NAME environment variable. --token-file string Use the API token provided in the specified file for authentication. Alternatively, you can set the token by using the ROX_API_TOKEN environment variable. | [
"roxctl [command] [flags]",
"roxctl central [command] [flags]",
"roxctl central backup [flags]",
"roxctl central cert [flags]",
"roxctl central crs [command] [flags]",
"roxctl central crs generate <crs name> [flags]",
"roxctl central crs list [flags]",
"roxctl central crs revoke <CRS unique identifier or name> [flags]",
"roxctl central login [flags]",
"roxctl central whoami [flags]",
"roxctl central db [flags]",
"roxctl central db restore <file> [flags] 1",
"roxctl central db generate [flags]",
"roxctl central db generate k8s [flags]",
"roxctl central db restore cancel [flags]",
"roxctl central db restore status [flags]",
"roxctl central db generate k8s pvc [flags]",
"roxctl central db generate openshift [flags]",
"roxctl central db generate k8s hostpath [flags]",
"roxctl central db generate openshift pvc [flags]",
"roxctl central db generate openshift hostpath [flags]",
"roxctl central debug [flags]",
"roxctl central debug db [flags]",
"roxctl central debug log [flags]",
"roxctl central debug dump [flags]",
"roxctl central debug db stats [flags]",
"roxctl central debug authz-trace [flags]",
"roxctl central debug db stats reset [flags]",
"roxctl central debug download-diagnostics [flags]",
"roxctl central generate [flags]",
"roxctl central generate k8s [flags]",
"roxctl central generate k8s pvc [flags]",
"roxctl central generate openshift [flags]",
"roxctl central generate interactive [flags]",
"roxctl central generate k8s hostpath [flags]",
"roxctl central generate openshift pvc [flags]",
"roxctl central generate openshift hostpath [flags]",
"roxctl central init-bundles [flag]",
"roxctl central init-bundles list [flags]",
"roxctl central init-bundles revoke <init_bundle_ID or name> [<init_bundle_ID or name> ...] [flags] 1",
"roxctl central init-bundles fetch-ca [flags]",
"roxctl central init-bundles generate <init_bundle_name> [flags] 1",
"roxctl central userpki [flags]",
"roxctl central userpki list [flags]",
"roxctl central userpki create name [flags]",
"roxctl central userpki delete id|name [flags]",
"roxctl cluster [command] [flags]",
"roxctl cluster delete [flags]",
"roxctl collector [command] [flags]",
"roxctl collector support-packages [flags]",
"roxctl collector support-packages upload [flags]",
"roxctl completion [bash|zsh|fish|powershell]",
"roxctl declarative-config [command] [flags]",
"roxctl declarative-config lint [flags]",
"roxctl declarative-config create [flags]",
"roxctl declarative-config create role [flags]",
"roxctl declarative-config create notifier [flags]",
"roxctl declarative-config create access-scope [flags]",
"roxctl declarative-config create auth-provider [flags]",
"roxctl declarative-config create permission-set [flags]",
"roxctl declarative-config create notifier splunk [flags]",
"roxctl declarative-config create notifier generic [flags]",
"roxctl declarative-config create auth-provider iap [flags]",
"roxctl declarative-config create auth-provider oidc [flags]",
"roxctl declarative-config create auth-provider saml [flags]",
"roxctl declarative-config create auth-provider userpki [flags]",
"roxctl declarative-config create auth-provider openshift-auth [flags]",
"roxctl deployment [command] [flags]",
"roxctl deployment check [flags]",
"roxctl helm [command] [flags]",
"roxctl helm output <central_services or secured_cluster_services> [flags] 1",
"roxctl helm derive-local-values --output <path> \\ 1 <central_services> [flags] 2",
"roxctl image [command] [flags]",
"roxctl image sbom [flags]",
"roxctl image scan [flags]",
"roxctl image check [flags]",
"roxctl netpol [command] [flags]",
"roxctl netpol generate <folder_path> [flags] 1",
"roxctl netpol connectivity [flags]",
"roxctl netpol connectivity map <folder_path> [flags] 1",
"roxctl netpol connectivity diff [flags]",
"roxctl scanner [command] [flags]",
"roxctl scanner generate [flags]",
"roxctl scanner upload-db [flags]",
"roxctl scanner download-db [flags]",
"roxctl sensor [command] [flags]",
"roxctl sensor generate [flags]",
"roxctl sensor generate k8s [flags]",
"roxctl sensor generate openshift [flags]",
"roxctl sensor get-bundle <cluster_details> [flags] 1",
"roxctl sensor generate-certs <cluster_details> [flags] 1",
"roxctl version [flags]"
]
| https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.7/html/roxctl_cli/roxctl-cli-command-reference |
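Illustrative usage (not part of the reference above): a minimal CI step that runs a build-time policy check with roxctl image check might look like the following sketch. The Central endpoint, the token placeholder, and the image reference are assumptions rather than values taken from this reference.
USD export ROX_ENDPOINT="central.example.com:443"
USD export ROX_API_TOKEN="<api_token>"
USD roxctl image check --image quay.io/example/app:1.2.3 --output json --retries 3
If a violated policy is configured to fail builds, the command returns a non-zero exit status, so the pipeline step fails accordingly.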
Chapter 5. Installing a three-node cluster on AWS | Chapter 5. Installing a three-node cluster on AWS In OpenShift Container Platform version 4.18, you can install a three-node cluster on Amazon Web Services (AWS). A three-node cluster consists of three control plane machines, which also act as compute machines. This type of cluster provides a smaller, more resource efficient cluster, for cluster administrators and developers to use for testing, development, and production. You can install a three-node cluster using either installer-provisioned or user-provisioned infrastructure. Note Deploying a three-node cluster using an AWS Marketplace image is not supported. 5.1. Configuring a three-node cluster You configure a three-node cluster by setting the number of worker nodes to 0 in the install-config.yaml file before deploying the cluster. Setting the number of worker nodes to 0 ensures that the control plane machines are schedulable. This allows application workloads to be scheduled to run from the control plane nodes. Note Because application workloads run from control plane nodes, additional subscriptions are required, as the control plane nodes are considered to be compute nodes. Prerequisites You have an existing install-config.yaml file. Procedure Set the number of compute replicas to 0 in your install-config.yaml file, as shown in the following compute stanza: Example install-config.yaml file for a three-node cluster apiVersion: v1 baseDomain: example.com compute: - name: worker platform: {} replicas: 0 # ... If you are deploying a cluster with user-provisioned infrastructure: After you create the Kubernetes manifest files, make sure that the spec.mastersSchedulable parameter is set to true in cluster-scheduler-02-config.yml file. You can locate this file in <installation_directory>/manifests . For more information, see "Creating the Kubernetes manifest and Ignition config files" in "Installing a cluster on user-provisioned infrastructure in AWS by using CloudFormation templates". Do not create additional worker nodes. Example cluster-scheduler-02-config.yml file for a three-node cluster apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: null name: cluster spec: mastersSchedulable: true policy: name: "" status: {} 5.2. steps Installing a cluster on AWS with customizations Installing a cluster on user-provisioned infrastructure in AWS by using CloudFormation templates | [
"apiVersion: v1 baseDomain: example.com compute: - name: worker platform: {} replicas: 0",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: null name: cluster spec: mastersSchedulable: true policy: name: \"\" status: {}"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_aws/installing-aws-three-node |
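For orientation, a hedged sketch of the installer-provisioned flow described above follows; <installation_directory> is a placeholder, and the manifests step is only needed if you want to inspect or adjust the generated cluster-scheduler-02-config.yml before installation.
USD openshift-install create manifests --dir <installation_directory>
USD grep mastersSchedulable <installation_directory>/manifests/cluster-scheduler-02-config.yml
USD openshift-install create cluster --dir <installation_directory> --log-level=info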
Chapter 2. Launching applications in GNOME | Chapter 2. Launching applications in GNOME You can launch installed applications using several different methods in the GNOME desktop environment. 2.1. Launching an application in the standard GNOME session This procedure launches a graphical application in the GNOME desktop environment. Prerequisites You are using the standard GNOME session. Procedure Open the Activities Overview screen in either of the following ways: Click Activities in the top panel. Press the Super key, which is usually labeled with the Windows logo, ⌘ , or 🔍 . Find the application in either of the following ways: Click the Show Applications icon in the bottom horizontal bar. Type the name of the required application in the search entry. Click the application in the displayed list. 2.2. Launching an application in GNOME Classic This procedure launches a graphical application in the GNOME Classic desktop environment. Prerequisites You are using the GNOME Classic session. Procedure Open the Applications menu in the top panel. Choose the required application from the available categories, which can include: Favorites Accessories Graphics Internet Office Sound & Video System Tools Utilities 2.3. Launching an application in GNOME using a command This procedure launches a graphical application in GNOME by entering a command. Prerequisites You know the command that starts the application. Procedure Open a command prompt in either of the following ways: Open a terminal. Press the Alt + F2 shortcut to open the Enter a Command screen. Type the application command in the command prompt. Confirm the command by pressing Enter . 2.4. Launching an application automatically on login You can set applications to launch automatically on login by using the Tweaks tool. Tweaks is a tool to customize the GNOME Shell environment for a particular user. Prerequisites You have installed gnome-tweaks on your system. For more details, see Installing software in GNOME . You have installed the application that you want to launch at login. Procedure Open Tweaks . For more details, see Launching applications in GNOME . Select Startup Applications in the left side bar. Click the plus sign button ( + ). Select an application from the list of available applications and click Add . Verification Open Tweaks . Select Startup Applications in the left side bar. The list of applications launched at login appears in the center section. Additional resources For more information about launching applications, see Launching applications in GNOME | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/getting_started_with_the_gnome_desktop_environment/assembly_launching-applications-in-gnome_getting-started-with-the-gnome-desktop-environment
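As a small assumed example for Section 2.3, either of the following terminal commands launches a graphical application; the application names are illustrative and depend on what is installed.
USD firefox &
USD gnome-calculator &
The trailing & returns the shell prompt immediately while the application keeps running.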
Chapter 6. Managing DNS by using Capsule | Chapter 6. Managing DNS by using Capsule Satellite can manage DNS records by using your Capsule. DNS management includes updating and removing DNS records from existing DNS zones. A Capsule has multiple DNS providers that you can use to integrate Satellite with your existing DNS infrastructure or deploy a new one. After you have enabled DNS, your Capsule can manipulate any DNS server that complies with RFC 2136 by using the dns_nsupdate provider. Other providers offer more direct integration, such as dns_infoblox for Infoblox . Available DNS providers dhcp_infoblox - For more information, see Chapter 7, Using Infoblox as DHCP and DNS providers . dns_nsupdate - Dynamic DNS update using nsupdate. For more information, see Section 6.1, "Configuring dns_nsupdate" . dns_nsupdate_gss - Dynamic DNS update with GSS-TSIG. For more information, see Section 4.4.1, "Configuring dynamic DNS update with GSS-TSIG authentication" . 6.1. Configuring dns_nsupdate The dns_nsupdate DNS provider manages DNS records using the nsupdate utility. You can use dns_nsupdate with any DNS server compatible with RFC 2136 . By default, dns_nsupdate installs the ISC BIND server. For installation without ISC BIND, see Section 4.1, "Configuring Capsule Server with external DNS" . Procedure Configure dns_nsupdate : | [
"satellite-installer --foreman-proxy-dns true --foreman-proxy-dns-provider nsupdate --foreman-proxy-dns-managed true --foreman-proxy-dns-zone example.com --foreman-proxy-dns-reverse 2.0.192.in-addr.arpa"
]
| https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/installing_capsule_server/managing-dns-by-using-capsule |
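To illustrate the RFC 2136 mechanism that the dns_nsupdate provider drives, the following manual nsupdate session is a hedged sketch; the key file path, DNS server, zone, and record are assumptions, and in normal operation Capsule performs these updates for you.
USD nsupdate -k /etc/rndc.key
> server dns.example.com
> zone example.com
> update add host1.example.com 3600 IN A 192.0.2.10
> send
> quit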
Scalability and performance | Scalability and performance OpenShift Container Platform 4.13 Scaling your OpenShift Container Platform cluster and tuning performance in production environments Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/scalability_and_performance/index |
Chapter 7. Known issues | Chapter 7. Known issues The known issues for running .NET on Red Hat Enterprise Linux (RHEL) include the following: it does not run on earlier versions of RHEL. dotnet dev-certs https --trust does not work on RHEL. .NET supports the creation of HTTPS certificate through dotnet dev-certs https , but it does not support trusting them through dotnet dev-certs https --trust . The client that connects to the ASP.NET Core application, such as curl or Firefox, will warn about the untrusted self-signed certificate. To work around this in a browser such as Firefox, ignore the warning and trust the certificate explicitly when the warning about the untrusted certificate comes up. Command-line tools support flags to ignore untrusted certificates. For curl , use the --insecure flag. For wget , use the --no-check-certificate flag. There are no NuGet packages for s390x on nuget.org. Using the rhel.8-s390x or linux-s390x runtime identifier can cause some dotnet commands to fail when they try to obtain these packages. These commands are either not fully supported on s390x as described in the other known issues, or the issue can be fixed by not specifying the runtime identifier. Single file applications are not supported on s390x . PublishReadyToRun/crossgen is not supported on s390x . .NET 6.0 on s390x does not understand memory and cpu limits in containers. In such environments, it is possible that .NET 6.0 will try to use more memory than allocated to the container, causing the container to get killed or restarted in OpenShift Container Platform. As a workaround you can manually specify a heap limit through an environment variable: MONO_GC_PARAMS=max-heap-size=<limit> . You should set the limit to 75% of the memory allocated to the container. For example, if the container memory limit is 300MB, set MONO_GC_PARAMS=max-heap-size=225M . The default version of the Microsoft.NET.Test.Sdk package in the test project templates ( xunit , nunit , mstest ) is unusable on s390x . Trying to build/run tests will fail with a "System.NotSupportedException: Specified method is not supported" exception. If you are trying to run test on s390x , update the version of the Microsoft.NET.Test.Sdk package to at least 17.0.0. OmniSharp, the language server used by IDEs like Visual Studio Code, is not available on s390x . RHEL 9 has disabled several weak security algorithms to improve security. Some .NET APIs using these algorithms will fail at runtime with CryptographicExceptions. If you really must use the weak algorithms and risk compromising security, you can loosen the system's security policies by using: or For more information, see the "Security" section in the overview of major changes in the RHEL 9 release notes . Strong Naming will not work out of the box on RHEL 9. RHEL 9 has disabled the use of SHA-1 in the default configuration. .NET uses SHA-1+RSA to identify assemblies that have been signed with a strong name. The explicit SHA-1+RSA algorithm combination is a part of the ECMA-335 specification involving strong naming. However, given the recent attacks against SHA-1, RHEL 9 has deprecated the use of SHA-1 (when combined with RSA) to improve security across the entire operating system. This means that any use of strong naming, including verification at build time, will fail. The OpenSSL errors on RHEL 9 will indicate an invalid digest algorithm. 
For example: There are several possible workarounds: Enable support for SHA-1+RSA by loosening the system's security policies: Note This will not work when FIPS is enabled. In FIPS mode, SHA-1 is completely disallowed. Switch to Public Signing . To do this, you must modify the project files to set up a number of properties: The NTLM technology is considered insecure in RHEL 9. The gss-ntlmssp package, which provides NTLM authentication support, has been removed from RHEL 9. That means .NET in RHEL 9 cannot authenticate against NTLM. If you use NTLM authentication, use another mechanism to authenticate. For more details, see the Identity Management section of Considerations in Adopting RHEL 9 . | [
"update-crypto-policies --set DEFAULT:SHA1",
"update-crypto-policies --set LEGACY\"",
"error : Unhandled exception. Interop+Crypto+OpenSslCryptographicException: error:03000098:digital envelope routines::invalid digest",
"update-crypto-policies --set DEFAULT:SHA1",
"<PropertyGroup> <AssemblyOriginatorKeyFil>USD(MSBuildThisFileDirectory)Key.snk</AssemblyOriginatorKeyFile> <SignAssembly>true</SignAssembly> <PublicSign Condition=\"'USD(OS)' != 'Windows_NT'\">true</PublicSign> </PropertyGroup>"
]
| https://docs.redhat.com/en/documentation/net/6.0/html/release_notes_for_.net_6.0_rpm_packages/known-issues_release-notes-for-dotnet-rpms |
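A minimal sketch of the MONO_GC_PARAMS workaround described in the known issue above, applied to an OpenShift Container Platform deployment. The deployment name myapp and the 300MB memory limit are placeholders; size max-heap-size to roughly 75% of whatever memory limit your container actually has.

oc set resources deployment/myapp --limits=memory=300Mi          # the container memory limit the heap size is derived from
oc set env deployment/myapp MONO_GC_PARAMS=max-heap-size=225M    # 75% of the 300MB limit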
5.10. Migration | 5.10. Migration The Red Hat Virtualization Manager uses migration to enforce load balancing policies for a cluster. Virtual machine migration takes place according to the load balancing policy for a cluster and current demands on hosts within a cluster. Migration can also be configured to automatically occur when a host is fenced or moved to maintenance mode. The Red Hat Virtualization Manager first migrates virtual machines with the lowest CPU utilization. This is calculated as a percentage, and does not take into account RAM usage or I/O operations, except as I/O operations affect CPU utilization. If there is more than one virtual machine with the same CPU usage, the one that is migrated first is the first virtual machine returned by the database query run by the Red Hat Virtualization Manager to determine virtual machine CPU usage. Virtual machine migration has the following limitations by default: A bandwidth limit of 52 MiBps is imposed on each virtual machine migration. A migration will time out after 64 seconds per GB of virtual machine memory. A migration will abort if progress is stalled for 240 seconds. Concurrent outgoing migrations are limited to one per CPU core per host, or 2, whichever is smaller. See Understanding live migration and the migration_max_bandwidth and max_outgoing_migrations parameters in vdsm.conf for details about tuning migration settings. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/technical_reference/migration
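The migration limits listed above map to host-side VDSM settings. The following sketch assumes the values live in the [vars] section of /etc/vdsm/vdsm.conf; the section name and the values shown are illustrative only, and cluster-level migration policies set in the Administration Portal can override per-host tuning.

# /etc/vdsm/vdsm.conf (illustrative)
[vars]
migration_max_bandwidth = 52    # MiBps cap per outgoing migration
max_outgoing_migrations = 2     # concurrent outgoing migrations per host

# Restart VDSM on the host for the change to take effect:
systemctl restart vdsmd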
Chapter 6. Kafka Connect configuration properties | Chapter 6. Kafka Connect configuration properties config.storage.topic Type: string Importance: high The name of the Kafka topic where connector configurations are stored. group.id Type: string Importance: high A unique string that identifies the Connect cluster group this worker belongs to. key.converter Type: class Importance: high Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the keys in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro. offset.storage.topic Type: string Importance: high The name of the Kafka topic where source connector offsets are stored. status.storage.topic Type: string Importance: high The name of the Kafka topic where connector and task status are stored. value.converter Type: class Importance: high Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro. bootstrap.servers Type: list Default: localhost:9092 Importance: high A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping-this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form host1:port1,host2:port2,... . Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down). exactly.once.source.support Type: string Default: disabled Valid Values: (case insensitive) [DISABLED, ENABLED, PREPARING] Importance: high Whether to enable exactly-once support for source connectors in the cluster by using transactions to write source records and their source offsets, and by proactively fencing out old task generations before bringing up new ones. To enable exactly-once source support on a new cluster, set this property to 'enabled'. To enable support on an existing cluster, first set to 'preparing' on every worker in the cluster, then set to 'enabled'. A rolling upgrade may be used for both changes. For more information on this feature, see the exactly-once source support documentation . heartbeat.interval.ms Type: int Default: 3000 (3 seconds) Importance: high The expected time between heartbeats to the group coordinator when using Kafka's group management facilities. Heartbeats are used to ensure that the worker's session stays active and to facilitate rebalancing when new members join or leave the group. The value must be set lower than session.timeout.ms , but typically should be set no higher than 1/3 of that value. It can be adjusted even lower to control the expected time for normal rebalances. rebalance.timeout.ms Type: int Default: 60000 (1 minute) Importance: high The maximum allowed time for each worker to join the group once a rebalance has begun. 
This is basically a limit on the amount of time needed for all tasks to flush any pending data and commit offsets. If the timeout is exceeded, then the worker will be removed from the group, which will cause offset commit failures. session.timeout.ms Type: int Default: 10000 (10 seconds) Importance: high The timeout used to detect worker failures. The worker sends periodic heartbeats to indicate its liveness to the broker. If no heartbeats are received by the broker before the expiration of this session timeout, then the broker will remove the worker from the group and initiate a rebalance. Note that the value must be in the allowable range as configured in the broker configuration by group.min.session.timeout.ms and group.max.session.timeout.ms . ssl.key.password Type: password Default: null Importance: high The password of the private key in the key store file or the PEM key specified in 'ssl.keystore.key'. ssl.keystore.certificate.chain Type: password Default: null Importance: high Certificate chain in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with a list of X.509 certificates. ssl.keystore.key Type: password Default: null Importance: high Private key in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with PKCS#8 keys. If the key is encrypted, key password must be specified using 'ssl.key.password'. ssl.keystore.location Type: string Default: null Importance: high The location of the key store file. This is optional for client and can be used for two-way authentication for client. ssl.keystore.password Type: password Default: null Importance: high The store password for the key store file. This is optional for client and only needed if 'ssl.keystore.location' is configured. Key store password is not supported for PEM format. ssl.truststore.certificates Type: password Default: null Importance: high Trusted certificates in the format specified by 'ssl.truststore.type'. Default SSL engine factory supports only PEM format with X.509 certificates. ssl.truststore.location Type: string Default: null Importance: high The location of the trust store file. ssl.truststore.password Type: password Default: null Importance: high The password for the trust store file. If a password is not set, trust store file configured will still be used, but integrity checking is disabled. Trust store password is not supported for PEM format. client.dns.lookup Type: string Default: use_all_dns_ips Valid Values: [use_all_dns_ips, resolve_canonical_bootstrap_servers_only] Importance: medium Controls how the client uses DNS lookups. If set to use_all_dns_ips , connect to each returned IP address in sequence until a successful connection is established. After a disconnection, the next IP is used. Once all IPs have been used once, the client resolves the IP(s) from the hostname again (both the JVM and the OS cache DNS name lookups, however). If set to resolve_canonical_bootstrap_servers_only , resolve each bootstrap address into a list of canonical names. After the bootstrap phase, this behaves the same as use_all_dns_ips . connections.max.idle.ms Type: long Default: 540000 (9 minutes) Importance: medium Close idle connections after the number of milliseconds specified by this config. connector.client.config.override.policy Type: string Default: All Importance: medium Class name or alias of implementation of ConnectorClientConfigOverridePolicy . Defines what client configurations can be overridden by the connector.
The default implementation is All , meaning connector configurations can override all client properties. The other possible policies in the framework include None to disallow connectors from overriding client properties, and Principal to allow connectors to override only client principals. receive.buffer.bytes Type: int Default: 32768 (32 kibibytes) Valid Values: [-1,... ] Importance: medium The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used. request.timeout.ms Type: int Default: 40000 (40 seconds) Valid Values: [0,... ] Importance: medium The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted. sasl.client.callback.handler.class Type: class Default: null Importance: medium The fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface. sasl.jaas.config Type: password Default: null Importance: medium JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format is described here . The format for the value is: loginModuleClass controlFlag (optionName=optionValue)*; . For brokers, the config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule required;. sasl.kerberos.service.name Type: string Default: null Importance: medium The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config. sasl.login.callback.handler.class Type: class Default: null Importance: medium The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For brokers, login callback handler config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.callback.handler.class=com.example.CustomScramLoginCallbackHandler. sasl.login.class Type: class Default: null Importance: medium The fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.class=com.example.CustomScramLogin. sasl.mechanism Type: string Default: GSSAPI Importance: medium SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism. sasl.oauthbearer.jwks.endpoint.url Type: string Default: null Importance: medium The OAuth/OIDC provider URL from which the provider's JWKS (JSON Web Key Set) can be retrieved. The URL can be HTTP(S)-based or file-based. If the URL is HTTP(S)-based, the JWKS data will be retrieved from the OAuth/OIDC provider via the configured URL on broker startup. All then-current keys will be cached on the broker for incoming requests. If an authentication request is received for a JWT that includes a "kid" header claim value that isn't yet in the cache, the JWKS endpoint will be queried again on demand. 
However, the broker polls the URL every sasl.oauthbearer.jwks.endpoint.refresh.ms milliseconds to refresh the cache with any forthcoming keys before any JWT requests that include them are received. If the URL is file-based, the broker will load the JWKS file from a configured location on startup. In the event that the JWT includes a "kid" header value that isn't in the JWKS file, the broker will reject the JWT and authentication will fail. sasl.oauthbearer.token.endpoint.url Type: string Default: null Importance: medium The URL for the OAuth/OIDC identity provider. If the URL is HTTP(S)-based, it is the issuer's token endpoint URL to which requests will be made to login based on the configuration in sasl.jaas.config. If the URL is file-based, it specifies a file containing an access token (in JWT serialized form) issued by the OAuth/OIDC identity provider to use for authorization. security.protocol Type: string Default: PLAINTEXT Valid Values: (case insensitive) [SASL_SSL, PLAINTEXT, SSL, SASL_PLAINTEXT] Importance: medium Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL. send.buffer.bytes Type: int Default: 131072 (128 kibibytes) Valid Values: [-1,... ] Importance: medium The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used. ssl.enabled.protocols Type: list Default: TLSv1.2,TLSv1.3 Importance: medium The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. With the default value for Java 11, clients and servers will prefer TLSv1.3 if both support it and fallback to TLSv1.2 otherwise (assuming both support at least TLSv1.2). This default should be fine for most cases. Also see the config documentation for ssl.protocol . ssl.keystore.type Type: string Default: JKS Importance: medium The file format of the key store file. This is optional for client. The values currently supported by the default ssl.engine.factory.class are [JKS, PKCS12, PEM]. ssl.protocol Type: string Default: TLSv1.3 Importance: medium The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. This value should be fine for most use cases. Allowed values in recent JVMs are 'TLSv1.2' and 'TLSv1.3'. 'TLS', 'TLSv1.1', 'SSL', 'SSLv2' and 'SSLv3' may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. With the default value for this config and 'ssl.enabled.protocols', clients will downgrade to 'TLSv1.2' if the server does not support 'TLSv1.3'. If this config is set to 'TLSv1.2', clients will not use 'TLSv1.3' even if it is one of the values in ssl.enabled.protocols and the server only supports 'TLSv1.3'. ssl.provider Type: string Default: null Importance: medium The name of the security provider used for SSL connections. Default value is the default security provider of the JVM. ssl.truststore.type Type: string Default: JKS Importance: medium The file format of the trust store file. The values currently supported by the default ssl.engine.factory.class are [JKS, PKCS12, PEM]. worker.sync.timeout.ms Type: int Default: 3000 (3 seconds) Importance: medium When the worker is out of sync with other workers and needs to resynchronize configurations, wait up to this amount of time before giving up, leaving the group, and waiting a backoff period before rejoining. 
worker.unsync.backoff.ms Type: int Default: 300000 (5 minutes) Importance: medium When the worker is out of sync with other workers and fails to catch up within worker.sync.timeout.ms, leave the Connect cluster for this long before rejoining. access.control.allow.methods Type: string Default: "" Importance: low Sets the methods supported for cross origin requests by setting the Access-Control-Allow-Methods header. The default value of the Access-Control-Allow-Methods header allows cross origin requests for GET, POST and HEAD. access.control.allow.origin Type: string Default: "" Importance: low Value to set the Access-Control-Allow-Origin header to for REST API requests.To enable cross origin access, set this to the domain of the application that should be permitted to access the API, or '*' to allow access from any domain. The default value only allows access from the domain of the REST API. admin.listeners Type: list Default: null Valid Values: List of comma-separated URLs, ex: http://localhost:8080,https://localhost:8443 . Importance: low List of comma-separated URIs the Admin REST API will listen on. The supported protocols are HTTP and HTTPS. An empty or blank string will disable this feature. The default behavior is to use the regular listener (specified by the 'listeners' property). auto.include.jmx.reporter Type: boolean Default: true Importance: low Deprecated. Whether to automatically include JmxReporter even if it's not listed in metric.reporters . This configuration will be removed in Kafka 4.0, users should instead include org.apache.kafka.common.metrics.JmxReporter in metric.reporters in order to enable the JmxReporter. client.id Type: string Default: "" Importance: low An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging. config.providers Type: list Default: "" Importance: low Comma-separated names of ConfigProvider classes, loaded and used in the order specified. Implementing the interface ConfigProvider allows you to replace variable references in connector configurations, such as for externalized secrets. config.storage.replication.factor Type: short Default: 3 Valid Values: Positive number not larger than the number of brokers in the Kafka cluster, or -1 to use the broker's default Importance: low Replication factor used when creating the configuration storage topic. connect.protocol Type: string Default: sessioned Valid Values: [eager, compatible, sessioned] Importance: low Compatibility mode for Kafka Connect Protocol. header.converter Type: class Default: org.apache.kafka.connect.storage.SimpleHeaderConverter Importance: low HeaderConverter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the header values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro. By default, the SimpleHeaderConverter is used to serialize header values to strings and deserialize them by inferring the schemas. inter.worker.key.generation.algorithm Type: string Default: HmacSHA256 Valid Values: Any KeyGenerator algorithm supported by the worker JVM Importance: low The algorithm to use for generating internal request keys. 
The algorithm 'HmacSHA256' will be used as a default on JVMs that support it; on other JVMs, no default is used and a value for this property must be manually specified in the worker config. inter.worker.key.size Type: int Default: null Importance: low The size of the key to use for signing internal requests, in bits. If null, the default key size for the key generation algorithm will be used. inter.worker.key.ttl.ms Type: int Default: 3600000 (1 hour) Valid Values: [0,... ,2147483647] Importance: low The TTL of generated session keys used for internal request validation (in milliseconds). inter.worker.signature.algorithm Type: string Default: HmacSHA256 Valid Values: Any MAC algorithm supported by the worker JVM Importance: low The algorithm used to sign internal requests. The algorithm 'HmacSHA256' will be used as a default on JVMs that support it; on other JVMs, no default is used and a value for this property must be manually specified in the worker config. inter.worker.verification.algorithms Type: list Default: HmacSHA256 Valid Values: A list of one or more MAC algorithms, each supported by the worker JVM Importance: low A list of permitted algorithms for verifying internal requests, which must include the algorithm used for the inter.worker.signature.algorithm property. The algorithm(s) '[HmacSHA256]' will be used as a default on JVMs that provide them; on other JVMs, no default is used and a value for this property must be manually specified in the worker config. listeners Type: list Default: http://:8083 Valid Values: List of comma-separated URLs, ex: http://localhost:8080,https://localhost:8443 . Importance: low List of comma-separated URIs the REST API will listen on. The supported protocols are HTTP and HTTPS. Specify hostname as 0.0.0.0 to bind to all interfaces. Leave hostname empty to bind to default interface. Examples of legal listener lists: HTTP://myhost:8083,HTTPS://myhost:8084. metadata.max.age.ms Type: long Default: 300000 (5 minutes) Valid Values: [0,... ] Importance: low The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions. metadata.recovery.strategy Type: string Default: none Valid Values: (case insensitive) [REBOOTSTRAP, NONE] Importance: low Controls how the client recovers when none of the brokers known to it is available. If set to none , the client fails. If set to rebootstrap , the client repeats the bootstrap process using bootstrap.servers . Rebootstrapping is useful when a client communicates with brokers so infrequently that the set of brokers may change entirely before the client refreshes metadata. Metadata recovery is triggered when all last-known brokers appear unavailable simultaneously. Brokers appear unavailable when disconnected and no current retry attempt is in-progress. Consider increasing reconnect.backoff.ms and reconnect.backoff.max.ms and decreasing socket.connection.setup.timeout.ms and socket.connection.setup.timeout.max.ms for the client. metric.reporters Type: list Default: "" Importance: low A list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics. metrics.num.samples Type: int Default: 2 Valid Values: [1,... ] Importance: low The number of samples maintained to compute metrics.
metrics.recording.level Type: string Default: INFO Valid Values: [INFO, DEBUG] Importance: low The highest recording level for metrics. metrics.sample.window.ms Type: long Default: 30000 (30 seconds) Valid Values: [0,... ] Importance: low The window of time a metrics sample is computed over. offset.flush.interval.ms Type: long Default: 60000 (1 minute) Importance: low Interval at which to try committing offsets for tasks. offset.flush.timeout.ms Type: long Default: 5000 (5 seconds) Importance: low Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt. This property has no effect for source connectors running with exactly-once support. offset.storage.partitions Type: int Default: 25 Valid Values: Positive number, or -1 to use the broker's default Importance: low The number of partitions used when creating the offset storage topic. offset.storage.replication.factor Type: short Default: 3 Valid Values: Positive number not larger than the number of brokers in the Kafka cluster, or -1 to use the broker's default Importance: low Replication factor used when creating the offset storage topic. plugin.discovery Type: string Default: hybrid_warn Valid Values: (case insensitive) [ONLY_SCAN, SERVICE_LOAD, HYBRID_WARN, HYBRID_FAIL] Importance: low Method to use to discover plugins present in the classpath and plugin.path configuration. This can be one of multiple values with the following meanings: * only_scan: Discover plugins only by reflection. Plugins which are not discoverable by ServiceLoader will not impact worker startup. * hybrid_warn: Discover plugins reflectively and by ServiceLoader. Plugins which are not discoverable by ServiceLoader will print warnings during worker startup. * hybrid_fail: Discover plugins reflectively and by ServiceLoader. Plugins which are not discoverable by ServiceLoader will cause worker startup to fail. * service_load: Discover plugins only by ServiceLoader. Faster startup than other modes. Plugins which are not discoverable by ServiceLoader may not be usable. plugin.path Type: list Default: null Importance: low List of paths separated by commas (,) that contain plugins (connectors, converters, transformations). The list should consist of top level directories that include any combination of: a) directories immediately containing jars with plugins and their dependencies b) uber-jars with plugins and their dependencies c) directories immediately containing the package directory structure of classes of plugins and their dependencies Note: symlinks will be followed to discover dependencies or plugins. Examples: plugin.path=/usr/local/share/java,/usr/local/share/kafka/plugins,/opt/connectors Do not use config provider variables in this property, since the raw path is used by the worker's scanner before config providers are initialized and used to replace variables. reconnect.backoff.max.ms Type: long Default: 1000 (1 second) Valid Values: [0,... ] Importance: low The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms. reconnect.backoff.ms Type: long Default: 50 Valid Values: [0,... 
] Importance: low The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker. This value is the initial backoff value and will increase exponentially for each consecutive connection failure, up to the reconnect.backoff.max.ms value. response.http.headers.config Type: string Default: "" Valid Values: Comma-separated header rules, where each header rule is of the form '[action] [header name]:[header value]' and optionally surrounded by double quotes if any part of a header rule contains a comma Importance: low Rules for REST API HTTP response headers. rest.advertised.host.name Type: string Default: null Importance: low If this is set, this is the hostname that will be given out to other workers to connect to. rest.advertised.listener Type: string Default: null Importance: low Sets the advertised listener (HTTP or HTTPS) which will be given to other workers to use. rest.advertised.port Type: int Default: null Importance: low If this is set, this is the port that will be given out to other workers to connect to. rest.extension.classes Type: list Default: "" Importance: low Comma-separated names of ConnectRestExtension classes, loaded and called in the order specified. Implementing the interface ConnectRestExtension allows you to inject into Connect's REST API user defined resources like filters. Typically used to add custom capability like logging, security, etc. retry.backoff.max.ms Type: long Default: 1000 (1 second) Valid Values: [0,... ] Importance: low The maximum amount of time in milliseconds to wait when retrying a request to the broker that has repeatedly failed. If provided, the backoff per client will increase exponentially for each failed request, up to this maximum. To prevent all clients from being synchronized upon retry, a randomized jitter with a factor of 0.2 will be applied to the backoff, resulting in the backoff falling within a range between 20% below and 20% above the computed value. If retry.backoff.ms is set to be higher than retry.backoff.max.ms , then retry.backoff.max.ms will be used as a constant backoff from the beginning without any exponential increase. retry.backoff.ms Type: long Default: 100 Valid Values: [0,... ] Importance: low The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios. This value is the initial backoff value and will increase exponentially for each failed request, up to the retry.backoff.max.ms value. sasl.kerberos.kinit.cmd Type: string Default: /usr/bin/kinit Importance: low Kerberos kinit command path. sasl.kerberos.min.time.before.relogin Type: long Default: 60000 Importance: low Login thread sleep time between refresh attempts. sasl.kerberos.ticket.renew.jitter Type: double Default: 0.05 Importance: low Percentage of random jitter added to the renewal time. sasl.kerberos.ticket.renew.window.factor Type: double Default: 0.8 Importance: low Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket. sasl.login.connect.timeout.ms Type: int Default: null Importance: low The (optional) value in milliseconds for the external authentication provider connection timeout. Currently applies only to OAUTHBEARER. 
sasl.login.read.timeout.ms Type: int Default: null Importance: low The (optional) value in milliseconds for the external authentication provider read timeout. Currently applies only to OAUTHBEARER. sasl.login.refresh.buffer.seconds Type: short Default: 300 Valid Values: [0,... ,3600] Importance: low The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. If a refresh would otherwise occur closer to expiration than the number of buffer seconds then the refresh will be moved up to maintain as much of the buffer time as possible. Legal values are between 0 and 3600 (1 hour); a default value of 300 (5 minutes) is used if no value is specified. This value and sasl.login.refresh.min.period.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER. sasl.login.refresh.min.period.seconds Type: short Default: 60 Valid Values: [0,... ,900] Importance: low The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between 0 and 900 (15 minutes); a default value of 60 (1 minute) is used if no value is specified. This value and sasl.login.refresh.buffer.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER. sasl.login.refresh.window.factor Type: double Default: 0.8 Valid Values: [0.5,... ,1.0] Importance: low Login refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which time it will try to refresh the credential. Legal values are between 0.5 (50%) and 1.0 (100%) inclusive; a default value of 0.8 (80%) is used if no value is specified. Currently applies only to OAUTHBEARER. sasl.login.refresh.window.jitter Type: double Default: 0.05 Valid Values: [0.0,... ,0.25] Importance: low The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. Legal values are between 0 and 0.25 (25%) inclusive; a default value of 0.05 (5%) is used if no value is specified. Currently applies only to OAUTHBEARER. sasl.login.retry.backoff.max.ms Type: long Default: 10000 (10 seconds) Importance: low The (optional) value in milliseconds for the maximum wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER. sasl.login.retry.backoff.ms Type: long Default: 100 Importance: low The (optional) value in milliseconds for the initial wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER. sasl.oauthbearer.clock.skew.seconds Type: int Default: 30 Importance: low The (optional) value in seconds to allow for differences between the time of the OAuth/OIDC identity provider and the broker. 
sasl.oauthbearer.expected.audience Type: list Default: null Importance: low The (optional) comma-delimited setting for the broker to use to verify that the JWT was issued for one of the expected audiences. The JWT will be inspected for the standard OAuth "aud" claim and if this value is set, the broker will match the value from JWT's "aud" claim to see if there is an exact match. If there is no match, the broker will reject the JWT and authentication will fail. sasl.oauthbearer.expected.issuer Type: string Default: null Importance: low The (optional) setting for the broker to use to verify that the JWT was created by the expected issuer. The JWT will be inspected for the standard OAuth "iss" claim and if this value is set, the broker will match it exactly against what is in the JWT's "iss" claim. If there is no match, the broker will reject the JWT and authentication will fail. sasl.oauthbearer.jwks.endpoint.refresh.ms Type: long Default: 3600000 (1 hour) Importance: low The (optional) value in milliseconds for the broker to wait between refreshing its JWKS (JSON Web Key Set) cache that contains the keys to verify the signature of the JWT. sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms Type: long Default: 10000 (10 seconds) Importance: low The (optional) value in milliseconds for the maximum wait between attempts to retrieve the JWKS (JSON Web Key Set) from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting. sasl.oauthbearer.jwks.endpoint.retry.backoff.ms Type: long Default: 100 Importance: low The (optional) value in milliseconds for the initial wait between JWKS (JSON Web Key Set) retrieval attempts from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting. sasl.oauthbearer.scope.claim.name Type: string Default: scope Importance: low The OAuth claim for the scope is often named "scope", but this (optional) setting can provide a different name to use for the scope included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim. sasl.oauthbearer.sub.claim.name Type: string Default: sub Importance: low The OAuth claim for the subject is often named "sub", but this (optional) setting can provide a different name to use for the subject included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim. scheduled.rebalance.max.delay.ms Type: int Default: 300000 (5 minutes) Valid Values: [0,... ,2147483647] Importance: low The maximum delay that is scheduled in order to wait for the return of one or more departed workers before rebalancing and reassigning their connectors and tasks to the group. During this period the connectors and tasks of the departed workers remain unassigned. socket.connection.setup.timeout.max.ms Type: long Default: 30000 (30 seconds) Valid Values: [0,... ] Importance: low The maximum amount of time the client will wait for the socket connection to be established. 
The connection setup timeout will increase exponentially for each consecutive connection failure up to this maximum. To avoid connection storms, a randomization factor of 0.2 will be applied to the timeout resulting in a random range between 20% below and 20% above the computed value. socket.connection.setup.timeout.ms Type: long Default: 10000 (10 seconds) Valid Values: [0,... ] Importance: low The amount of time the client will wait for the socket connection to be established. If the connection is not built before the timeout elapses, clients will close the socket channel. This value is the initial backoff value and will increase exponentially for each consecutive connection failure, up to the socket.connection.setup.timeout.max.ms value. ssl.cipher.suites Type: list Default: null Importance: low A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported. ssl.client.auth Type: string Default: none Valid Values: [required, requested, none] Importance: low Configures kafka broker to request client authentication. The following settings are common: ssl.client.auth=required If set to required client authentication is required. ssl.client.auth=requested This means client authentication is optional. unlike required, if this option is set client can choose not to provide authentication information about itself ssl.client.auth=none This means client authentication is not needed. ssl.endpoint.identification.algorithm Type: string Default: https Importance: low The endpoint identification algorithm to validate server hostname using server certificate. ssl.engine.factory.class Type: class Default: null Importance: low The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache.kafka.common.security.ssl.DefaultSslEngineFactory. Alternatively, setting this to org.apache.kafka.common.security.ssl.CommonNameLoggingSslEngineFactory will log the common name of expired SSL certificates used by clients to authenticate at any of the brokers with log level INFO. Note that this will cause a tiny delay during establishment of new connections from mTLS clients to brokers due to the extra code for examining the certificate chain provided by the client. Note further that the implementation uses a custom truststore based on the standard Java truststore and thus might be considered a security risk due to not being as mature as the standard one. ssl.keymanager.algorithm Type: string Default: SunX509 Importance: low The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine. ssl.secure.random.implementation Type: string Default: null Importance: low The SecureRandom PRNG implementation to use for SSL cryptography operations. ssl.trustmanager.algorithm Type: string Default: PKIX Importance: low The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine. status.storage.partitions Type: int Default: 5 Valid Values: Positive number, or -1 to use the broker's default Importance: low The number of partitions used when creating the status storage topic. 
status.storage.replication.factor Type: short Default: 3 Valid Values: Positive number not larger than the number of brokers in the Kafka cluster, or -1 to use the broker's default Importance: low Replication factor used when creating the status storage topic. task.shutdown.graceful.timeout.ms Type: long Default: 5000 (5 seconds) Importance: low Amount of time to wait for tasks to shutdown gracefully. This is the total amount of time, not per task. All task have shutdown triggered, then they are waited on sequentially. topic.creation.enable Type: boolean Default: true Importance: low Whether to allow automatic creation of topics used by source connectors, when source connectors are configured with topic.creation. properties. Each task will use an admin client to create its topics and will not depend on the Kafka brokers to create topics automatically. topic.tracking.allow.reset Type: boolean Default: true Importance: low If set to true, it allows user requests to reset the set of active topics per connector. topic.tracking.enable Type: boolean Default: true Importance: low Enable tracking the set of active topics per connector during runtime. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/kafka_configuration_properties/kafka-connect-configuration-properties-str |
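Pulling together the high-importance options documented above, the following is a minimal sketch of a distributed Kafka Connect worker properties file. The topic names, group id, and bootstrap address are placeholders, and the replication factors assume a cluster with at least three brokers.

# connect-distributed.properties (minimal sketch)
bootstrap.servers=broker1:9092,broker2:9092
group.id=connect-cluster
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
config.storage.topic=connect-configs
offset.storage.topic=connect-offsets
status.storage.topic=connect-status
config.storage.replication.factor=3
offset.storage.replication.factor=3
status.storage.replication.factor=3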
5.6. Restricting Identity Management or SSSD to Selected Active Directory Servers or Sites in a Trusted Active Directory Domain | 5.6. Restricting Identity Management or SSSD to Selected Active Directory Servers or Sites in a Trusted Active Directory Domain As an administrator, you can disable autodiscovery of Active Directory servers and sites in the trusted Active Directory domain and instead list servers, sites, or both manually, so that you can limit the list of Active Directory servers that SSSD communicates with. For example, this enables you to avoid contacting sites that are not accessible. 5.6.1. Configuring SSSD to Contact a Specific Active Directory Server This procedure describes manually setting Active Directory servers that SSSD connects to by editing the /etc/sssd/sssd.conf file. Considerations If your SSSD clients are directly joined to an Active Directory domain, perform this procedure on all the clients. In this setup, restricting the Active Directory domain controllers (DCs) or sites also configures the SSSD clients to connect to a particular server or site for authentication. If your SSSD clients are in an Identity Management domain that is in a trust with Active Directory, perform this procedure only on the Identity Management server. In this setup, restricting the Active Directory DCs or sites does not configure the Identity Management clients to connect to a particular server or site for authentication. Although trusted Active Directory users and groups are resolved through Identity Management servers, authentication is performed directly against the Active Directory DCs. Starting with Red Hat Enterprise Linux 7.6 and sssd-1.16.2-5.el7 , you can configure SSSD on IdM clients to use a specific AD server or site using the ad_server and ad_site options. In prior versions of Red Hat Enterprise Linux 7, restrict authentication by defining the required Active Directory DCs in the /etc/krb5.conf file on the clients. Procedure Make sure the trusted domain has a separate [domain] section in sssd.conf . The headings of trusted domain sections follow this template: For example: Edit the sssd.conf file to list the host names of the Active Directory servers or sites to which you want SSSD to connect. Use the ad_server and, optionally, ad_backup_server options for Active Directory servers. Use the ad_site option for Active Directory sites. For more details on these options, see the sssd-ad (5) man page. For example: Restart SSSD. To verify, on the SSSD client, resolve or authenticate as an Active Directory user from the configured server or site. For example: If you are unable to resolve the user or authenticate, use these steps to troubleshoot the problem: In the general [domain] section of sssd.conf , set the debug_level option to 9 . Inspect the SSSD logs at /var/log/sssd/ to see which servers SSSD contacted. Additional Resources For a list of options you can use in trusted domain sections of sssd.conf , see TRUSTED DOMAIN SECTION in the sssd.conf (5) man page. | [
"[domain/ main_domain / trusted_domain ]",
"[domain/ idm.example.com / ad.example.com ]",
"[domain/ idm.example.com / ad.example.com ] ad_server = dc1.ad.example.com",
"systemctl restart sssd.service",
"id ad_user @ ad.example.com"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/windows_integration_guide/restricting-ipa-or-sssd-to-selected-ad-servers-or-sites |
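As a variation on the example above, the ad_server, ad_backup_server, and ad_site options described in the sssd-ad(5) man page can be combined in the same trusted domain section when you want both a preferred server and a preferred site. The host names and site name below are placeholders; restart SSSD afterwards as shown in the procedure.

[domain/idm.example.com/ad.example.com]
ad_server = dc1.ad.example.com
ad_backup_server = dc2.ad.example.com
ad_site = ExampleSite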
Chapter 54. File Systems | Chapter 54. File Systems Mounting a non-existent NFS export outputs a different error than in RHEL 6 The mount utility prints the operation not permitted error message when an NFS client is trying to mount a server export that does not exist. In Red Hat Enterprise Linux 6, the access denied message was printed in the same situation. (BZ#1428549) XFS disables per-inode DAX functionality Per-inode direct access (DAX) options are now disabled in the XFS file system due to unresolved issues with this feature. XFS now ignores existing per-inode DAX flags on the disk. You can still set file system DAX behavior using the dax mount option: (BZ#1623150) | [
"mount -o dax device mount-point"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.6_release_notes/known_issues_file_systems |
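If the dax mount behavior described above should persist across reboots, the option can be recorded in /etc/fstab. The device path and mount point below are placeholders for your own XFS file system on persistent memory.

# /etc/fstab (illustrative entry)
/dev/pmem0   /mnt/pmem   xfs   defaults,dax   0 0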
Providing feedback on Red Hat build of OpenJDK documentation | Providing feedback on Red Hat build of OpenJDK documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.15/proc-providing-feedback-on-redhat-documentation |
Chapter 2. Installing Developer Hub on EKS with the Helm chart | Chapter 2. Installing Developer Hub on EKS with the Helm chart When you install the Developer Hub Helm chart in Elastic Kubernetes Service (EKS), it orchestrates the deployment of a Developer Hub instance, which provides a robust developer platform within the AWS ecosystem. Prerequisites You have an EKS cluster with AWS Application Load Balancer (ALB) add-on installed. For more information, see Application load balancing on Amazon EKS and Installing the AWS Load Balancer Controller add-on . You have configured a domain name for your Developer Hub instance. The domain name can be a hosted zone entry on Route 53 or managed outside of AWS. For more information, see Configuring Amazon Route 53 as your DNS service documentation. You have an entry in the AWS Certificate Manager (ACM) for your preferred domain name. Make sure to keep a record of your Certificate ARN. You have subscribed to registry.redhat.io . For more information, see Red Hat Container Registry Authentication . You have set the context to the EKS cluster in your current kubeconfig . For more information, see Creating or updating a kubeconfig file for an Amazon EKS cluster . You have installed kubectl . For more information, see Installing or updating kubectl . You have installed Helm 3 or later. For more information, see Using Helm with Amazon EKS . Procedure Go to your terminal and run the following command to add the Helm chart repository containing the Developer Hub chart to your local Helm registry: helm repo add openshift-helm-charts https://charts.openshift.io/ Create a pull secret using the following command: kubectl create secret docker-registry rhdh-pull-secret \ --docker-server=registry.redhat.io \ --docker-username=<user_name> \ 1 --docker-password=<password> \ 2 --docker-email=<email> 3 1 Enter your username in the command. 2 Enter your password in the command. 3 Enter your email address in the command. The created pull secret is used to pull the Developer Hub images from the Red Hat Ecosystem. Create a file named values.yaml using the following template: global: # TODO: Set your application domain name. host: <your Developer Hub domain name> route: enabled: false upstream: service: # NodePort is required for the ALB to route to the Service type: NodePort ingress: enabled: true annotations: kubernetes.io/ingress.class: alb alb.ingress.kubernetes.io/scheme: internet-facing # TODO: Using an ALB HTTPS Listener requires a certificate for your own domain. Fill in the ARN of your certificate, e.g.: alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:xxx:xxxx:certificate/xxxxxx alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]' alb.ingress.kubernetes.io/ssl-redirect: '443' # TODO: Set your application domain name.
external-dns.alpha.kubernetes.io/hostname: <your rhdh domain name> backstage: image: pullSecrets: - rhdh-pull-secret podSecurityContext: # you can assign any random value as fsGroup fsGroup: 2000 postgresql: image: pullSecrets: - rhdh-pull-secret primary: podSecurityContext: enabled: true # you can assign any random value as fsGroup fsGroup: 3000 volumePermissions: enabled: true Run the following command in your terminal to deploy Developer Hub using the latest version of the Helm chart and using the values.yaml file created in the previous step: helm install rhdh \ openshift-helm-charts/redhat-developer-hub \ [--version 1.4.2] \ --values /path/to/values.yaml Note For the latest chart version, see https://github.com/openshift-helm-charts/charts/tree/main/charts/redhat/redhat/redhat-developer-hub Verification Wait until the DNS name is responsive, indicating that your Developer Hub instance is ready for use. | [
"helm repo add openshift-helm-charts https://charts.openshift.io/",
"create secret docker-registry rhdh-pull-secret --docker-server=registry.redhat.io --docker-username=<user_name> \\ 1 --docker-password=<password> \\ 2 --docker-email=<email> 3",
"global: # TODO: Set your application domain name. host: <your Developer Hub domain name> route: enabled: false upstream: service: # NodePort is required for the ALB to route to the Service type: NodePort ingress: enabled: true annotations: kubernetes.io/ingress.class: alb alb.ingress.kubernetes.io/scheme: internet-facing # TODO: Using an ALB HTTPS Listener requires a certificate for your own domain. Fill in the ARN of your certificate, e.g.: alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:xxx:xxxx:certificate/xxxxxx alb.ingress.kubernetes.io/listen-ports: '[{\"HTTP\": 80}, {\"HTTPS\":443}]' alb.ingress.kubernetes.io/ssl-redirect: '443' # TODO: Set your application domain name. external-dns.alpha.kubernetes.io/hostname: <your rhdh domain name> backstage: image: pullSecrets: - rhdh-pull-secret podSecurityContext: # you can assign any random value as fsGroup fsGroup: 2000 postgresql: image: pullSecrets: - rhdh-pull-secret primary: podSecurityContext: enabled: true # you can assign any random value as fsGroup fsGroup: 3000 volumePermissions: enabled: true",
"helm install rhdh openshift-helm-charts/redhat-developer-hub [--version 1.4.2] --values /path/to/values.yaml"
]
| https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html/installing_red_hat_developer_hub_on_amazon_elastic_kubernetes_service/proc-rhdh-deploy-eks-helm_title-install-rhdh-eks |
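As a sketch of the verification step, the following kubectl commands can be used to watch the pods come up and to find the ALB address created for the ingress; run them in the namespace you installed the chart into, and note that the exact resource names depend on the release name used with helm install.

kubectl get pods       # wait until the Developer Hub and PostgreSQL pods are Running
kubectl get ingress    # the ADDRESS column shows the ALB DNS name that your domain should resolve to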
Chapter 2. Ceph network configuration | Chapter 2. Ceph network configuration As a storage administrator, you must understand the network environment that the Red Hat Ceph Storage cluster will operate in, and configure Red Hat Ceph Storage accordingly. Understanding and configuring the Ceph network options will ensure optimal performance and reliability of the overall storage cluster. Prerequisites Network connectivity. Installation of the Red Hat Ceph Storage software. 2.1. Network configuration for Ceph Network configuration is critical for building a high-performance Red Hat Ceph Storage cluster. The Ceph storage cluster does not perform request routing or dispatching on behalf of the Ceph client. Instead, Ceph clients make requests directly to Ceph OSD daemons. Ceph OSDs perform data replication on behalf of Ceph clients, which means replication and other factors impose additional loads on the networks of Ceph storage clusters. Ceph has one network configuration requirement that applies to all daemons. The Ceph configuration file must specify the host for each daemon. Some deployment utilities, such as cephadm , create a configuration file for you. Do not set these values if the deployment utility does it for you. Important The host option is the short name of the node, not its FQDN. It is not an IP address. All Ceph clusters must use a public network. However, unless you specify an internal cluster network, Ceph assumes a single public network. Ceph can function with a public network only, but for large storage clusters, you will see significant performance improvement with a second private network for carrying only cluster-related traffic. Important Red Hat recommends running a Ceph storage cluster with two networks: one public network and one private network. To support two networks, each Ceph Node will need to have more than one network interface card (NIC). There are several reasons to consider operating two separate networks: Performance: Ceph OSDs handle data replication for the Ceph clients. When Ceph OSDs replicate data more than once, the network load between Ceph OSDs easily dwarfs the network load between Ceph clients and the Ceph storage cluster. This can introduce latency and create a performance problem. Recovery and rebalancing can also introduce significant latency on the public network. Security : While most people are generally civil, some actors will engage in what is known as a Denial of Service (DoS) attack. When traffic between Ceph OSDs gets disrupted, peering may fail and placement groups may no longer reflect an active + clean state, which may prevent users from reading and writing data. A great way to defeat this type of attack is to maintain a completely separate cluster network that does not connect directly to the internet. Network configuration settings are not required. Ceph can function with a public network only, assuming a public network is configured on all hosts running a Ceph daemon. However, Ceph allows you to establish much more specific criteria, including multiple IP networks and subnet masks for your public network. You can also establish a separate cluster network to handle OSD heartbeat, object replication, and recovery traffic. Do not confuse the IP addresses you set in the configuration with the public-facing IP addresses network clients might use to access your service. Typical internal IP networks are often 192.168.0.0 or 10.0.0.0 . Note Ceph uses CIDR notation for subnets, for example, 10.0.0.0/24 .
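As an illustration of the two-network layout described above, a minimal [global] section might carry both subnets in CIDR notation. The subnets shown are placeholders, and on a cephadm-managed cluster these values are normally set with the ceph config set command shown later in this chapter rather than edited by hand.

[global]
public_network = 10.0.0.0/24        # client-facing traffic
cluster_network = 192.168.0.0/24    # replication, heartbeat, and recovery traffic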
Important If you specify more than one IP address and subnet mask for either the public or the private network, the subnets within the network must be capable of routing to each other. Additionally, make sure you include each IP address and subnet in your IP tables and open ports for them as necessary. When you configured the networks, you can restart the cluster or restart each daemon. Ceph daemons bind dynamically, so you do not have to restart the entire cluster at once if you change the network configuration. Additional Resources See the common options in Red Hat Ceph Storage Configuration Guide , Appendix B for specific option descriptions and usage. 2.2. Ceph network messenger Messenger is the Ceph network layer implementation. Red Hat supports two messenger types: simple async In Red Hat Ceph Storage 5 and higher, async is the default messenger type. To change the messenger type, specify the ms_type configuration setting in the [global] section of the Ceph configuration file. Note For the async messenger, Red Hat supports the posix transport type, but does not currently support rdma or dpdk . By default, the ms_type setting in Red Hat Ceph Storage 5 or higher reflects async+posix , where async is the messenger type and posix is the transport type. SimpleMessenger The SimpleMessenger implementation uses TCP sockets with two threads per socket. Ceph associates each logical session with a connection. A pipe handles the connection, including the input and output of each message. While SimpleMessenger is effective for the posix transport type, it is not effective for other transport types such as rdma or dpdk . AsyncMessenger Consequently, AsyncMessenger is the default messenger type for Red Hat Ceph Storage 5 or higher. For Red Hat Ceph Storage 5 or higher, the AsyncMessenger implementation uses TCP sockets with a fixed-size thread pool for connections, which should be equal to the highest number of replicas or erasure-code chunks. The thread count can be set to a lower value if performance degrades due to a low CPU count or a high number of OSDs per server. Note Red Hat does not support other transport types such as rdma or dpdk at this time. Additional Resources See the AsyncMessenger options in Red Hat Ceph Storage Configuration Guide , Appendix B for specific option descriptions and usage. See the Red Hat Ceph Storage Architecture Guide for details about using on-wire encryption with the Ceph messenger version 2 protocol. 2.3. Configuring a public network To configure Ceph networks, use the config set command within the cephadm shell. Note that the IP addresses you set in your network configuration are different from the public-facing IP addresses that network clients might use to access your service. Ceph functions perfectly well with only a public network. However, Ceph allows you to establish much more specific criteria, including multiple IP networks for your public network. You can also establish a separate, private cluster network to handle OSD heartbeat, object replication, and recovery traffic. For more information about the private network, see Configuring a private network . Note Ceph uses CIDR notation for subnets, for example, 10.0.0.0/24. Typical internal IP networks are often 192.168.0.0/24 or 10.0.0.0/24. Note If you specify more than one IP address for either the public or the cluster network, the subnets within the network must be capable of routing to each other. 
In addition, make sure you include each IP address in your IP tables, and open ports for them as necessary. The public network configuration allows you to specifically define IP addresses and subnets for the public network. Prerequisites Installation of the Red Hat Ceph Storage software. Procedure Log in to the cephadm shell: Example Configure the public network with the subnet: Syntax Example Get the list of services in the storage cluster: Example Restart the daemons. Ceph daemons bind dynamically, so you do not have to restart the entire cluster at once if you change the network configuration for a specific daemon. Example Optional: If you want to restart the cluster, on the admin node as a root user, run the systemctl command: Syntax Example Additional Resources See the common options in Red Hat Ceph Storage Configuration Guide , Appendix B for specific option descriptions and usage. 2.4. Configuring a private network Network configuration settings are not required. Ceph assumes a public network with all hosts operating on it, unless you specifically configure a cluster network, also known as a private network . If you create a cluster network, OSDs route heartbeat, object replication, and recovery traffic over the cluster network. This can improve performance, compared to using a single network. Important For added security, the cluster network should not be reachable from the public network or the Internet. To assign a cluster network, use the --cluster-network option with the cephadm bootstrap command. The cluster network that you specify must define a subnet in CIDR notation (for example, 10.90.90.0/24 or fe80::/64). You can also configure the cluster_network after bootstrap. Prerequisites Access to the Ceph software repository. Root-level access to all nodes in the storage cluster. Procedure Run the cephadm bootstrap command from the initial node that you want to use as the Monitor node in the storage cluster. Include the --cluster-network option in the command. Syntax Example To configure the cluster_network after bootstrap, run the config set command and redeploy the daemons: Log in to the cephadm shell: Example Configure the cluster network with the subnet: Syntax Example Get the list of services in the storage cluster: Example Restart the daemons. Ceph daemons bind dynamically, so you do not have to restart the entire cluster at once if you change the network configuration for a specific daemon. Example Optional: If you want to restart the cluster, on the admin node as a root user, run the systemctl command: Syntax Example Additional Resources For more information about invoking cephadm bootstrap , see the Bootstrapping a new storage cluster section in the Red Hat Ceph Storage Installation Guide . 2.5. Configuring multiple public networks to the cluster When you want to place the Ceph Monitor daemons on hosts belonging to multiple network subnets, you must configure multiple public networks for the cluster. An example of usage is a stretch cluster mode used for Advanced Cluster Management (ACM) in Metro DR for OpenShift Data Foundation. You can configure multiple public networks to the cluster during bootstrap and once bootstrap is complete. Prerequisites Before adding a host, be sure that you have a running Red Hat Ceph Storage cluster. Procedure Bootstrap a Ceph cluster configured with multiple public networks.
Prepare a ceph.conf file containing a mon public network section: Important At least one of the provided public networks must be configured on the current host used for bootstrap. Syntax Example This is an example with three public networks to be provided for bootstrap. Bootstrap the cluster by providing the ceph.conf file as input: Note During the bootstrap, you can include any other arguments that you want to provide. Syntax Note Alternatively, an IMAGE_ID (such as 13ea90216d0be03003d12d7869f72ad9de5cec9e54a27fd308e01e467c0d4a0a ) can be used instead of IMAGE_URL . Example Add new hosts to the subnets: Note The host being added must be reachable from the host that the active manager is running on. Install the cluster's public SSH key in the new host's root user's authorized_keys file: Syntax Example Log in to the cephadm shell: Example Add the new host to the Ceph cluster: Syntax Example Note It is best to explicitly provide the host IP address. If an IP is not provided, then the host name is immediately resolved via DNS and that IP is used. One or more labels can also be included to immediately label the new host. For example, by default the _admin label makes cephadm maintain a copy of the ceph.conf file and a client.admin keyring file in the /etc/ceph directory. Add the network configuration for the public network parameters to a running cluster. Be sure that the subnets are separated by commas and that the subnets are listed in subnet/mask format. Syntax Example If necessary, update the mon specifications to place the mon daemons on hosts within the specified subnets. Additional Resources See Adding hosts for more details about adding hosts in the Red Hat Ceph Storage Installation Guide . See Stretch clusters for Ceph storage for more details about stretch clusters in the Red Hat Ceph Storage Administration Guide . 2.6. Verifying firewall rules are configured for default Ceph ports By default, Red Hat Ceph Storage daemons use TCP ports 6800-7100 to communicate with other hosts in the cluster. You can verify that the host's firewall allows connection on these ports. Note If your network has a dedicated firewall, you might need to verify its configuration in addition to following this procedure. See the firewall's documentation for more information. Prerequisites Root-level access to the host. Procedure Verify the host's iptables configuration: List active rules: Verify the absence of rules that restrict connectivity on TCP ports 6800-7100. Example Verify the host's firewalld configuration: List ports open on the host: Syntax Example Verify the range is inclusive of TCP ports 6800-7100.
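If the verification shows that the required range is not open, you can add it with firewalld . The following commands are a minimal sketch only, assuming the default public zone and the 6800-7100 range used in this chapter; adjust the zone and port range to match your deployment, then repeat the verification:
firewall-cmd --zone=public --add-port=6800-7100/tcp
firewall-cmd --zone=public --add-port=6800-7100/tcp --permanent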
2.7. Firewall settings for Ceph Monitor node You can enable encryption for all Ceph traffic over the network with the introduction of the messenger version 2 protocol. The secure mode setting for messenger v2 encrypts communication between Ceph daemons and Ceph clients, giving you end-to-end encryption. Messenger v2 Protocol The second version of Ceph's on-wire protocol, msgr2 , includes several new features: A secure mode encrypts all data moving through the network. Improved encapsulation of authentication payloads. Improvements to feature advertisement and negotiation. The Ceph daemons bind to multiple ports, allowing both the legacy, v1-compatible, and the new, v2-compatible, Ceph clients to connect to the same storage cluster. Ceph clients or other Ceph daemons connecting to the Ceph Monitor daemon try to use the v2 protocol first, if possible; otherwise, the legacy v1 protocol is used. By default, both messenger protocols, v1 and v2 , are enabled. The new v2 port is 3300, and the legacy v1 port is 6789, by default. Prerequisites A running Red Hat Ceph Storage cluster. Access to the Ceph software repository. Root-level access to the Ceph Monitor node. Procedure Add rules using the following example: Replace IFACE with the public network interface (for example, eth0 , eth1 , and so on). Replace IP-ADDRESS with the IP address of the public network and NETMASK with the netmask for the public network. For the firewalld daemon, execute the following commands: Additional resources See the Red Hat Ceph Storage network configuration options in Ceph network configuration options for specific option descriptions and usage. See the Red Hat Ceph Storage Architecture Guide for details about using Ceph on-wire encryption with the Ceph messenger version 2 protocol. | [
"cephadm shell",
"ceph config set mon public_network IP_ADDRESS_WITH_SUBNET",
"ceph config set mon public_network 192.168.0.0/24",
"ceph orch ls",
"ceph orch restart mon",
"systemctl restart ceph- FSID_OF_CLUSTER .target",
"systemctl restart ceph-1ca9f6a8-d036-11ec-8263-fa163ee967ad.target",
"cephadm bootstrap --mon-ip IP-ADDRESS --registry-url registry.redhat.io --registry-username USER_NAME --registry-password PASSWORD --cluster-network NETWORK-IP-ADDRESS",
"cephadm bootstrap --mon-ip 10.10.128.68 --registry-url registry.redhat.io --registry-username myuser1 --registry-password mypassword1 --cluster-network 10.10.0.0/24",
"cephadm shell",
"ceph config set global cluster_network IP_ADDRESS_WITH_SUBNET",
"ceph config set global cluster_network 10.10.0.0/24",
"ceph orch ls",
"ceph orch restart mon",
"systemctl restart ceph- FSID_OF_CLUSTER .target",
"systemctl restart ceph-1ca9f6a8-d036-11ec-8263-fa163ee967ad.target",
"[mon] public_network = PUBLIC_NETWORK1 , PUBLIC_NETWORK2",
"[mon] public_network = 10.40.0.0/24, 10.41.0.0/24, 10.42.0.0/24",
"cephadm --image IMAGE_URL bootstrap --mon-ip MONITOR_IP -c PATH_TO_CEPH_CONF",
"cephadm -image cp.icr.io/cp/ibm-ceph/ceph-5-rhel8:latest bootstrap -mon-ip 10.40.0.0/24 -c /etc/ceph/ceph.conf",
"ssh-copy-id -f -i /etc/ceph/ceph.pub root@ NEW_HOST",
"ssh-copy-id -f -i /etc/ceph/ceph.pub root@host02 ssh-copy-id -f -i /etc/ceph/ceph.pub root@host03",
"cephadm shell",
"ceph orch host add NEW_HOST IP [ LABEL1 ...]",
"ceph orch host add host02 10.10.0.102 label1 ceph orch host add host03 10.10.0.103 label2",
"ceph config set mon public_network \" SUBNET_1 , SUBNET_2 , ...\"",
"ceph config set mon public_network \"192.168.0.0/24, 10.42.0.0/24, ...\"",
"iptables -L",
"REJECT all -- anywhere anywhere reject-with icmp-host-prohibited",
"firewall-cmd --zone ZONE --list-ports",
"firewall-cmd --zone default --list-ports",
"sudo iptables -A INPUT -i IFACE -p tcp -s IP-ADDRESS / NETMASK --dport 6789 -j ACCEPT sudo iptables -A INPUT -i IFACE -p tcp -s IP-ADDRESS / NETMASK --dport 3300 -j ACCEPT",
"firewall-cmd --zone=public --add-port=6789/tcp firewall-cmd --zone=public --add-port=6789/tcp --permanent firewall-cmd --zone=public --add-port=3300/tcp firewall-cmd --zone=public --add-port=3300/tcp --permanent"
]
| https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/configuration_guide/ceph-network-configuration |
H.2. Red Hat Virtualization Manager Groups | H.2. Red Hat Virtualization Manager Groups A number of system user groups are created to support Red Hat Virtualization when the rhevm package is installed. Each system user group has a default group identifier (GID). The system user groups created are: The kvm group (GID 36 ). Group members include: The vdsm user. The ovirt group (GID 108 ). Group members include: The ovirt user. The ovirt-vmconsole group (GID 498 ). Group members include: The ovirt-vmconsole user. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/administration_guide/red_hat_enterprise_virtualization_manager_groups |
Chapter 5. Developing Operators | Chapter 5. Developing Operators 5.1. About the Operator SDK The Operator Framework is an open source toolkit to manage Kubernetes native applications, called Operators , in an effective, automated, and scalable way. Operators take advantage of Kubernetes extensibility to deliver the automation advantages of cloud services, like provisioning, scaling, and backup and restore, while being able to run anywhere that Kubernetes can run. Operators make it easy to manage complex, stateful applications on top of Kubernetes. However, writing an Operator today can be difficult because of challenges such as using low-level APIs, writing boilerplate, and a lack of modularity, which leads to duplication. The Operator SDK, a component of the Operator Framework, provides a command-line interface (CLI) tool that Operator developers can use to build, test, and deploy an Operator. Important The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of OpenShift Dedicated. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future OpenShift Dedicated releases. The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with OpenShift Dedicated 4 to maintain their projects and create Operator releases targeting newer versions of OpenShift Dedicated. The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs. The base image for Ansible-based Operator projects The base image for Helm-based Operator projects For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework) . Why use the Operator SDK? The Operator SDK simplifies this process of building Kubernetes-native applications, which can require deep, application-specific operational knowledge. The Operator SDK not only lowers that barrier, but it also helps reduce the amount of boilerplate code required for many common management capabilities, such as metering or monitoring. The Operator SDK is a framework that uses the controller-runtime library to make writing Operators easier by providing the following features: High-level APIs and abstractions to write the operational logic more intuitively Tools for scaffolding and code generation to quickly bootstrap a new project Integration with Operator Lifecycle Manager (OLM) to streamline packaging, installing, and running Operators on a cluster Extensions to cover common Operator use cases Metrics set up automatically in any generated Go-based Operator for use on clusters where the Prometheus Operator is deployed Operator authors with dedicated-admin access to OpenShift Dedicated can use the Operator SDK CLI to develop their own Operators based on Go, Ansible, Java, or Helm. Kubebuilder is embedded into the Operator SDK as the scaffolding solution for Go-based Operators, which means existing Kubebuilder projects can be used as is with the Operator SDK and continue to work. Note OpenShift Dedicated 4 supports Operator SDK 1.38.0. 5.1.1. What are Operators? 
For an overview about basic Operator concepts and terminology, see Understanding Operators . 5.1.2. Development workflow The Operator SDK provides the following workflow to develop a new Operator: Create an Operator project by using the Operator SDK command-line interface (CLI). Define new resource APIs by adding custom resource definitions (CRDs). Specify resources to watch by using the Operator SDK API. Define the Operator reconciling logic in a designated handler and use the Operator SDK API to interact with resources. Use the Operator SDK CLI to build and generate the Operator deployment manifests. Figure 5.1. Operator SDK workflow At a high level, an Operator that uses the Operator SDK processes events for watched resources in an Operator author-defined handler and takes actions to reconcile the state of the application. 5.1.3. Additional resources Certified Operator Build Guide 5.2. Installing the Operator SDK CLI The Operator SDK provides a command-line interface (CLI) tool that Operator developers can use to build, test, and deploy an Operator. You can install the Operator SDK CLI on your workstation so that you are prepared to start authoring your own Operators. Important The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of OpenShift Dedicated. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future OpenShift Dedicated releases. The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with OpenShift Dedicated 4 to maintain their projects and create Operator releases targeting newer versions of OpenShift Dedicated. The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs. The base image for Ansible-based Operator projects The base image for Helm-based Operator projects For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework) . Operator authors with dedicated-admin access to OpenShift Dedicated can use the Operator SDK CLI to develop their own Operators based on Go, Ansible, Java, or Helm. Kubebuilder is embedded into the Operator SDK as the scaffolding solution for Go-based Operators, which means existing Kubebuilder projects can be used as is with the Operator SDK and continue to work. Note OpenShift Dedicated 4 supports Operator SDK 1.38.0. 5.2.1. Installing the Operator SDK CLI on Linux You can install the Operator SDK CLI tool on Linux. Prerequisites Go v1.19+ docker v17.03+, podman v1.9.3+, or buildah v1.7+ Procedure Navigate to the OpenShift mirror site . From the latest 4 directory, download the latest version of the tarball for Linux. Unpack the archive: USD tar xvf operator-sdk-v1.38.0-ocp-linux-x86_64.tar.gz Make the file executable: USD chmod +x operator-sdk Move the extracted operator-sdk binary to a directory that is on your PATH .
Tip To check your PATH : USD echo USDPATH USD sudo mv ./operator-sdk /usr/local/bin/operator-sdk Verification After you install the Operator SDK CLI, verify that it is available: USD operator-sdk version Example output operator-sdk version: "v1.38.0-ocp", ... 5.2.2. Installing the Operator SDK CLI on macOS You can install the Operator SDK CLI tool on macOS. Prerequisites Go v1.19+ docker v17.03+, podman v1.9.3+, or buildah v1.7+ Procedure For the amd64 architecture, navigate to the OpenShift mirror site for the amd64 architecture . From the latest 4 directory, download the latest version of the tarball for macOS. Unpack the Operator SDK archive for the amd64 architecture by running the following command: USD tar xvf operator-sdk-v1.38.0-ocp-darwin-x86_64.tar.gz Make the file executable by running the following command: USD chmod +x operator-sdk Move the extracted operator-sdk binary to a directory that is on your PATH by running the following command: Tip Check your PATH by running the following command: USD echo USDPATH USD sudo mv ./operator-sdk /usr/local/bin/operator-sdk Verification After you install the Operator SDK CLI, verify that it is available by running the following command: USD operator-sdk version Example output operator-sdk version: "v1.38.0-ocp", ... 5.3. Go-based Operators 5.3.1. Operator SDK tutorial for Go-based Operators Operator developers can take advantage of Go programming language support in the Operator SDK to build an example Go-based Operator for Memcached, a distributed key-value store, and manage its lifecycle. Important The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of OpenShift Dedicated. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future OpenShift Dedicated releases. The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with OpenShift Dedicated 4 to maintain their projects and create Operator releases targeting newer versions of OpenShift Dedicated. The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs. The base image for Ansible-based Operator projects The base image for Helm-based Operator projects For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework) . This process is accomplished using two centerpieces of the Operator Framework: Operator SDK The operator-sdk CLI tool and controller-runtime library API Operator Lifecycle Manager (OLM) Installation, upgrade, and role-based access control (RBAC) of Operators on a cluster Note This tutorial goes into greater detail than Getting started with Operator SDK for Go-based Operators in the OpenShift Container Platform documentation. 5.3.1.1.
Prerequisites Operator SDK CLI installed OpenShift CLI ( oc ) 4+ installed Go 1.21+ Logged into an OpenShift Dedicated cluster with oc with an account that has dedicated-admin permissions To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret Additional resources Installing the Operator SDK CLI Getting started with the OpenShift CLI 5.3.1.2. Creating a project Use the Operator SDK CLI to create a project called memcached-operator . Procedure Create a directory for the project: USD mkdir -p USDHOME/projects/memcached-operator Change to the directory: USD cd USDHOME/projects/memcached-operator Activate support for Go modules: USD export GO111MODULE=on Run the operator-sdk init command to initialize the project: USD operator-sdk init \ --domain=example.com \ --repo=github.com/example-inc/memcached-operator Note The operator-sdk init command uses the Go plugin by default. The operator-sdk init command generates a go.mod file to be used with Go modules . The --repo flag is required when creating a project outside of USDGOPATH/src/ , because generated files require a valid module path. 5.3.1.2.1. PROJECT file Among the files generated by the operator-sdk init command is a Kubebuilder PROJECT file. Subsequent operator-sdk commands, as well as help output, that are run from the project root read this file and are aware that the project type is Go. For example: domain: example.com layout: - go.kubebuilder.io/v3 projectName: memcached-operator repo: github.com/example-inc/memcached-operator version: "3" plugins: manifests.sdk.operatorframework.io/v2: {} scorecard.sdk.operatorframework.io/v2: {} sdk.x-openshift.io/v1: {} 5.3.1.2.2. About the Manager The main program for the Operator is the main.go file, which initializes and runs the Manager . The Manager automatically registers the Scheme for all custom resource (CR) API definitions and sets up and runs controllers and webhooks. The Manager can restrict the namespace that all controllers watch for resources: mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: namespace}) By default, the Manager watches the namespace where the Operator runs. To watch all namespaces, you can leave the namespace option empty: mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: ""}) You can also use the MultiNamespacedCacheBuilder function to watch a specific set of namespaces: var namespaces []string 1 mgr, err := ctrl.NewManager(cfg, manager.Options{ 2 NewCache: cache.MultiNamespacedCacheBuilder(namespaces), }) 1 List of namespaces. 2 Creates a Cmd struct to provide shared dependencies and start components. 5.3.1.2.3. About multi-group APIs Before you create an API and controller, consider whether your Operator requires multiple API groups. This tutorial covers the default case of a single group API, but to change the layout of your project to support multi-group APIs, you can run the following command: USD operator-sdk edit --multigroup=true This command updates the PROJECT file, which should look like the following example: domain: example.com layout: go.kubebuilder.io/v3 multigroup: true ... For multi-group projects, the API Go type files are created in the apis/<group>/<version>/ directory, and the controllers are created in the controllers/<group>/ directory. The Dockerfile is then updated accordingly. Additional resource For more details on migrating to a multi-group project, see the Kubebuilder documentation . 5.3.1.3. 
Creating an API and controller Use the Operator SDK CLI to create a custom resource definition (CRD) API and controller. Procedure Run the following command to create an API with group cache , version v1 , and kind Memcached : USD operator-sdk create api \ --group=cache \ --version=v1 \ --kind=Memcached When prompted, enter y for creating both the resource and controller: Create Resource [y/n] y Create Controller [y/n] y Example output Writing scaffold for you to edit... api/v1/memcached_types.go controllers/memcached_controller.go ... This process generates the Memcached resource API at api/v1/memcached_types.go and the controller at controllers/memcached_controller.go . 5.3.1.3.1. Defining the API Define the API for the Memcached custom resource (CR). Procedure Modify the Go type definitions at api/v1/memcached_types.go to have the following spec and status : // MemcachedSpec defines the desired state of Memcached type MemcachedSpec struct { // +kubebuilder:validation:Minimum=0 // Size is the size of the memcached deployment Size int32 `json:"size"` } // MemcachedStatus defines the observed state of Memcached type MemcachedStatus struct { // Nodes are the names of the memcached pods Nodes []string `json:"nodes"` } Update the generated code for the resource type: USD make generate Tip After you modify a *_types.go file, you must run the make generate command to update the generated code for that resource type. The above Makefile target invokes the controller-gen utility to update the api/v1/zz_generated.deepcopy.go file. This ensures your API Go type definitions implement the runtime.Object interface that all Kind types must implement. 5.3.1.3.2. Generating CRD manifests After the API is defined with spec and status fields and custom resource definition (CRD) validation markers, you can generate CRD manifests. Procedure Run the following command to generate and update CRD manifests: USD make manifests This Makefile target invokes the controller-gen utility to generate the CRD manifests in the config/crd/bases/cache.example.com_memcacheds.yaml file. 5.3.1.3.2.1. About OpenAPI validation OpenAPIv3 schemas are added to CRD manifests in the spec.validation block when the manifests are generated. This validation block allows Kubernetes to validate the properties in a Memcached custom resource (CR) when it is created or updated. Markers, or annotations, are available to configure validations for your API. These markers always have a +kubebuilder:validation prefix. Additional resources For more details on the usage of markers in API code, see the following Kubebuilder documentation: CRD generation Markers List of OpenAPIv3 validation markers For more details about OpenAPIv3 validation schemas in CRDs, see the Kubernetes documentation . 5.3.1.4. Implementing the controller After creating a new API and controller, you can implement the controller logic. Procedure For this example, replace the generated controller file controllers/memcached_controller.go with the following example implementation: Example 5.1. Example memcached_controller.go /* | [
"tar xvf operator-sdk-v1.38.0-ocp-linux-x86_64.tar.gz",
"chmod +x operator-sdk",
"echo USDPATH",
"sudo mv ./operator-sdk /usr/local/bin/operator-sdk",
"operator-sdk version",
"operator-sdk version: \"v1.38.0-ocp\",",
"tar xvf operator-sdk-v1.38.0-ocp-darwin-x86_64.tar.gz",
"chmod +x operator-sdk",
"echo USDPATH",
"sudo mv ./operator-sdk /usr/local/bin/operator-sdk",
"operator-sdk version",
"operator-sdk version: \"v1.38.0-ocp\",",
"mkdir -p USDHOME/projects/memcached-operator",
"cd USDHOME/projects/memcached-operator",
"export GO111MODULE=on",
"operator-sdk init --domain=example.com --repo=github.com/example-inc/memcached-operator",
"domain: example.com layout: - go.kubebuilder.io/v3 projectName: memcached-operator repo: github.com/example-inc/memcached-operator version: \"3\" plugins: manifests.sdk.operatorframework.io/v2: {} scorecard.sdk.operatorframework.io/v2: {} sdk.x-openshift.io/v1: {}",
"mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: namespace})",
"mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: \"\"})",
"var namespaces []string 1 mgr, err := ctrl.NewManager(cfg, manager.Options{ 2 NewCache: cache.MultiNamespacedCacheBuilder(namespaces), })",
"operator-sdk edit --multigroup=true",
"domain: example.com layout: go.kubebuilder.io/v3 multigroup: true",
"operator-sdk create api --group=cache --version=v1 --kind=Memcached",
"Create Resource [y/n] y Create Controller [y/n] y",
"Writing scaffold for you to edit api/v1/memcached_types.go controllers/memcached_controller.go",
"// MemcachedSpec defines the desired state of Memcached type MemcachedSpec struct { // +kubebuilder:validation:Minimum=0 // Size is the size of the memcached deployment Size int32 `json:\"size\"` } // MemcachedStatus defines the observed state of Memcached type MemcachedStatus struct { // Nodes are the names of the memcached pods Nodes []string `json:\"nodes\"` }",
"make generate",
"make manifests",
"/* Copyright 2020. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ package controllers import ( appsv1 \"k8s.io/api/apps/v1\" corev1 \"k8s.io/api/core/v1\" \"k8s.io/apimachinery/pkg/api/errors\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" \"k8s.io/apimachinery/pkg/types\" \"reflect\" \"context\" \"github.com/go-logr/logr\" \"k8s.io/apimachinery/pkg/runtime\" ctrl \"sigs.k8s.io/controller-runtime\" \"sigs.k8s.io/controller-runtime/pkg/client\" ctrllog \"sigs.k8s.io/controller-runtime/pkg/log\" cachev1 \"github.com/example-inc/memcached-operator/api/v1\" ) // MemcachedReconciler reconciles a Memcached object type MemcachedReconciler struct { client.Client Log logr.Logger Scheme *runtime.Scheme } // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/status,verbs=get;update;patch // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/finalizers,verbs=update // +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=core,resources=pods,verbs=get;list; // Reconcile is part of the main kubernetes reconciliation loop which aims to // move the current state of the cluster closer to the desired state. // TODO(user): Modify the Reconcile function to compare the state specified by // the Memcached object against the actual cluster state, and then // perform operations to make the cluster state reflect the state specified by // the user. // // For more details, check Reconcile and its Result here: // - https://pkg.go.dev/sigs.k8s.io/[email protected]/pkg/reconcile func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { //log := r.Log.WithValues(\"memcached\", req.NamespacedName) log := ctrllog.FromContext(ctx) // Fetch the Memcached instance memcached := &cachev1.Memcached{} err := r.Get(ctx, req.NamespacedName, memcached) if err != nil { if errors.IsNotFound(err) { // Request object not found, could have been deleted after reconcile request. // Owned objects are automatically garbage collected. For additional cleanup logic use finalizers. // Return and don't requeue log.Info(\"Memcached resource not found. Ignoring since object must be deleted\") return ctrl.Result{}, nil } // Error reading the object - requeue the request. 
log.Error(err, \"Failed to get Memcached\") return ctrl.Result{}, err } // Check if the deployment already exists, if not create a new one found := &appsv1.Deployment{} err = r.Get(ctx, types.NamespacedName{Name: memcached.Name, Namespace: memcached.Namespace}, found) if err != nil && errors.IsNotFound(err) { // Define a new deployment dep := r.deploymentForMemcached(memcached) log.Info(\"Creating a new Deployment\", \"Deployment.Namespace\", dep.Namespace, \"Deployment.Name\", dep.Name) err = r.Create(ctx, dep) if err != nil { log.Error(err, \"Failed to create new Deployment\", \"Deployment.Namespace\", dep.Namespace, \"Deployment.Name\", dep.Name) return ctrl.Result{}, err } // Deployment created successfully - return and requeue return ctrl.Result{Requeue: true}, nil } else if err != nil { log.Error(err, \"Failed to get Deployment\") return ctrl.Result{}, err } // Ensure the deployment size is the same as the spec size := memcached.Spec.Size if *found.Spec.Replicas != size { found.Spec.Replicas = &size err = r.Update(ctx, found) if err != nil { log.Error(err, \"Failed to update Deployment\", \"Deployment.Namespace\", found.Namespace, \"Deployment.Name\", found.Name) return ctrl.Result{}, err } // Spec updated - return and requeue return ctrl.Result{Requeue: true}, nil } // Update the Memcached status with the pod names // List the pods for this memcached's deployment podList := &corev1.PodList{} listOpts := []client.ListOption{ client.InNamespace(memcached.Namespace), client.MatchingLabels(labelsForMemcached(memcached.Name)), } if err = r.List(ctx, podList, listOpts...); err != nil { log.Error(err, \"Failed to list pods\", \"Memcached.Namespace\", memcached.Namespace, \"Memcached.Name\", memcached.Name) return ctrl.Result{}, err } podNames := getPodNames(podList.Items) // Update status.Nodes if needed if !reflect.DeepEqual(podNames, memcached.Status.Nodes) { memcached.Status.Nodes = podNames err := r.Status().Update(ctx, memcached) if err != nil { log.Error(err, \"Failed to update Memcached status\") return ctrl.Result{}, err } } return ctrl.Result{}, nil } // deploymentForMemcached returns a memcached Deployment object func (r *MemcachedReconciler) deploymentForMemcached(m *cachev1.Memcached) *appsv1.Deployment { ls := labelsForMemcached(m.Name) replicas := m.Spec.Size dep := &appsv1.Deployment{ ObjectMeta: metav1.ObjectMeta{ Name: m.Name, Namespace: m.Namespace, }, Spec: appsv1.DeploymentSpec{ Replicas: &replicas, Selector: &metav1.LabelSelector{ MatchLabels: ls, }, Template: corev1.PodTemplateSpec{ ObjectMeta: metav1.ObjectMeta{ Labels: ls, }, Spec: corev1.PodSpec{ Containers: []corev1.Container{{ Image: \"memcached:1.4.36-alpine\", Name: \"memcached\", Command: []string{\"memcached\", \"-m=64\", \"-o\", \"modern\", \"-v\"}, Ports: []corev1.ContainerPort{{ ContainerPort: 11211, Name: \"memcached\", }}, }}, }, }, }, } // Set Memcached instance as the owner and controller ctrl.SetControllerReference(m, dep, r.Scheme) return dep } // labelsForMemcached returns the labels for selecting the resources // belonging to the given memcached CR name. func labelsForMemcached(name string) map[string]string { return map[string]string{\"app\": \"memcached\", \"memcached_cr\": name} } // getPodNames returns the pod names of the array of pods passed in func getPodNames(pods []corev1.Pod) []string { var podNames []string for _, pod := range pods { podNames = append(podNames, pod.Name) } return podNames } // SetupWithManager sets up the controller with the Manager. 
func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error { return ctrl.NewControllerManagedBy(mgr). For(&cachev1.Memcached{}). Owns(&appsv1.Deployment{}). Complete(r) }",
"import ( appsv1 \"k8s.io/api/apps/v1\" ) func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error { return ctrl.NewControllerManagedBy(mgr). For(&cachev1.Memcached{}). Owns(&appsv1.Deployment{}). Complete(r) }",
"func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error { return ctrl.NewControllerManagedBy(mgr). For(&cachev1.Memcached{}). Owns(&appsv1.Deployment{}). WithOptions(controller.Options{ MaxConcurrentReconciles: 2, }). Complete(r) }",
"import ( ctrl \"sigs.k8s.io/controller-runtime\" cachev1 \"github.com/example-inc/memcached-operator/api/v1\" ) func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { // Lookup the Memcached instance for this reconcile request memcached := &cachev1.Memcached{} err := r.Get(ctx, req.NamespacedName, memcached) }",
"// Reconcile successful - don't requeue return ctrl.Result{}, nil // Reconcile failed due to error - requeue return ctrl.Result{}, err // Requeue for any reason other than an error return ctrl.Result{Requeue: true}, nil",
"import \"time\" // Reconcile for any reason other than an error after 5 seconds return ctrl.Result{RequeueAfter: time.Second*5}, nil",
"// +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/status,verbs=get;update;patch // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/finalizers,verbs=update // +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=core,resources=pods,verbs=get;list; func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { }",
"import ( \"github.com/operator-framework/operator-lib/proxy\" )",
"for i, container := range dep.Spec.Template.Spec.Containers { dep.Spec.Template.Spec.Containers[i].Env = append(container.Env, proxy.ReadProxyVarsFromEnv()...) }",
"containers: - args: - --leader-elect - --leader-election-id=ansible-proxy-demo image: controller:latest name: manager env: - name: \"HTTP_PROXY\" value: \"http_proxy_test\"",
"make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>",
"docker push <registry>/<user>/<bundle_image_name>:<tag>",
"operator-sdk run bundle \\ 1 -n <namespace> \\ 2 <registry>/<user>/<bundle_image_name>:<tag> 3",
"oc project memcached-operator-system",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3",
"oc apply -f config/samples/cache_v1_memcached.yaml",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 8m memcached-sample 3/3 3 3 1m",
"oc get pods",
"NAME READY STATUS RESTARTS AGE memcached-sample-6fd7c98d8-7dqdr 1/1 Running 0 1m memcached-sample-6fd7c98d8-g5k7v 1/1 Running 0 1m memcached-sample-6fd7c98d8-m7vn7 1/1 Running 0 1m",
"oc get memcached/memcached-sample -o yaml",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3 status: nodes: - memcached-sample-6fd7c98d8-7dqdr - memcached-sample-6fd7c98d8-g5k7v - memcached-sample-6fd7c98d8-m7vn7",
"oc patch memcached memcached-sample -p '{\"spec\":{\"size\": 5}}' --type=merge",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 10m memcached-sample 5/5 5 5 3m",
"oc delete -f config/samples/cache_v1_memcached.yaml",
"make undeploy",
"operator-sdk cleanup <project_name>",
"Set the Operator SDK version to use. By default, what is installed on the system is used. This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit. OPERATOR_SDK_VERSION ?= v1.38.0 1",
"go 1.22.0 github.com/onsi/ginkgo/v2 v2.17.1 github.com/onsi/gomega v1.32.0 k8s.io/api v0.30.1 k8s.io/apimachinery v0.30.1 k8s.io/client-go v0.30.1 sigs.k8s.io/controller-runtime v0.18.4",
"go mod tidy",
"- ENVTEST_K8S_VERSION = 1.29.0 + ENVTEST_K8S_VERSION = 1.30.0",
"- KUSTOMIZE ?= USD(LOCALBIN)/kustomize-USD(KUSTOMIZE_VERSION) - CONTROLLER_GEN ?= USD(LOCALBIN)/controller-gen-USD(CONTROLLER_TOOLS_VERSION) - ENVTEST ?= USD(LOCALBIN)/setup-envtest-USD(ENVTEST_VERSION) - GOLANGCI_LINT = USD(LOCALBIN)/golangci-lint-USD(GOLANGCI_LINT_VERSION) + KUSTOMIZE ?= USD(LOCALBIN)/kustomize + CONTROLLER_GEN ?= USD(LOCALBIN)/controller-gen + ENVTEST ?= USD(LOCALBIN)/setup-envtest + GOLANGCI_LINT = USD(LOCALBIN)/golangci-lint",
"- KUSTOMIZE_VERSION ?= v5.3.0 - CONTROLLER_TOOLS_VERSION ?= v0.14.0 - ENVTEST_VERSION ?= release-0.17 - GOLANGCI_LINT_VERSION ?= v1.57.2 + KUSTOMIZE_VERSION ?= v5.4.2 + CONTROLLER_TOOLS_VERSION ?= v0.15.0 + ENVTEST_VERSION ?= release-0.18 + GOLANGCI_LINT_VERSION ?= v1.59.1",
"- USD(call go-install-tool,USD(GOLANGCI_LINT),github.com/golangci/golangci-lint/cmd/golangci-lint,USD{GOLANGCI_LINT_VERSION}) + USD(call go-install-tool,USD(GOLANGCI_LINT),github.com/golangci/golangci-lint/cmd/golangci-lint,USD(GOLANGCI_LINT_VERSION))",
"- USD(call go-install-tool,USD(GOLANGCI_LINT),github.com/golangci/golangci-lint/cmd/golangci-lint,USD{GOLANGCI_LINT_VERSION}) + USD(call go-install-tool,USD(GOLANGCI_LINT),github.com/golangci/golangci-lint/cmd/golangci-lint,USD(GOLANGCI_LINT_VERSION))",
"- @[ -f USD(1) ] || { + @[ -f \"USD(1)-USD(3)\" ] || { echo \"Downloading USDUSD{package}\" ; + rm -f USD(1) || true ; - mv \"USDUSD(echo \"USD(1)\" | sed \"s/-USD(3)USDUSD//\")\" USD(1) ; - } + mv USD(1) USD(1)-USD(3) ; + } ; + ln -sf USD(1)-USD(3) USD(1)",
"- exportloopref + - ginkgolinter - prealloc + - revive + + linters-settings: + revive: + rules: + - name: comment-spacings",
"- FROM golang:1.21 AS builder + FROM golang:1.22 AS builder",
"\"sigs.k8s.io/controller-runtime/pkg/log/zap\" + \"sigs.k8s.io/controller-runtime/pkg/metrics/filters\" var enableHTTP2 bool - flag.StringVar(&metricsAddr, \"metrics-bind-address\", \":8080\", \"The address the metric endpoint binds to.\") + var tlsOpts []func(*tls.Config) + flag.StringVar(&metricsAddr, \"metrics-bind-address\", \"0\", \"The address the metrics endpoint binds to. \"+ + \"Use :8443 for HTTPS or :8080 for HTTP, or leave as 0 to disable the metrics service.\") flag.StringVar(&probeAddr, \"health-probe-bind-address\", \":8081\", \"The address the probe endpoint binds to.\") flag.BoolVar(&enableLeaderElection, \"leader-elect\", false, \"Enable leader election for controller manager. \"+ \"Enabling this will ensure there is only one active controller manager.\") - flag.BoolVar(&secureMetrics, \"metrics-secure\", false, - \"If set the metrics endpoint is served securely\") + flag.BoolVar(&secureMetrics, \"metrics-secure\", true, + \"If set, the metrics endpoint is served securely via HTTPS. Use --metrics-secure=false to use HTTP instead.\") - tlsOpts := []func(*tls.Config){} + // Metrics endpoint is enabled in 'config/default/kustomization.yaml'. The Metrics options configure the server. + // More info: + // - https://pkg.go.dev/sigs.k8s.io/[email protected]/pkg/metrics/server + // - https://book.kubebuilder.io/reference/metrics.html + metricsServerOptions := metricsserver.Options{ + BindAddress: metricsAddr, + SecureServing: secureMetrics, + // TODO(user): TLSOpts is used to allow configuring the TLS config used for the server. If certificates are + // not provided, self-signed certificates will be generated by default. This option is not recommended for + // production environments as self-signed certificates do not offer the same level of trust and security + // as certificates issued by a trusted Certificate Authority (CA). The primary risk is potentially allowing + // unauthorized access to sensitive metrics data. Consider replacing with CertDir, CertName, and KeyName + // to provide certificates, ensuring the server communicates using trusted and secure certificates. + TLSOpts: tlsOpts, + } + + if secureMetrics { + // FilterProvider is used to protect the metrics endpoint with authn/authz. + // These configurations ensure that only authorized users and service accounts + // can access the metrics endpoint. The RBAC are configured in 'config/rbac/kustomization.yaml'. More info: + // https://pkg.go.dev/sigs.k8s.io/[email protected]/pkg/metrics/filters#WithAuthenticationAndAuthorization + metricsServerOptions.FilterProvider = filters.WithAuthenticationAndAuthorization + } + mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{ - Scheme: scheme, - Metrics: metricsserver.Options{ - BindAddress: metricsAddr, - SecureServing: secureMetrics, - TLSOpts: tlsOpts, - }, + Scheme: scheme, + Metrics: metricsServerOptions,",
"[PROMETHEUS] To enable prometheus monitor, uncomment all sections with 'PROMETHEUS'. #- ../prometheus + # [METRICS] Expose the controller manager metrics service. + - metrics_service.yaml + # Uncomment the patches line if you enable Metrics, and/or are using webhooks and cert-manager patches: - # Protect the /metrics endpoint by putting it behind auth. - # If you want your controller-manager to expose the /metrics - # endpoint w/o any authn/z, please comment the following line. - - path: manager_auth_proxy_patch.yaml + # [METRICS] The following patch will enable the metrics endpoint using HTTPS and the port :8443. + # More info: https://book.kubebuilder.io/reference/metrics + - path: manager_metrics_patch.yaml + target: + kind: Deployment",
"This patch adds the args to allow exposing the metrics endpoint using HTTPS - op: add path: /spec/template/spec/containers/0/args/0 value: --metrics-bind-address=:8443",
"apiVersion: v1 kind: Service metadata: labels: control-plane: controller-manager app.kubernetes.io/name: <operator-name> app.kubernetes.io/managed-by: kustomize name: controller-manager-metrics-service namespace: system spec: ports: - name: https port: 8443 protocol: TCP targetPort: 8443 selector: control-plane: controller-manager",
"- --leader-elect + - --health-probe-bind-address=:8081",
"- path: /metrics - port: https + port: https # Ensure this is the name of the port that exposes HTTPS metrics tlsConfig: + # TODO(user): The option insecureSkipVerify: true is not recommended for production since it disables + # certificate verification. This poses a significant security risk by making the system vulnerable to + # man-in-the-middle attacks, where an attacker could intercept and manipulate the communication between + # Prometheus and the monitored services. This could lead to unauthorized access to sensitive metrics data, + # compromising the integrity and confidentiality of the information. + # Please use the following options for secure configurations: + # caFile: /etc/metrics-certs/ca.crt + # certFile: /etc/metrics-certs/tls.crt + # keyFile: /etc/metrics-certs/tls.key insecureSkipVerify: true",
"- leader_election_role_binding.yaml - # Comment the following 4 lines if you want to disable - # the auth proxy (https://github.com/brancz/kube-rbac-proxy) - # which protects your /metrics endpoint. - - auth_proxy_service.yaml - - auth_proxy_role.yaml - - auth_proxy_role_binding.yaml - - auth_proxy_client_clusterrole.yaml + # The following RBAC configurations are used to protect + # the metrics endpoint with authn/authz. These configurations + # ensure that only authorized users and service accounts + # can access the metrics endpoint. Comment the following + # permissions if you want to disable this protection. + # More info: https://book.kubebuilder.io/reference/metrics.html + - metrics_auth_role.yaml + - metrics_auth_role_binding.yaml + - metrics_reader_role.yaml",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: metrics-auth-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: metrics-auth-role subjects: - kind: ServiceAccount name: controller-manager namespace: system",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: metrics-reader rules: - nonResourceURLs: - \"/metrics\" verbs: - get",
"mkdir -p USDHOME/projects/memcached-operator",
"cd USDHOME/projects/memcached-operator",
"operator-sdk init --plugins=ansible --domain=example.com",
"domain: example.com layout: - ansible.sdk.operatorframework.io/v1 plugins: manifests.sdk.operatorframework.io/v2: {} scorecard.sdk.operatorframework.io/v2: {} sdk.x-openshift.io/v1: {} projectName: memcached-operator version: \"3\"",
"operator-sdk create api --group cache --version v1 --kind Memcached --generate-role 1",
"--- - name: start memcached k8s: definition: kind: Deployment apiVersion: apps/v1 metadata: name: '{{ ansible_operator_meta.name }}-memcached' namespace: '{{ ansible_operator_meta.namespace }}' spec: replicas: \"{{size}}\" selector: matchLabels: app: memcached template: metadata: labels: app: memcached spec: containers: - name: memcached command: - memcached - -m=64 - -o - modern - -v image: \"docker.io/memcached:1.4.36-alpine\" ports: - containerPort: 11211",
"--- defaults file for Memcached size: 1",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: labels: app.kubernetes.io/name: memcached app.kubernetes.io/instance: memcached-sample app.kubernetes.io/part-of: memcached-operator app.kubernetes.io/managed-by: kustomize app.kubernetes.io/created-by: memcached-operator name: memcached-sample spec: size: 3",
"env: - name: HTTP_PROXY value: '{{ lookup(\"env\", \"HTTP_PROXY\") | default(\"\", True) }}' - name: http_proxy value: '{{ lookup(\"env\", \"HTTP_PROXY\") | default(\"\", True) }}'",
"containers: - args: - --leader-elect - --leader-election-id=ansible-proxy-demo image: controller:latest name: manager env: - name: \"HTTP_PROXY\" value: \"http_proxy_test\"",
"make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>",
"docker push <registry>/<user>/<bundle_image_name>:<tag>",
"operator-sdk run bundle \\ 1 -n <namespace> \\ 2 <registry>/<user>/<bundle_image_name>:<tag> 3",
"oc project memcached-operator-system",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3",
"oc apply -f config/samples/cache_v1_memcached.yaml",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 8m memcached-sample 3/3 3 3 1m",
"oc get pods",
"NAME READY STATUS RESTARTS AGE memcached-sample-6fd7c98d8-7dqdr 1/1 Running 0 1m memcached-sample-6fd7c98d8-g5k7v 1/1 Running 0 1m memcached-sample-6fd7c98d8-m7vn7 1/1 Running 0 1m",
"oc get memcached/memcached-sample -o yaml",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3 status: nodes: - memcached-sample-6fd7c98d8-7dqdr - memcached-sample-6fd7c98d8-g5k7v - memcached-sample-6fd7c98d8-m7vn7",
"oc patch memcached memcached-sample -p '{\"spec\":{\"size\": 5}}' --type=merge",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 10m memcached-sample 5/5 5 5 3m",
"oc delete -f config/samples/cache_v1_memcached.yaml",
"make undeploy",
"operator-sdk cleanup <project_name>",
"Set the Operator SDK version to use. By default, what is installed on the system is used. This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit. OPERATOR_SDK_VERSION ?= v1.38.0 1",
"FROM registry.redhat.io/openshift4/ose-ansible-operator:v4",
"- curl -sSLo - https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/v5.3.0/kustomize_v5.3.0_USD(OS)_USD(ARCH).tar.gz | + curl -sSLo - https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/v5.4.2/kustomize_v5.4.2_USD(OS)_USD(ARCH).tar.gz | \\",
"[PROMETHEUS] To enable prometheus monitor, uncomment all sections with 'PROMETHEUS'. #- ../prometheus + # [METRICS] Expose the controller manager metrics service. + - metrics_service.yaml + # Uncomment the patches line if you enable Metrics, and/or are using webhooks and cert-manager patches: - # Protect the /metrics endpoint by putting it behind auth. - # If you want your controller-manager to expose the /metrics - # endpoint w/o any authn/z, please comment the following line. - - path: manager_auth_proxy_patch.yaml + # [METRICS] The following patch will enable the metrics endpoint using HTTPS and the port :8443. + # More info: https://book.kubebuilder.io/reference/metrics + - path: manager_metrics_patch.yaml + target: + kind: Deployment",
"This patch adds the args to allow exposing the metrics endpoint using HTTPS - op: add path: /spec/template/spec/containers/0/args/0 value: --metrics-bind-address=:8443 This patch adds the args to allow securing the metrics endpoint - op: add path: /spec/template/spec/containers/0/args/0 value: --metrics-secure This patch adds the args to allow RBAC-based authn/authz the metrics endpoint - op: add path: /spec/template/spec/containers/0/args/0 value: --metrics-require-rbac",
"apiVersion: v1 kind: Service metadata: labels: control-plane: controller-manager app.kubernetes.io/name: <operator-name> app.kubernetes.io/managed-by: kustomize name: controller-manager-metrics-service namespace: system spec: ports: - name: https port: 8443 protocol: TCP targetPort: 8443 selector: control-plane: controller-manager",
"- --leader-elect + - --health-probe-bind-address=:6789",
"- path: /metrics - port: https + port: https # Ensure this is the name of the port that exposes HTTPS metrics tlsConfig: + # TODO(user): The option insecureSkipVerify: true is not recommended for production since it disables + # certificate verification. This poses a significant security risk by making the system vulnerable to + # man-in-the-middle attacks, where an attacker could intercept and manipulate the communication between + # Prometheus and the monitored services. This could lead to unauthorized access to sensitive metrics data, + # compromising the integrity and confidentiality of the information. + # Please use the following options for secure configurations: + # caFile: /etc/metrics-certs/ca.crt + # certFile: /etc/metrics-certs/tls.crt + # keyFile: /etc/metrics-certs/tls.key insecureSkipVerify: true",
"- leader_election_role_binding.yaml - # Comment the following 4 lines if you want to disable - # the auth proxy (https://github.com/brancz/kube-rbac-proxy) - # which protects your /metrics endpoint. - - auth_proxy_service.yaml - - auth_proxy_role.yaml - - auth_proxy_role_binding.yaml - - auth_proxy_client_clusterrole.yaml + # The following RBAC configurations are used to protect + # the metrics endpoint with authn/authz. These configurations + # ensure that only authorized users and service accounts + # can access the metrics endpoint. Comment the following + # permissions if you want to disable this protection. + # More info: https://book.kubebuilder.io/reference/metrics.html + - metrics_auth_role.yaml + - metrics_auth_role_binding.yaml + - metrics_reader_role.yaml",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: metrics-auth-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: metrics-auth-role subjects: - kind: ServiceAccount name: controller-manager namespace: system",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: metrics-reader rules: - nonResourceURLs: - \"/metrics\" verbs: - get",
"apiVersion: \"test1.example.com/v1alpha1\" kind: \"Test1\" metadata: name: \"example\" annotations: ansible.operator-sdk/reconcile-period: \"30s\"",
"- version: v1alpha1 1 group: test1.example.com kind: Test1 role: /opt/ansible/roles/Test1 - version: v1alpha1 2 group: test2.example.com kind: Test2 playbook: /opt/ansible/playbook.yml - version: v1alpha1 3 group: test3.example.com kind: Test3 playbook: /opt/ansible/test3.yml reconcilePeriod: 0 manageStatus: false",
"- version: v1alpha1 group: app.example.com kind: AppService playbook: /opt/ansible/playbook.yml maxRunnerArtifacts: 30 reconcilePeriod: 5s manageStatus: False watchDependentResources: False",
"apiVersion: \"app.example.com/v1alpha1\" kind: \"Database\" metadata: name: \"example\" spec: message: \"Hello world 2\" newParameter: \"newParam\"",
"{ \"meta\": { \"name\": \"<cr_name>\", \"namespace\": \"<cr_namespace>\", }, \"message\": \"Hello world 2\", \"new_parameter\": \"newParam\", \"_app_example_com_database\": { <full_crd> }, }",
"--- - debug: msg: \"name: {{ ansible_operator_meta.name }}, {{ ansible_operator_meta.namespace }}\"",
"sudo dnf install ansible",
"pip install kubernetes",
"ansible-galaxy collection install community.kubernetes",
"ansible-galaxy collection install -r requirements.yml",
"--- - name: set ConfigMap example-config to {{ state }} community.kubernetes.k8s: api_version: v1 kind: ConfigMap name: example-config namespace: <operator_namespace> 1 state: \"{{ state }}\" ignore_errors: true 2",
"--- state: present",
"--- - hosts: localhost roles: - <kind>",
"ansible-playbook playbook.yml",
"[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' PLAY [localhost] ******************************************************************************** TASK [Gathering Facts] ******************************************************************************** ok: [localhost] TASK [memcached : set ConfigMap example-config to present] ******************************************************************************** changed: [localhost] PLAY RECAP ******************************************************************************** localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0",
"oc get configmaps",
"NAME DATA AGE example-config 0 2m1s",
"ansible-playbook playbook.yml --extra-vars state=absent",
"[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' PLAY [localhost] ******************************************************************************** TASK [Gathering Facts] ******************************************************************************** ok: [localhost] TASK [memcached : set ConfigMap example-config to absent] ******************************************************************************** changed: [localhost] PLAY RECAP ******************************************************************************** localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0",
"oc get configmaps",
"apiVersion: \"test1.example.com/v1alpha1\" kind: \"Test1\" metadata: name: \"example\" annotations: ansible.operator-sdk/reconcile-period: \"30s\"",
"make install",
"/usr/bin/kustomize build config/crd | kubectl apply -f - customresourcedefinition.apiextensions.k8s.io/memcacheds.cache.example.com created",
"make run",
"/home/user/memcached-operator/bin/ansible-operator run {\"level\":\"info\",\"ts\":1612739145.2871568,\"logger\":\"cmd\",\"msg\":\"Version\",\"Go Version\":\"go1.15.5\",\"GOOS\":\"linux\",\"GOARCH\":\"amd64\",\"ansible-operator\":\"v1.10.1\",\"commit\":\"1abf57985b43bf6a59dcd18147b3c574fa57d3f6\"} {\"level\":\"info\",\"ts\":1612739148.347306,\"logger\":\"controller-runtime.metrics\",\"msg\":\"metrics server is starting to listen\",\"addr\":\":8080\"} {\"level\":\"info\",\"ts\":1612739148.3488882,\"logger\":\"watches\",\"msg\":\"Environment variable not set; using default value\",\"envVar\":\"ANSIBLE_VERBOSITY_MEMCACHED_CACHE_EXAMPLE_COM\",\"default\":2} {\"level\":\"info\",\"ts\":1612739148.3490262,\"logger\":\"cmd\",\"msg\":\"Environment variable not set; using default value\",\"Namespace\":\"\",\"envVar\":\"ANSIBLE_DEBUG_LOGS\",\"ANSIBLE_DEBUG_LOGS\":false} {\"level\":\"info\",\"ts\":1612739148.3490646,\"logger\":\"ansible-controller\",\"msg\":\"Watching resource\",\"Options.Group\":\"cache.example.com\",\"Options.Version\":\"v1\",\"Options.Kind\":\"Memcached\"} {\"level\":\"info\",\"ts\":1612739148.350217,\"logger\":\"proxy\",\"msg\":\"Starting to serve\",\"Address\":\"127.0.0.1:8888\"} {\"level\":\"info\",\"ts\":1612739148.3506632,\"logger\":\"controller-runtime.manager\",\"msg\":\"starting metrics server\",\"path\":\"/metrics\"} {\"level\":\"info\",\"ts\":1612739148.350784,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: cache.example.com/v1, Kind=Memcached\"} {\"level\":\"info\",\"ts\":1612739148.5511978,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612739148.5512562,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting workers\",\"worker count\":8}",
"apiVersion: <group>.example.com/v1alpha1 kind: <kind> metadata: name: \"<kind>-sample\"",
"oc apply -f config/samples/<gvk>.yaml",
"oc get configmaps",
"NAME STATUS AGE example-config Active 3s",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: state: absent",
"oc apply -f config/samples/<gvk>.yaml",
"oc get configmap",
"make docker-build IMG=<registry>/<user>/<image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc get deployment -n <project_name>-system",
"NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m",
"oc logs deployment/<project_name>-controller-manager -c manager \\ 1 -n <namespace> 2",
"{\"level\":\"info\",\"ts\":1612732105.0579333,\"logger\":\"cmd\",\"msg\":\"Version\",\"Go Version\":\"go1.15.5\",\"GOOS\":\"linux\",\"GOARCH\":\"amd64\",\"ansible-operator\":\"v1.10.1\",\"commit\":\"1abf57985b43bf6a59dcd18147b3c574fa57d3f6\"} {\"level\":\"info\",\"ts\":1612732105.0587437,\"logger\":\"cmd\",\"msg\":\"WATCH_NAMESPACE environment variable not set. Watching all namespaces.\",\"Namespace\":\"\"} I0207 21:08:26.110949 7 request.go:645] Throttling request took 1.035521578s, request: GET:https://172.30.0.1:443/apis/flowcontrol.apiserver.k8s.io/v1alpha1?timeout=32s {\"level\":\"info\",\"ts\":1612732107.768025,\"logger\":\"controller-runtime.metrics\",\"msg\":\"metrics server is starting to listen\",\"addr\":\"127.0.0.1:8080\"} {\"level\":\"info\",\"ts\":1612732107.768796,\"logger\":\"watches\",\"msg\":\"Environment variable not set; using default value\",\"envVar\":\"ANSIBLE_VERBOSITY_MEMCACHED_CACHE_EXAMPLE_COM\",\"default\":2} {\"level\":\"info\",\"ts\":1612732107.7688773,\"logger\":\"cmd\",\"msg\":\"Environment variable not set; using default value\",\"Namespace\":\"\",\"envVar\":\"ANSIBLE_DEBUG_LOGS\",\"ANSIBLE_DEBUG_LOGS\":false} {\"level\":\"info\",\"ts\":1612732107.7688901,\"logger\":\"ansible-controller\",\"msg\":\"Watching resource\",\"Options.Group\":\"cache.example.com\",\"Options.Version\":\"v1\",\"Options.Kind\":\"Memcached\"} {\"level\":\"info\",\"ts\":1612732107.770032,\"logger\":\"proxy\",\"msg\":\"Starting to serve\",\"Address\":\"127.0.0.1:8888\"} I0207 21:08:27.770185 7 leaderelection.go:243] attempting to acquire leader lease memcached-operator-system/memcached-operator {\"level\":\"info\",\"ts\":1612732107.770202,\"logger\":\"controller-runtime.manager\",\"msg\":\"starting metrics server\",\"path\":\"/metrics\"} I0207 21:08:27.784854 7 leaderelection.go:253] successfully acquired lease memcached-operator-system/memcached-operator {\"level\":\"info\",\"ts\":1612732107.7850506,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: cache.example.com/v1, Kind=Memcached\"} {\"level\":\"info\",\"ts\":1612732107.8853772,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612732107.8854098,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting workers\",\"worker count\":4}",
"containers: - name: manager env: - name: ANSIBLE_DEBUG_LOGS value: \"True\"",
"apiVersion: \"cache.example.com/v1alpha1\" kind: \"Memcached\" metadata: name: \"example-memcached\" annotations: \"ansible.sdk.operatorframework.io/verbosity\": \"4\" spec: size: 4",
"status: conditions: - ansibleResult: changed: 3 completion: 2018-12-03T13:45:57.13329 failures: 1 ok: 6 skipped: 0 lastTransitionTime: 2018-12-03T13:45:57Z message: 'Status code was -1 and not [200]: Request failed: <urlopen error [Errno 113] No route to host>' reason: Failed status: \"True\" type: Failure - lastTransitionTime: 2018-12-03T13:46:13Z message: Running reconciliation reason: Running status: \"True\" type: Running",
"- version: v1 group: api.example.com kind: <kind> role: <role> manageStatus: false",
"- operator_sdk.util.k8s_status: api_version: app.example.com/v1 kind: <kind> name: \"{{ ansible_operator_meta.name }}\" namespace: \"{{ ansible_operator_meta.namespace }}\" status: test: data",
"collections: - operator_sdk.util",
"k8s_status: status: key1: value1",
"mkdir -p USDHOME/projects/nginx-operator",
"cd USDHOME/projects/nginx-operator",
"operator-sdk init --plugins=helm --domain=example.com --group=demo --version=v1 --kind=Nginx",
"operator-sdk init --plugins helm --help",
"domain: example.com layout: - helm.sdk.operatorframework.io/v1 plugins: manifests.sdk.operatorframework.io/v2: {} scorecard.sdk.operatorframework.io/v2: {} sdk.x-openshift.io/v1: {} projectName: nginx-operator resources: - api: crdVersion: v1 namespaced: true domain: example.com group: demo kind: Nginx version: v1 version: \"3\"",
"Use the 'create api' subcommand to add watches to this file. - group: demo version: v1 kind: Nginx chart: helm-charts/nginx +kubebuilder:scaffold:watch",
"apiVersion: demo.example.com/v1 kind: Nginx metadata: name: nginx-sample spec: replicaCount: 2",
"apiVersion: demo.example.com/v1 kind: Nginx metadata: name: nginx-sample spec: replicaCount: 2 service: port: 8080",
"- group: demo.example.com version: v1alpha1 kind: Nginx chart: helm-charts/nginx overrideValues: proxy.http: USDHTTP_PROXY",
"proxy: http: \"\" https: \"\" no_proxy: \"\"",
"containers: - name: {{ .Chart.Name }} securityContext: - toYaml {{ .Values.securityContext | nindent 12 }} image: \"{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}\" imagePullPolicy: {{ .Values.image.pullPolicy }} env: - name: http_proxy value: \"{{ .Values.proxy.http }}\"",
"containers: - args: - --leader-elect - --leader-election-id=ansible-proxy-demo image: controller:latest name: manager env: - name: \"HTTP_PROXY\" value: \"http_proxy_test\"",
"make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>",
"docker push <registry>/<user>/<bundle_image_name>:<tag>",
"operator-sdk run bundle \\ 1 -n <namespace> \\ 2 <registry>/<user>/<bundle_image_name>:<tag> 3",
"oc project nginx-operator-system",
"apiVersion: demo.example.com/v1 kind: Nginx metadata: name: nginx-sample spec: replicaCount: 3",
"oc adm policy add-scc-to-user anyuid system:serviceaccount:nginx-operator-system:nginx-sample",
"oc apply -f config/samples/demo_v1_nginx.yaml",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE nginx-operator-controller-manager 1/1 1 1 8m nginx-sample 3/3 3 3 1m",
"oc get pods",
"NAME READY STATUS RESTARTS AGE nginx-sample-6fd7c98d8-7dqdr 1/1 Running 0 1m nginx-sample-6fd7c98d8-g5k7v 1/1 Running 0 1m nginx-sample-6fd7c98d8-m7vn7 1/1 Running 0 1m",
"oc get nginx/nginx-sample -o yaml",
"apiVersion: demo.example.com/v1 kind: Nginx metadata: name: nginx-sample spec: replicaCount: 3 status: nodes: - nginx-sample-6fd7c98d8-7dqdr - nginx-sample-6fd7c98d8-g5k7v - nginx-sample-6fd7c98d8-m7vn7",
"oc patch nginx nginx-sample -p '{\"spec\":{\"replicaCount\": 5}}' --type=merge",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE nginx-operator-controller-manager 1/1 1 1 10m nginx-sample 5/5 5 5 3m",
"oc delete -f config/samples/demo_v1_nginx.yaml",
"make undeploy",
"operator-sdk cleanup <project_name>",
"Set the Operator SDK version to use. By default, what is installed on the system is used. This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit. OPERATOR_SDK_VERSION ?= v1.38.0 1",
"FROM registry.redhat.io/openshift4/ose-helm-rhel9-operator:v4",
"- curl -sSLo - https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/v5.3.0/kustomize_v5.3.0_USD(OS)_USD(ARCH).tar.gz | + curl -sSLo - https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/v5.4.2/kustomize_v5.4.2_USD(OS)_USD(ARCH).tar.gz | \\",
"[PROMETHEUS] To enable prometheus monitor, uncomment all sections with 'PROMETHEUS'. #- ../prometheus + # [METRICS] Expose the controller manager metrics service. + - metrics_service.yaml + # Uncomment the patches line if you enable Metrics, and/or are using webhooks and cert-manager patches: - # Protect the /metrics endpoint by putting it behind auth. - # If you want your controller-manager to expose the /metrics - # endpoint w/o any authn/z, please comment the following line. - - path: manager_auth_proxy_patch.yaml + # [METRICS] The following patch will enable the metrics endpoint using HTTPS and the port :8443. + # More info: https://book.kubebuilder.io/reference/metrics + - path: manager_metrics_patch.yaml + target: + kind: Deployment",
"This patch adds the args to allow exposing the metrics endpoint using HTTPS - op: add path: /spec/template/spec/containers/0/args/0 value: --metrics-bind-address=:8443 This patch adds the args to allow securing the metrics endpoint - op: add path: /spec/template/spec/containers/0/args/0 value: --metrics-secure This patch adds the args to allow RBAC-based authn/authz the metrics endpoint - op: add path: /spec/template/spec/containers/0/args/0 value: --metrics-require-rbac",
"apiVersion: v1 kind: Service metadata: labels: control-plane: controller-manager app.kubernetes.io/name: <operator-name> app.kubernetes.io/managed-by: kustomize name: controller-manager-metrics-service namespace: system spec: ports: - name: https port: 8443 protocol: TCP targetPort: 8443 selector: control-plane: controller-manager",
"- --leader-elect + - --health-probe-bind-address=:8081",
"- path: /metrics - port: https + port: https # Ensure this is the name of the port that exposes HTTPS metrics tlsConfig: + # TODO(user): The option insecureSkipVerify: true is not recommended for production since it disables + # certificate verification. This poses a significant security risk by making the system vulnerable to + # man-in-the-middle attacks, where an attacker could intercept and manipulate the communication between + # Prometheus and the monitored services. This could lead to unauthorized access to sensitive metrics data, + # compromising the integrity and confidentiality of the information. + # Please use the following options for secure configurations: + # caFile: /etc/metrics-certs/ca.crt + # certFile: /etc/metrics-certs/tls.crt + # keyFile: /etc/metrics-certs/tls.key insecureSkipVerify: true",
"- leader_election_role_binding.yaml - # Comment the following 4 lines if you want to disable - # the auth proxy (https://github.com/brancz/kube-rbac-proxy) - # which protects your /metrics endpoint. - - auth_proxy_service.yaml - - auth_proxy_role.yaml - - auth_proxy_role_binding.yaml - - auth_proxy_client_clusterrole.yaml + # The following RBAC configurations are used to protect + # the metrics endpoint with authn/authz. These configurations + # ensure that only authorized users and service accounts + # can access the metrics endpoint. Comment the following + # permissions if you want to disable this protection. + # More info: https://book.kubebuilder.io/reference/metrics.html + - metrics_auth_role.yaml + - metrics_auth_role_binding.yaml + - metrics_reader_role.yaml",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: metrics-auth-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: metrics-auth-role subjects: - kind: ServiceAccount name: controller-manager namespace: system",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: metrics-reader rules: - nonResourceURLs: - \"/metrics\" verbs: - get",
"apiVersion: apache.org/v1alpha1 kind: Tomcat metadata: name: example-app spec: replicaCount: 2",
"{{ .Values.replicaCount }}",
"oc get Tomcats --all-namespaces",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: annotations: features.operators.openshift.io/disconnected: \"true\" features.operators.openshift.io/fips-compliant: \"false\" features.operators.openshift.io/proxy-aware: \"false\" features.operators.openshift.io/tls-profiles: \"false\" features.operators.openshift.io/token-auth-aws: \"false\" features.operators.openshift.io/token-auth-azure: \"false\" features.operators.openshift.io/token-auth-gcp: \"false\"",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: annotations: operators.openshift.io/infrastructure-features: '[\"disconnected\", \"proxy-aware\"]'",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: annotations: operators.openshift.io/valid-subscription: '[\"OpenShift Container Platform\"]'",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: annotations: operators.openshift.io/valid-subscription: '[\"3Scale Commercial License\", \"Red Hat Managed Integration\"]'",
"spec: spec: containers: - command: - /manager env: - name: <related_image_environment_variable> 1 value: \"<related_image_reference_with_tag>\" 2",
"// deploymentForMemcached returns a memcached Deployment object Spec: corev1.PodSpec{ Containers: []corev1.Container{{ - Image: \"memcached:1.4.36-alpine\", 1 + Image: os.Getenv(\"<related_image_environment_variable>\"), 2 Name: \"memcached\", Command: []string{\"memcached\", \"-m=64\", \"-o\", \"modern\", \"-v\"}, Ports: []corev1.ContainerPort{{",
"spec: containers: - name: memcached command: - memcached - -m=64 - -o - modern - -v - image: \"docker.io/memcached:1.4.36-alpine\" 1 + image: \"{{ lookup('env', '<related_image_environment_variable>') }}\" 2 ports: - containerPort: 11211",
"- group: demo.example.com version: v1alpha1 kind: Memcached chart: helm-charts/memcached overrideValues: 1 relatedImage: USD{<related_image_environment_variable>} 2",
"relatedImage: \"\"",
"containers: - name: {{ .Chart.Name }} securityContext: - toYaml {{ .Values.securityContext | nindent 12 }} image: \"{{ .Values.image.pullPolicy }} env: 1 - name: related_image 2 value: \"{{ .Values.relatedImage }}\" 3",
"BUNDLE_GEN_FLAGS ?= -q --overwrite --version USD(VERSION) USD(BUNDLE_METADATA_OPTS) # USE_IMAGE_DIGESTS defines if images are resolved via tags or digests # You can enable this value if you would like to use SHA Based Digests # To enable set flag to true USE_IMAGE_DIGESTS ?= false ifeq (USD(USE_IMAGE_DIGESTS), true) BUNDLE_GEN_FLAGS += --use-image-digests endif - USD(KUSTOMIZE) build config/manifests | operator-sdk generate bundle -q --overwrite --version USD(VERSION) USD(BUNDLE_METADATA_OPTS) 1 + USD(KUSTOMIZE) build config/manifests | operator-sdk generate bundle USD(BUNDLE_GEN_FLAGS) 2",
"make bundle USE_IMAGE_DIGESTS=true",
"metadata: annotations: operators.openshift.io/infrastructure-features: '[\"disconnected\"]'",
"labels: operatorframework.io/arch.<arch>: supported 1 operatorframework.io/os.<os>: supported 2",
"labels: operatorframework.io/os.linux: supported",
"labels: operatorframework.io/arch.amd64: supported",
"labels: operatorframework.io/arch.s390x: supported operatorframework.io/os.zos: supported operatorframework.io/os.linux: supported 1 operatorframework.io/arch.amd64: supported 2",
"metadata: annotations: operatorframework.io/suggested-namespace: <namespace> 1",
"metadata: annotations: operatorframework.io/suggested-namespace-template: 1 { \"apiVersion\": \"v1\", \"kind\": \"Namespace\", \"metadata\": { \"name\": \"vertical-pod-autoscaler-suggested-template\", \"annotations\": { \"openshift.io/node-selector\": \"\" } } }",
"module github.com/example-inc/memcached-operator go 1.19 require ( k8s.io/apimachinery v0.26.0 k8s.io/client-go v0.26.0 sigs.k8s.io/controller-runtime v0.14.1 operator-framework/operator-lib v0.11.0 )",
"import ( apiv1 \"github.com/operator-framework/api/pkg/operators/v1\" ) func NewUpgradeable(cl client.Client) (Condition, error) { return NewCondition(cl, \"apiv1.OperatorUpgradeable\") } cond, err := NewUpgradeable(cl);",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: webhook-operator.v0.0.1 spec: customresourcedefinitions: owned: - kind: WebhookTest name: webhooktests.webhook.operators.coreos.io 1 version: v1 install: spec: deployments: - name: webhook-operator-webhook strategy: deployment installModes: - supported: false type: OwnNamespace - supported: false type: SingleNamespace - supported: false type: MultiNamespace - supported: true type: AllNamespaces webhookdefinitions: - type: ValidatingAdmissionWebhook 2 admissionReviewVersions: - v1beta1 - v1 containerPort: 443 targetPort: 4343 deploymentName: webhook-operator-webhook failurePolicy: Fail generateName: vwebhooktest.kb.io rules: - apiGroups: - webhook.operators.coreos.io apiVersions: - v1 operations: - CREATE - UPDATE resources: - webhooktests sideEffects: None webhookPath: /validate-webhook-operators-coreos-io-v1-webhooktest - type: MutatingAdmissionWebhook 3 admissionReviewVersions: - v1beta1 - v1 containerPort: 443 targetPort: 4343 deploymentName: webhook-operator-webhook failurePolicy: Fail generateName: mwebhooktest.kb.io rules: - apiGroups: - webhook.operators.coreos.io apiVersions: - v1 operations: - CREATE - UPDATE resources: - webhooktests sideEffects: None webhookPath: /mutate-webhook-operators-coreos-io-v1-webhooktest - type: ConversionWebhook 4 admissionReviewVersions: - v1beta1 - v1 containerPort: 443 targetPort: 4343 deploymentName: webhook-operator-webhook generateName: cwebhooktest.kb.io sideEffects: None webhookPath: /convert conversionCRDs: - webhooktests.webhook.operators.coreos.io 5",
"- displayName: MongoDB Standalone group: mongodb.com kind: MongoDbStandalone name: mongodbstandalones.mongodb.com resources: - kind: Service name: '' version: v1 - kind: StatefulSet name: '' version: v1beta2 - kind: Pod name: '' version: v1 - kind: ConfigMap name: '' version: v1 specDescriptors: - description: Credentials for Ops Manager or Cloud Manager. displayName: Credentials path: credentials x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:selector:core:v1:Secret' - description: Project this deployment belongs to. displayName: Project path: project x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:selector:core:v1:ConfigMap' - description: MongoDB version to be installed. displayName: Version path: version x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:label' statusDescriptors: - description: The status of each of the pods for the MongoDB cluster. displayName: Pod Status path: pods x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:podStatuses' version: v1 description: >- MongoDB Deployment consisting of only one host. No replication of data.",
"required: - name: etcdclusters.etcd.database.coreos.com version: v1beta2 kind: EtcdCluster displayName: etcd Cluster description: Represents a cluster of etcd nodes.",
"versions: - name: v1alpha1 served: true storage: false - name: v1beta1 1 served: true storage: true",
"customresourcedefinitions: owned: - name: cluster.example.com version: v1beta1 1 kind: cluster displayName: Cluster",
"versions: - name: v1alpha1 served: false 1 storage: true",
"versions: - name: v1alpha1 served: false storage: false 1 - name: v1beta1 served: true storage: true 2",
"versions: - name: v1beta1 served: true storage: true",
"metadata: annotations: alm-examples: >- [{\"apiVersion\":\"etcd.database.coreos.com/v1beta2\",\"kind\":\"EtcdCluster\",\"metadata\":{\"name\":\"example\",\"namespace\":\"<operator_namespace>\"},\"spec\":{\"size\":3,\"version\":\"3.2.13\"}},{\"apiVersion\":\"etcd.database.coreos.com/v1beta2\",\"kind\":\"EtcdRestore\",\"metadata\":{\"name\":\"example-etcd-cluster\"},\"spec\":{\"etcdCluster\":{\"name\":\"example-etcd-cluster\"},\"backupStorageType\":\"S3\",\"s3\":{\"path\":\"<full-s3-path>\",\"awsSecret\":\"<aws-secret>\"}}},{\"apiVersion\":\"etcd.database.coreos.com/v1beta2\",\"kind\":\"EtcdBackup\",\"metadata\":{\"name\":\"example-etcd-cluster-backup\"},\"spec\":{\"etcdEndpoints\":[\"<etcd-cluster-endpoints>\"],\"storageType\":\"S3\",\"s3\":{\"path\":\"<full-s3-path>\",\"awsSecret\":\"<aws-secret>\"}}}]",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: my-operator-v1.2.3 annotations: operators.operatorframework.io/internal-objects: '[\"my.internal.crd1.io\",\"my.internal.crd2.io\"]' 1",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: my-operator-v1.2.3 annotations: operatorframework.io/initialization-resource: |- { \"apiVersion\": \"ocs.openshift.io/v1\", \"kind\": \"StorageCluster\", \"metadata\": { \"name\": \"example-storagecluster\" }, \"spec\": { \"manageNodes\": false, \"monPVCTemplate\": { \"spec\": { \"accessModes\": [ \"ReadWriteOnce\" ], \"resources\": { \"requests\": { \"storage\": \"10Gi\" } }, \"storageClassName\": \"gp2\" } }, \"storageDeviceSets\": [ { \"count\": 3, \"dataPVCTemplate\": { \"spec\": { \"accessModes\": [ \"ReadWriteOnce\" ], \"resources\": { \"requests\": { \"storage\": \"1Ti\" } }, \"storageClassName\": \"gp2\", \"volumeMode\": \"Block\" } }, \"name\": \"example-deviceset\", \"placement\": {}, \"portable\": true, \"resources\": {} } ] } }",
"make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>",
"docker push <registry>/<user>/<bundle_image_name>:<tag>",
"operator-sdk run bundle \\ 1 -n <namespace> \\ 2 <registry>/<user>/<bundle_image_name>:<tag> 3",
"make catalog-build CATALOG_IMG=<registry>/<user>/<index_image_name>:<tag>",
"make catalog-push CATALOG_IMG=<registry>/<user>/<index_image_name>:<tag>",
"make bundle-build bundle-push catalog-build catalog-push BUNDLE_IMG=<bundle_image_pull_spec> CATALOG_IMG=<index_image_pull_spec>",
"IMAGE_TAG_BASE=quay.io/example/my-operator",
"make bundle-build bundle-push catalog-build catalog-push",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: cs-memcached namespace: <operator_namespace> spec: displayName: My Test publisher: Company sourceType: grpc grpcPodConfig: securityContextConfig: <security_mode> 1 image: quay.io/example/memcached-catalog:v0.0.1 2 updateStrategy: registryPoll: interval: 10m",
"oc get catalogsource",
"NAME DISPLAY TYPE PUBLISHER AGE cs-memcached My Test grpc Company 4h31m",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-test namespace: <operator_namespace> spec: targetNamespaces: - <operator_namespace>",
"\\ufeffapiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: catalogtest namespace: <catalog_namespace> spec: channel: \"alpha\" installPlanApproval: Manual name: catalog source: cs-memcached sourceNamespace: <operator_namespace> startingCSV: memcached-operator.v0.0.1",
"oc get og",
"NAME AGE my-test 4h40m",
"oc get csv",
"NAME DISPLAY VERSION REPLACES PHASE memcached-operator.v0.0.1 Test 0.0.1 Succeeded",
"oc get pods",
"NAME READY STATUS RESTARTS AGE 9098d908802769fbde8bd45255e69710a9f8420a8f3d814abe88b68f8ervdj6 0/1 Completed 0 4h33m catalog-controller-manager-7fd5b7b987-69s4n 2/2 Running 0 4h32m cs-memcached-7622r 1/1 Running 0 4h33m",
"operator-sdk run bundle <registry>/<user>/memcached-operator:v0.0.1",
"INFO[0006] Creating a File-Based Catalog of the bundle \"quay.io/demo/memcached-operator:v0.0.1\" INFO[0008] Generated a valid File-Based Catalog INFO[0012] Created registry pod: quay-io-demo-memcached-operator-v1-0-1 INFO[0012] Created CatalogSource: memcached-operator-catalog INFO[0012] OperatorGroup \"operator-sdk-og\" created INFO[0012] Created Subscription: memcached-operator-v0-0-1-sub INFO[0015] Approved InstallPlan install-h9666 for the Subscription: memcached-operator-v0-0-1-sub INFO[0015] Waiting for ClusterServiceVersion \"my-project/memcached-operator.v0.0.1\" to reach 'Succeeded' phase INFO[0015] Waiting for ClusterServiceVersion \"\"my-project/memcached-operator.v0.0.1\" to appear INFO[0026] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.1\" phase: Pending INFO[0028] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.1\" phase: Installing INFO[0059] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.1\" phase: Succeeded INFO[0059] OLM has successfully installed \"memcached-operator.v0.0.1\"",
"operator-sdk run bundle-upgrade <registry>/<user>/memcached-operator:v0.0.2",
"INFO[0002] Found existing subscription with name memcached-operator-v0-0-1-sub and namespace my-project INFO[0002] Found existing catalog source with name memcached-operator-catalog and namespace my-project INFO[0008] Generated a valid Upgraded File-Based Catalog INFO[0009] Created registry pod: quay-io-demo-memcached-operator-v0-0-2 INFO[0009] Updated catalog source memcached-operator-catalog with address and annotations INFO[0010] Deleted previous registry pod with name \"quay-io-demo-memcached-operator-v0-0-1\" INFO[0041] Approved InstallPlan install-gvcjh for the Subscription: memcached-operator-v0-0-1-sub INFO[0042] Waiting for ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" to reach 'Succeeded' phase INFO[0019] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" phase: Pending INFO[0042] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" phase: InstallReady INFO[0043] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" phase: Installing INFO[0044] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" phase: Succeeded INFO[0044] Successfully upgraded to \"memcached-operator.v0.0.2\"",
"operator-sdk cleanup memcached-operator",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: annotations: \"olm.properties\": '[{\"type\": \"olm.maxOpenShiftVersion\", \"value\": \"<cluster_version>\"}]' 1",
"com.redhat.openshift.versions: \"v4.7-v4.9\" 1",
"LABEL com.redhat.openshift.versions=\"<versions>\" 1",
"spec: securityContext: seccompProfile: type: RuntimeDefault 1 runAsNonRoot: true containers: - name: <operator_workload_container> securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL",
"spec: securityContext: 1 runAsNonRoot: true containers: - name: <operator_workload_container> securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL",
"containers: - name: my-container securityContext: allowPrivilegeEscalation: false capabilities: add: - \"NET_ADMIN\"",
"install: spec: clusterPermissions: - rules: - apiGroups: - security.openshift.io resourceNames: - privileged resources: - securitycontextconstraints verbs: - use serviceAccountName: default",
"spec: apiservicedefinitions:{} description: The <operator_name> requires a privileged pod security admission label set on the Operator's namespace. The Operator's agents require escalated permissions to restart the node if the node needs remediation.",
"operator-sdk scorecard <bundle_dir_or_image> [flags]",
"operator-sdk scorecard -h",
"./bundle └── tests └── scorecard └── config.yaml",
"kind: Configuration apiversion: scorecard.operatorframework.io/v1alpha3 metadata: name: config stages: - parallel: true tests: - image: quay.io/operator-framework/scorecard-test:v1.38.0 entrypoint: - scorecard-test - basic-check-spec labels: suite: basic test: basic-check-spec-test - image: quay.io/operator-framework/scorecard-test:v1.38.0 entrypoint: - scorecard-test - olm-bundle-validation labels: suite: olm test: olm-bundle-validation-test",
"make bundle",
"operator-sdk scorecard <bundle_dir_or_image>",
"{ \"apiVersion\": \"scorecard.operatorframework.io/v1alpha3\", \"kind\": \"TestList\", \"items\": [ { \"kind\": \"Test\", \"apiVersion\": \"scorecard.operatorframework.io/v1alpha3\", \"spec\": { \"image\": \"quay.io/operator-framework/scorecard-test:v1.38.0\", \"entrypoint\": [ \"scorecard-test\", \"olm-bundle-validation\" ], \"labels\": { \"suite\": \"olm\", \"test\": \"olm-bundle-validation-test\" } }, \"status\": { \"results\": [ { \"name\": \"olm-bundle-validation\", \"log\": \"time=\\\"2020-06-10T19:02:49Z\\\" level=debug msg=\\\"Found manifests directory\\\" name=bundle-test\\ntime=\\\"2020-06-10T19:02:49Z\\\" level=debug msg=\\\"Found metadata directory\\\" name=bundle-test\\ntime=\\\"2020-06-10T19:02:49Z\\\" level=debug msg=\\\"Getting mediaType info from manifests directory\\\" name=bundle-test\\ntime=\\\"2020-06-10T19:02:49Z\\\" level=info msg=\\\"Found annotations file\\\" name=bundle-test\\ntime=\\\"2020-06-10T19:02:49Z\\\" level=info msg=\\\"Could not find optional dependencies file\\\" name=bundle-test\\n\", \"state\": \"pass\" } ] } } ] }",
"-------------------------------------------------------------------------------- Image: quay.io/operator-framework/scorecard-test:v1.38.0 Entrypoint: [scorecard-test olm-bundle-validation] Labels: \"suite\":\"olm\" \"test\":\"olm-bundle-validation-test\" Results: Name: olm-bundle-validation State: pass Log: time=\"2020-07-15T03:19:02Z\" level=debug msg=\"Found manifests directory\" name=bundle-test time=\"2020-07-15T03:19:02Z\" level=debug msg=\"Found metadata directory\" name=bundle-test time=\"2020-07-15T03:19:02Z\" level=debug msg=\"Getting mediaType info from manifests directory\" name=bundle-test time=\"2020-07-15T03:19:02Z\" level=info msg=\"Found annotations file\" name=bundle-test time=\"2020-07-15T03:19:02Z\" level=info msg=\"Could not find optional dependencies file\" name=bundle-test",
"operator-sdk scorecard <bundle_dir_or_image> -o text --selector=test=basic-check-spec-test",
"operator-sdk scorecard <bundle_dir_or_image> -o text --selector=suite=olm",
"operator-sdk scorecard <bundle_dir_or_image> -o text --selector='test in (basic-check-spec-test,olm-bundle-validation-test)'",
"apiVersion: scorecard.operatorframework.io/v1alpha3 kind: Configuration metadata: name: config stages: - parallel: true 1 tests: - entrypoint: - scorecard-test - basic-check-spec image: quay.io/operator-framework/scorecard-test:v1.38.0 labels: suite: basic test: basic-check-spec-test - entrypoint: - scorecard-test - olm-bundle-validation image: quay.io/operator-framework/scorecard-test:v1.38.0 labels: suite: olm test: olm-bundle-validation-test",
"// Copyright 2020 The Operator-SDK Authors // // Licensed under the Apache License, Version 2.0 (the \"License\"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an \"AS IS\" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package main import ( \"encoding/json\" \"fmt\" \"log\" \"os\" scapiv1alpha3 \"github.com/operator-framework/api/pkg/apis/scorecard/v1alpha3\" apimanifests \"github.com/operator-framework/api/pkg/manifests\" ) // This is the custom scorecard test example binary // As with the Redhat scorecard test image, the bundle that is under // test is expected to be mounted so that tests can inspect the // bundle contents as part of their test implementations. // The actual test is to be run is named and that name is passed // as an argument to this binary. This argument mechanism allows // this binary to run various tests all from within a single // test image. const PodBundleRoot = \"/bundle\" func main() { entrypoint := os.Args[1:] if len(entrypoint) == 0 { log.Fatal(\"Test name argument is required\") } // Read the pod's untar'd bundle from a well-known path. cfg, err := apimanifests.GetBundleFromDir(PodBundleRoot) if err != nil { log.Fatal(err.Error()) } var result scapiv1alpha3.TestStatus // Names of the custom tests which would be passed in the // `operator-sdk` command. switch entrypoint[0] { case CustomTest1Name: result = CustomTest1(cfg) case CustomTest2Name: result = CustomTest2(cfg) default: result = printValidTests() } // Convert scapiv1alpha3.TestResult to json. prettyJSON, err := json.MarshalIndent(result, \"\", \" \") if err != nil { log.Fatal(\"Failed to generate json\", err) } fmt.Printf(\"%s\\n\", string(prettyJSON)) } // printValidTests will print out full list of test names to give a hint to the end user on what the valid tests are. func printValidTests() scapiv1alpha3.TestStatus { result := scapiv1alpha3.TestResult{} result.State = scapiv1alpha3.FailState result.Errors = make([]string, 0) result.Suggestions = make([]string, 0) str := fmt.Sprintf(\"Valid tests for this image include: %s %s\", CustomTest1Name, CustomTest2Name) result.Errors = append(result.Errors, str) return scapiv1alpha3.TestStatus{ Results: []scapiv1alpha3.TestResult{result}, } } const ( CustomTest1Name = \"customtest1\" CustomTest2Name = \"customtest2\" ) // Define any operator specific custom tests here. // CustomTest1 and CustomTest2 are example test functions. Relevant operator specific // test logic is to be implemented in similarly. 
func CustomTest1(bundle *apimanifests.Bundle) scapiv1alpha3.TestStatus { r := scapiv1alpha3.TestResult{} r.Name = CustomTest1Name r.State = scapiv1alpha3.PassState r.Errors = make([]string, 0) r.Suggestions = make([]string, 0) almExamples := bundle.CSV.GetAnnotations()[\"alm-examples\"] if almExamples == \"\" { fmt.Println(\"no alm-examples in the bundle CSV\") } return wrapResult(r) } func CustomTest2(bundle *apimanifests.Bundle) scapiv1alpha3.TestStatus { r := scapiv1alpha3.TestResult{} r.Name = CustomTest2Name r.State = scapiv1alpha3.PassState r.Errors = make([]string, 0) r.Suggestions = make([]string, 0) almExamples := bundle.CSV.GetAnnotations()[\"alm-examples\"] if almExamples == \"\" { fmt.Println(\"no alm-examples in the bundle CSV\") } return wrapResult(r) } func wrapResult(r scapiv1alpha3.TestResult) scapiv1alpha3.TestStatus { return scapiv1alpha3.TestStatus{ Results: []scapiv1alpha3.TestResult{r}, } }",
"operator-sdk bundle validate <bundle_dir_or_image> <flags>",
"./bundle ├── manifests │ ├── cache.my.domain_memcacheds.yaml │ └── memcached-operator.clusterserviceversion.yaml └── metadata └── annotations.yaml",
"INFO[0000] All validation tests have completed successfully",
"ERRO[0000] Error: Value cache.example.com/v1alpha1, Kind=Memcached: CRD \"cache.example.com/v1alpha1, Kind=Memcached\" is present in bundle \"\" but not defined in CSV",
"WARN[0000] Warning: Value : (memcached-operator.v0.0.1) annotations not found INFO[0000] All validation tests have completed successfully",
"operator-sdk bundle validate -h",
"operator-sdk bundle validate <bundle_dir_or_image> --select-optional <test_label>",
"operator-sdk bundle validate ./bundle",
"operator-sdk bundle validate <bundle_registry>/<bundle_image_name>:<tag>",
"operator-sdk bundle validate <bundle_dir_or_image> --select-optional <test_label>",
"ERRO[0000] Error: Value apiextensions.k8s.io/v1, Kind=CustomResource: unsupported media type registry+v1 for bundle object WARN[0000] Warning: Value k8sevent.v0.0.1: owned CRD \"k8sevents.k8s.k8sevent.com\" has an empty description",
"// Simple query nn := types.NamespacedName{ Name: \"cluster\", } infraConfig := &configv1.Infrastructure{} err = crClient.Get(context.Background(), nn, infraConfig) if err != nil { return err } fmt.Printf(\"using crclient: %v\\n\", infraConfig.Status.ControlPlaneTopology) fmt.Printf(\"using crclient: %v\\n\", infraConfig.Status.InfrastructureTopology)",
"operatorConfigInformer := configinformer.NewSharedInformerFactoryWithOptions(configClient, 2*time.Second) infrastructureLister = operatorConfigInformer.Config().V1().Infrastructures().Lister() infraConfig, err := configClient.ConfigV1().Infrastructures().Get(context.Background(), \"cluster\", metav1.GetOptions{}) if err != nil { return err } // fmt.Printf(\"%v\\n\", infraConfig) fmt.Printf(\"%v\\n\", infraConfig.Status.ControlPlaneTopology) fmt.Printf(\"%v\\n\", infraConfig.Status.InfrastructureTopology)",
"import ( \"github.com/operator-framework/operator-sdk/pkg/leader\" ) func main() { err = leader.Become(context.TODO(), \"memcached-operator-lock\") if err != nil { log.Error(err, \"Failed to retry for leader lock\") os.Exit(1) } }",
"import ( \"sigs.k8s.io/controller-runtime/pkg/manager\" ) func main() { opts := manager.Options{ LeaderElection: true, LeaderElectionID: \"memcached-operator-lock\" } mgr, err := manager.New(cfg, opts) }",
"cfg = Config{ log: logf.Log.WithName(\"prune\"), DryRun: false, Clientset: client, LabelSelector: \"app=<operator_name>\", Resources: []schema.GroupVersionKind{ {Group: \"\", Version: \"\", Kind: PodKind}, }, Namespaces: []string{\"<operator_namespace>\"}, Strategy: StrategyConfig{ Mode: MaxCountStrategy, MaxCountSetting: 1, }, PreDeleteHook: myhook, }",
"err := cfg.Execute(ctx)",
"packagemanifests/ └── etcd ├── 0.0.1 │ ├── etcdcluster.crd.yaml │ └── etcdoperator.clusterserviceversion.yaml ├── 0.0.2 │ ├── etcdbackup.crd.yaml │ ├── etcdcluster.crd.yaml │ ├── etcdoperator.v0.0.2.clusterserviceversion.yaml │ └── etcdrestore.crd.yaml └── etcd.package.yaml",
"bundle/ ├── bundle-0.0.1 │ ├── bundle.Dockerfile │ ├── manifests │ │ ├── etcdcluster.crd.yaml │ │ ├── etcdoperator.clusterserviceversion.yaml │ ├── metadata │ │ └── annotations.yaml │ └── tests │ └── scorecard │ └── config.yaml └── bundle-0.0.2 ├── bundle.Dockerfile ├── manifests │ ├── etcdbackup.crd.yaml │ ├── etcdcluster.crd.yaml │ ├── etcdoperator.v0.0.2.clusterserviceversion.yaml │ ├── etcdrestore.crd.yaml ├── metadata │ └── annotations.yaml └── tests └── scorecard └── config.yaml",
"operator-sdk pkgman-to-bundle <package_manifests_dir> \\ 1 [--output-dir <directory>] \\ 2 --image-tag-base <image_name_base> 3",
"operator-sdk run bundle <bundle_image_name>:<tag>",
"INFO[0025] Successfully created registry pod: quay-io-my-etcd-0-9-4 INFO[0025] Created CatalogSource: etcd-catalog INFO[0026] OperatorGroup \"operator-sdk-og\" created INFO[0026] Created Subscription: etcdoperator-v0-9-4-sub INFO[0031] Approved InstallPlan install-5t58z for the Subscription: etcdoperator-v0-9-4-sub INFO[0031] Waiting for ClusterServiceVersion \"default/etcdoperator.v0.9.4\" to reach 'Succeeded' phase INFO[0032] Waiting for ClusterServiceVersion \"default/etcdoperator.v0.9.4\" to appear INFO[0048] Found ClusterServiceVersion \"default/etcdoperator.v0.9.4\" phase: Pending INFO[0049] Found ClusterServiceVersion \"default/etcdoperator.v0.9.4\" phase: Installing INFO[0064] Found ClusterServiceVersion \"default/etcdoperator.v0.9.4\" phase: Succeeded INFO[0065] OLM has successfully installed \"etcdoperator.v0.9.4\"",
"operator-sdk <command> [<subcommand>] [<argument>] [<flags>]",
"operator-sdk completion bash",
"bash completion for operator-sdk -*- shell-script -*- ex: ts=4 sw=4 et filetype=sh",
"operator-sdk --version operator-sdk version 0.1.0",
"mkdir -p USDGOPATH/src/github.com/example-inc/ cd USDGOPATH/src/github.com/example-inc/ mv memcached-operator old-memcached-operator operator-sdk new memcached-operator --skip-git-init ls memcached-operator old-memcached-operator",
"cp -rf old-memcached-operator/.git memcached-operator/.git",
"cd memcached-operator operator-sdk add api --api-version=cache.example.com/v1alpha1 --kind=Memcached tree pkg/apis pkg/apis/ ├── addtoscheme_cache_v1alpha1.go ├── apis.go └── cache └── v1alpha1 ├── doc.go ├── memcached_types.go ├── register.go └── zz_generated.deepcopy.go",
"func init() { SchemeBuilder.Register(&Memcached{}, &MemcachedList{})",
"sdk.Watch(\"cache.example.com/v1alpha1\", \"Memcached\", \"default\", time.Duration(5)*time.Second)",
"operator-sdk add controller --api-version=cache.example.com/v1alpha1 --kind=Memcached tree pkg/controller pkg/controller/ ├── add_memcached.go ├── controller.go └── memcached └── memcached_controller.go",
"import ( cachev1alpha1 \"github.com/example-inc/memcached-operator/pkg/apis/cache/v1alpha1\" ) func add(mgr manager.Manager, r reconcile.Reconciler) error { c, err := controller.New(\"memcached-controller\", mgr, controller.Options{Reconciler: r}) // Watch for changes to the primary resource Memcached err = c.Watch(&source.Kind{Type: &cachev1alpha1.Memcached{}}, &handler.EnqueueRequestForObject{}) // Watch for changes to the secondary resource pods and enqueue reconcile requests for the owner Memcached err = c.Watch(&source.Kind{Type: &corev1.Pod{}}, &handler.EnqueueRequestForOwner{ IsController: true, OwnerType: &cachev1alpha1.Memcached{}, }) }",
"// Watch for changes to the primary resource Memcached err = c.Watch(&source.Kind{Type: &cachev1alpha1.Memcached{}}, &handler.EnqueueRequestForObject{}) // Watch for changes to the secondary resource AppService and enqueue reconcile requests for the owner Memcached err = c.Watch(&source.Kind{Type: &appv1alpha1.AppService{}}, &handler.EnqueueRequestForOwner{ IsController: true, OwnerType: &cachev1alpha1.Memcached{}, })",
"operator-sdk add controller --api-version=app.example.com/v1alpha1 --kind=AppService",
"// Watch for changes to the primary resource AppService err = c.Watch(&source.Kind{Type: &appv1alpha1.AppService{}}, &handler.EnqueueRequestForObject{})",
"func (r *ReconcileMemcached) Reconcile(request reconcile.Request) (reconcile.Result, error)",
"func (h *Handler) Handle(ctx context.Context, event sdk.Event) error",
"import ( apierrors \"k8s.io/apimachinery/pkg/api/errors\" cachev1alpha1 \"github.com/example-inc/memcached-operator/pkg/apis/cache/v1alpha1\" ) func (r *ReconcileMemcached) Reconcile(request reconcile.Request) (reconcile.Result, error) { // Fetch the Memcached instance instance := &cachev1alpha1.Memcached{} err := r.client.Get(context.TODO() request.NamespacedName, instance) if err != nil { if apierrors.IsNotFound(err) { // Request object not found, could have been deleted after reconcile request. // Owned objects are automatically garbage collected. // Return and don't requeue return reconcile.Result{}, nil } // Error reading the object - requeue the request. return reconcile.Result{}, err } // Rest of your reconcile code goes here. }",
"reconcilePeriod := 30 * time.Second reconcileResult := reconcile.Result{RequeueAfter: reconcilePeriod} // Update the status err := r.client.Update(context.TODO(), memcached) if err != nil { log.Printf(\"failed to update memcached status: %v\", err) return reconcileResult, err } return reconcileResult, nil",
"// Create dep := &appsv1.Deployment{...} err := sdk.Create(dep) // v0.0.1 err := r.client.Create(context.TODO(), dep) // Update err := sdk.Update(dep) // v0.0.1 err := r.client.Update(context.TODO(), dep) // Delete err := sdk.Delete(dep) // v0.0.1 err := r.client.Delete(context.TODO(), dep) // List podList := &corev1.PodList{} labelSelector := labels.SelectorFromSet(labelsForMemcached(memcached.Name)) listOps := &metav1.ListOptions{LabelSelector: labelSelector} err := sdk.List(memcached.Namespace, podList, sdk.WithListOptions(listOps)) // v0.1.0 listOps := &client.ListOptions{Namespace: memcached.Namespace, LabelSelector: labelSelector} err := r.client.List(context.TODO(), listOps, podList) // Get dep := &appsv1.Deployment{APIVersion: \"apps/v1\", Kind: \"Deployment\", Name: name, Namespace: namespace} err := sdk.Get(dep) // v0.1.0 dep := &appsv1.Deployment{} err = r.client.Get(context.TODO(), types.NamespacedName{Name: name, Namespace: namespace}, dep)",
"// newReconciler returns a new reconcile.Reconciler func newReconciler(mgr manager.Manager) reconcile.Reconciler { return &ReconcileMemcached{client: mgr.GetClient(), scheme: mgr.GetScheme(), foo: \"bar\"} } // ReconcileMemcached reconciles a Memcached object type ReconcileMemcached struct { client client.Client scheme *runtime.Scheme // Other fields foo string }"
]
| https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/operators/developing-operators |
Chapter 3. ClusterResourceQuota [quota.openshift.io/v1] | Chapter 3. ClusterResourceQuota [quota.openshift.io/v1] Description ClusterResourceQuota mirrors ResourceQuota at a cluster scope. This object is easily convertible to synthetic ResourceQuota object to allow quota evaluation re-use. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required metadata spec 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Spec defines the desired quota status object Status defines the actual enforced quota and its current usage 3.1.1. .spec Description Spec defines the desired quota Type object Required quota selector Property Type Description quota object Quota defines the desired quota selector object Selector is the selector used to match projects. It should only select active projects on the scale of dozens (though it can select many more less active projects). These projects will contend on object creation through this resource. 3.1.2. .spec.quota Description Quota defines the desired quota Type object Property Type Description hard integer-or-string hard is the set of desired hard limits for each named resource. More info: https://kubernetes.io/docs/concepts/policy/resource-quotas/ scopeSelector object scopeSelector is also a collection of filters like scopes that must match each object tracked by a quota but expressed using ScopeSelectorOperator in combination with possible values. For a resource to match, both scopes AND scopeSelector (if specified in spec), must be matched. scopes array (string) A collection of filters that must match each object tracked by a quota. If not specified, the quota matches all objects. 3.1.3. .spec.quota.scopeSelector Description scopeSelector is also a collection of filters like scopes that must match each object tracked by a quota but expressed using ScopeSelectorOperator in combination with possible values. For a resource to match, both scopes AND scopeSelector (if specified in spec), must be matched. Type object Property Type Description matchExpressions array A list of scope selector requirements by scope of the resources. matchExpressions[] object A scoped-resource selector requirement is a selector that contains values, a scope name, and an operator that relates the scope name and values. 3.1.4. .spec.quota.scopeSelector.matchExpressions Description A list of scope selector requirements by scope of the resources. Type array 3.1.5. .spec.quota.scopeSelector.matchExpressions[] Description A scoped-resource selector requirement is a selector that contains values, a scope name, and an operator that relates the scope name and values. 
Type object Required operator scopeName Property Type Description operator string Represents a scope's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. scopeName string The name of the scope that the selector applies to. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.6. .spec.selector Description Selector is the selector used to match projects. It should only select active projects on the scale of dozens (though it can select many more less active projects). These projects will contend on object creation through this resource. Type object Property Type Description annotations undefined (string) AnnotationSelector is used to select projects by annotation. labels `` LabelSelector is used to select projects by label. 3.1.7. .status Description Status defines the actual enforced quota and its current usage Type object Required total Property Type Description namespaces `` Namespaces slices the usage by project. This division allows for quick resolution of deletion reconciliation inside of a single project without requiring a recalculation across all projects. This can be used to pull the deltas for a given project. total object Total defines the actual enforced quota and its current usage across all projects 3.1.8. .status.total Description Total defines the actual enforced quota and its current usage across all projects Type object Property Type Description hard integer-or-string Hard is the set of enforced hard limits for each named resource. More info: https://kubernetes.io/docs/concepts/policy/resource-quotas/ used integer-or-string Used is the current observed total usage of the resource in the namespace. 3.2. API endpoints The following API endpoints are available: /apis/quota.openshift.io/v1/clusterresourcequotas DELETE : delete collection of ClusterResourceQuota GET : list objects of kind ClusterResourceQuota POST : create a ClusterResourceQuota /apis/quota.openshift.io/v1/watch/clusterresourcequotas GET : watch individual changes to a list of ClusterResourceQuota. deprecated: use the 'watch' parameter with a list operation instead. /apis/quota.openshift.io/v1/clusterresourcequotas/{name} DELETE : delete a ClusterResourceQuota GET : read the specified ClusterResourceQuota PATCH : partially update the specified ClusterResourceQuota PUT : replace the specified ClusterResourceQuota /apis/quota.openshift.io/v1/watch/clusterresourcequotas/{name} GET : watch changes to an object of kind ClusterResourceQuota. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/quota.openshift.io/v1/clusterresourcequotas/{name}/status GET : read status of the specified ClusterResourceQuota PATCH : partially update status of the specified ClusterResourceQuota PUT : replace status of the specified ClusterResourceQuota 3.2.1. /apis/quota.openshift.io/v1/clusterresourcequotas HTTP method DELETE Description delete collection of ClusterResourceQuota Table 3.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ClusterResourceQuota Table 3.2. HTTP responses HTTP code Reponse body 200 - OK ClusterResourceQuotaList schema 401 - Unauthorized Empty HTTP method POST Description create a ClusterResourceQuota Table 3.3. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.4. Body parameters Parameter Type Description body ClusterResourceQuota schema Table 3.5. HTTP responses HTTP code Reponse body 200 - OK ClusterResourceQuota schema 201 - Created ClusterResourceQuota schema 202 - Accepted ClusterResourceQuota schema 401 - Unauthorized Empty 3.2.2. /apis/quota.openshift.io/v1/watch/clusterresourcequotas HTTP method GET Description watch individual changes to a list of ClusterResourceQuota. deprecated: use the 'watch' parameter with a list operation instead. Table 3.6. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 3.2.3. /apis/quota.openshift.io/v1/clusterresourcequotas/{name} Table 3.7. Global path parameters Parameter Type Description name string name of the ClusterResourceQuota HTTP method DELETE Description delete a ClusterResourceQuota Table 3.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 3.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ClusterResourceQuota Table 3.10. HTTP responses HTTP code Reponse body 200 - OK ClusterResourceQuota schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ClusterResourceQuota Table 3.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.12. HTTP responses HTTP code Reponse body 200 - OK ClusterResourceQuota schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ClusterResourceQuota Table 3.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.14. Body parameters Parameter Type Description body ClusterResourceQuota schema Table 3.15. HTTP responses HTTP code Reponse body 200 - OK ClusterResourceQuota schema 201 - Created ClusterResourceQuota schema 401 - Unauthorized Empty 3.2.4. /apis/quota.openshift.io/v1/watch/clusterresourcequotas/{name} Table 3.16. Global path parameters Parameter Type Description name string name of the ClusterResourceQuota HTTP method GET Description watch changes to an object of kind ClusterResourceQuota. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 3.17. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 3.2.5. /apis/quota.openshift.io/v1/clusterresourcequotas/{name}/status Table 3.18. Global path parameters Parameter Type Description name string name of the ClusterResourceQuota HTTP method GET Description read status of the specified ClusterResourceQuota Table 3.19. HTTP responses HTTP code Reponse body 200 - OK ClusterResourceQuota schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ClusterResourceQuota Table 3.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.21. HTTP responses HTTP code Reponse body 200 - OK ClusterResourceQuota schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ClusterResourceQuota Table 3.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.23. Body parameters Parameter Type Description body ClusterResourceQuota schema Table 3.24. HTTP responses HTTP code Reponse body 200 - OK ClusterResourceQuota schema 201 - Created ClusterResourceQuota schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/schedule_and_quota_apis/clusterresourcequota-quota-openshift-io-v1 |
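For orientation, the ClusterResourceQuota schema documented above can be exercised with a manifest along the following lines. This is a minimal sketch: the spec.quota block is assumed from the standard schema, and the label key, annotation key, and limit values are illustrative rather than anything mandated by this reference.

```yaml
apiVersion: quota.openshift.io/v1
kind: ClusterResourceQuota
metadata:
  name: example-crq
spec:
  quota:
    hard:
      pods: "10"
      secrets: "20"
  selector:
    labels:                                  # LabelSelector over project labels
      matchLabels:
        team: example-team                   # assumed label key/value
    annotations:                             # key/value match on project annotations
      openshift.io/requester: example-user   # assumed annotation key/value
```

After the object is created (for example with oc create -f), the enforced totals appear under status.total and the per-project breakdown under status.namespaces, as described in sections 3.1.7 and 3.1.8 above.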
5.3.5. /proc/ide/ | 5.3.5. /proc/ide/ This directory contains information about IDE devices on the system. Each IDE channel is represented as a separate directory, such as /proc/ide/ide0 and /proc/ide/ide1 . In addition, a drivers file is available, providing the version number of the various drivers used on the IDE channels: Many chipsets also provide a file in this directory with additional data concerning the drives connected through the channels. For example, a generic Intel PIIX4 Ultra 33 chipset produces the /proc/ide/piix file which reveals whether DMA or UDMA is enabled for the devices on the IDE channels: Navigating into the directory for an IDE channel, such as ide0 , provides additional information. The channel file provides the channel number, while the model identifies the bus type for the channel (such as pci ). 5.3.5.1. Device Directories Within each IDE channel directory is a device directory. The name of the device directory corresponds to the drive letter in the /dev/ directory. For instance, the first IDE drive on ide0 would be hda . Note There is a symbolic link to each of these device directories in the /proc/ide/ directory. Each device directory contains a collection of information and statistics. The contents of these directories vary according to the type of device connected. Some of the more useful files common to many devices include: cache - The device cache. capacity - The capacity of the device, in 512 byte blocks. driver - The driver and version used to control the device. geometry - The physical and logical geometry of the device. media - The type of device, such as a disk . model - The model name or number of the device. settings - A collection of current device parameters. This file usually contains quite a bit of useful, technical information. A sample settings file for a standard IDE hard disk looks similar to the following: | [
"ide-floppy version 0.99.newide ide-cdrom version 4.61 ide-disk version 1.18",
"Intel PIIX4 Ultra 33 Chipset. ------------- Primary Channel ---------------- Secondary Channel ------------- enabled enabled ------------- drive0 --------- drive1 -------- drive0 ---------- drive1 ------ DMA enabled: yes no yes no UDMA enabled: yes no no no UDMA enabled: 2 X X X UDMA DMA PIO",
"name value min max mode ---- ----- --- --- ---- acoustic 0 0 254 rw address 0 0 2 rw bios_cyl 38752 0 65535 rw bios_head 16 0 255 rw bios_sect 63 0 63 rw bswap 0 0 1 r current_speed 68 0 70 rw failures 0 0 65535 rw init_speed 68 0 70 rw io_32bit 0 0 3 rw keepsettings 0 0 1 rw lun 0 0 7 rw max_failures 1 0 65535 rw multcount 16 0 16 rw nice1 1 0 1 rw nowerr 0 0 1 rw number 0 0 3 rw pio_mode write-only 0 255 w unmaskirq 0 0 1 rw using_dma 1 0 1 rw wcache 1 0 1 rw"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-proc-dir-ide |
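As a quick illustration of the files described above, everything under /proc/ide/ can be read with ordinary tools. The channel and drive names below (ide0, hda) are examples; substitute the ones present on your own system:

```sh
# Driver versions loaded for the IDE channels
cat /proc/ide/drivers

# Chipset-specific file (present only on matching hardware)
cat /proc/ide/piix

# Per-channel and per-device information for the first drive on ide0
cat /proc/ide/ide0/channel
cat /proc/ide/ide0/hda/model
cat /proc/ide/ide0/hda/capacity    # size in 512-byte blocks
cat /proc/ide/ide0/hda/settings    # current device parameters
```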
Chapter 4. 3scale API Management automation using webhooks | Chapter 4. 3scale API Management automation using webhooks Webhooks is a feature that facilitates automation, and is also used to integrate other systems based on events that occur in 3scale. When specified events happen within the 3scale system, your applications will be notified with a webhook message. As an example, by configuring webhooks, you can use the data from a new account signup to populate your Developer Portal. 4.1. Overview of webhooks A webhook is a custom HTTP callback triggered by an event selected from the available ones in the Webhooks configuration window. When one of these events occurs, the 3scale system makes an HTTP or HTTPS request to the URL address specified in the webhooks section. With webhooks, you can configure the listener to invoke some desired behavior such as event tracking. The format of the webhook is always the same. It makes a post to the endpoint with an XML document of the following structure: <?xml version="1.0" encoding="UTF-8"?> <event> <type>application</type> <action>updated</action> <object> THE APPLICATION OBJECT AS WOULD BE RETURNED BY A GET ON THE ACCOUNT MANAGEMENT API </object> </event> Each element provides information: <type> Gives you the subject of the event such as application , account , and so on. <action> Specifies what has been done, by using values such as updated , created , deleted . <object> Constitutes the XML object itself in the same format that is returned by the Account Management API. To check this, you can use our interactive ActiveDocs . If you need to provide assurance that the webhook was issued by 3scale, expose an HTTPS webhook URL and add a custom parameter to your webhook declaration in 3scale. For example: https://your-webhook-endpoint?someSecretParameterName=someSecretParameterValue . Decide on the parameter name and value. Then, inside your webhook endpoint, check for the presence of this parameter value. 4.2. Configuring webhooks Procedure Select Account Settings from the Dashboard menu, then navigate to Integrate > Webhooks . Indicate the behavior for webhooks. There are two options: Webhooks enabled : Select this checkbox to enable or disable webhooks. Actions in the admin portal also trigger webhooks : Select this checkbox to trigger a webhook when an event happens. Consider the following: When making calls to the internal 3scale APIs configured with the triggering events, use an access token; not a provider key. If you leave this checkbox cleared, only actions in the Developer Portal trigger webhooks. Specify the URL address for notification of the selected events when they trigger. Select the events that will trigger the callback to the indicated URL address. Once you have configured the settings, click Update webhooks settings to save your changes. 4.3. Troubleshooting webhooks If you experience an outage for your listening endpoint, you can recover failed deliveries. 3scale will consider a webhook delivered if your endpoint responds with a 200 code. Otherwise, it will retry 5 times with a 60 seconds gap. After any recovery from an outage, or periodically, you should run a check and if applicable clean up the queue. You can find more information about the following methods in ActiveDocs: Webhooks list failed deliveries. Webhooks delete failed deliveries. Additional resources Adding ActiveDocs to 3scale | [
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <event> <type>application</type> <action>updated</action> <object> THE APPLICATION OBJECT AS WOULD BE RETURNED BY A GET ON THE ACCOUNT MANAGEMENT API </object> </event>"
]
| https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/operating_red_hat_3scale_api_management/threescale-automation-webhooks |
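Before pointing 3scale at your listener, you can exercise the endpoint by replaying the XML payload described above with curl. The endpoint URL and the secret parameter name and value are placeholders that you choose yourself; a sketch might look like this:

```sh
curl -X POST \
  "https://your-webhook-endpoint?someSecretParameterName=someSecretParameterValue" \
  -H "Content-Type: application/xml" \
  --data '<?xml version="1.0" encoding="UTF-8"?>
<event>
  <type>application</type>
  <action>updated</action>
  <object><!-- the application object, as returned by the Account Management API --></object>
</event>'
```

A listener that answers with a 200 code is treated as a successful delivery; any other response triggers the retry behavior described in the troubleshooting section.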
Chapter 52. CertificateAuthority schema reference | Chapter 52. CertificateAuthority schema reference Used in: KafkaSpec Configuration of how TLS certificates are used within the cluster. This applies to certificates used for both internal communication within the cluster and to certificates used for client access via Kafka.spec.kafka.listeners.tls . Property Description generateCertificateAuthority If true then Certificate Authority certificates will be generated automatically. Otherwise the user will need to provide a Secret with the CA certificate. Default is true. boolean generateSecretOwnerReference If true , the Cluster and Client CA Secrets are configured with the ownerReference set to the Kafka resource. If the Kafka resource is deleted when true , the CA Secrets are also deleted. If false , the ownerReference is disabled. If the Kafka resource is deleted when false , the CA Secrets are retained and available for reuse. Default is true . boolean validityDays The number of days generated certificates should be valid for. The default is 365. integer renewalDays The number of days in the certificate renewal period. This is the number of days before a certificate expires during which renewal actions may be performed. When generateCertificateAuthority is true, this will cause the generation of a new certificate. When generateCertificateAuthority is false, this will cause extra logging at WARN level about the pending certificate expiry. Default is 30. integer certificateExpirationPolicy How should CA certificate expiration be handled when generateCertificateAuthority=true . The default is for a new CA certificate to be generated reusing the existing private key. string (one of [replace-key, renew-certificate]) | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-certificateauthority-reference
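As a rough sketch, these properties are set in the clusterCa and clientsCa blocks of the Kafka resource (the field names are assumed from the KafkaSpec usage noted above, and all values are illustrative):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
    listeners:
      - name: tls
        port: 9093
        type: internal
        tls: true
    storage:
      type: ephemeral
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  clusterCa:                                  # CertificateAuthority for internal cluster certificates
    generateCertificateAuthority: true
    validityDays: 365
    renewalDays: 30
    certificateExpirationPolicy: renew-certificate
  clientsCa:                                  # CertificateAuthority for client certificates
    generateCertificateAuthority: false       # provide your own CA certificate in a Secret instead
```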
Chapter 2. Multiple Hosts | Chapter 2. Multiple Hosts 2.1. Using Multiple Hosts JBoss Data Virtualization may be clustered over several servers utilizing failover and load balancing. The easiest way to enable these features is for the client to specify multiple hostname and port number combinations in the URL connection string as a comma-separated list of host:port combinations: If you are connecting with the data source class, the setAlternateServers method can be used to specify the failover servers. The format is also a comma-separated list of host:port combinations. The client randomly selects one of the JBoss Data Virtualization servers from the list and establishes a session with that server. If a connection cannot be established, then each of the remaining servers will be tried in random order. This allows for both connection-time failover and random server selection load balancing.
"jdbc:teiid:<vdb-name>@mm://host1:31000,host1:31001,host2:31000;version=2"
]
| https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_1_client_development/chap-Multiple_Hosts |
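A small client-side sketch tying the two approaches together. The host names, ports, VDB name, and credentials are placeholders, and the TeiidDataSource class and its setters are assumed from the standard Teiid JDBC client library:

```java
import java.sql.Connection;
import java.sql.DriverManager;

import org.teiid.jdbc.TeiidDataSource;

public class MultiHostExample {
    public static void main(String[] args) throws Exception {
        // Connection-time failover and load balancing via the host list in the URL
        String url = "jdbc:teiid:MyVDB@mm://host1:31000,host1:31001,host2:31000;version=2";
        try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
            System.out.println("Connected: " + conn.getMetaData().getURL());
        }

        // The same topology expressed through the data source class
        TeiidDataSource ds = new TeiidDataSource();
        ds.setDatabaseName("MyVDB");
        ds.setServerName("host1");
        ds.setPortNumber(31000);
        ds.setAlternateServers("host1:31001,host2:31000"); // comma-separated host:port list
        try (Connection conn = ds.getConnection("user", "password")) {
            System.out.println("Connected via data source");
        }
    }
}
```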
Chapter 5. Installing a cluster on IBM Power Virtual Server into an existing VPC | Chapter 5. Installing a cluster on IBM Power Virtual Server into an existing VPC In OpenShift Container Platform version 4.15, you can install a cluster into an existing Virtual Private Cloud (VPC) on IBM Cloud(R). The installation program provisions the rest of the required infrastructure, which you can then further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 5.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an IBM Cloud(R) account to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. You configured the ccoctl utility before you installed the cluster. For more information, see Configuring the Cloud Credential Operator utility . 5.2. About using a custom VPC In OpenShift Container Platform 4.15, you can deploy a cluster using an existing IBM(R) Virtual Private Cloud (VPC). Because the installation program cannot know what other components are in your existing subnets, it cannot choose subnet CIDRs and so forth. You must configure networking for the subnets to which you will install the cluster. 5.2.1. Requirements for using your VPC You must correctly configure the existing VPC and its subnets before you install the cluster. The installation program does not create a VPC or VPC subnet in this scenario. The installation program cannot: Subdivide network ranges for the cluster to use Set route tables for the subnets Set VPC options like DHCP Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. 5.2.2. VPC validation The VPC and all of the subnets must be in an existing resource group. The cluster is deployed to this resource group. As part of the installation, specify the following in the install-config.yaml file: The name of the resource group The name of VPC The name of the VPC subnet To ensure that the subnets that you provide are suitable, the installation program confirms that all of the subnets you specify exists. Note Subnet IDs are not supported. 5.2.3. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: ICMP Ingress is allowed to the entire network. TCP port 22 Ingress (SSH) is allowed to the entire network. Control plane TCP 6443 Ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 Ingress (MCS) is allowed to the entire network. 5.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.15, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. 
Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 5.4. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. 
Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 5.5. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 5.6. Exporting the API key You must set the API key you created as a global variable; the installation program ingests the variable during startup to set the API key. Prerequisites You have created either a user API key or service ID API key for your IBM Cloud(R) account. Procedure Export your API key for your account as a global variable: USD export IBMCLOUD_API_KEY=<api_key> Important You must set the variable name exactly as specified; the installation program expects the variable name to be present during startup. 5.7. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. 
When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Enter a descriptive name for your cluster. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for IBM Power(R) Virtual Server 5.7.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 5.1. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 5.7.2. 
Sample customized install-config.yaml file for IBM Power Virtual Server You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com compute: 1 2 - architecture: ppc64le hyperthreading: Enabled 3 name: worker platform: powervs: smtLevel: 8 4 replicas: 3 controlPlane: 5 6 architecture: ppc64le hyperthreading: Enabled 7 name: master platform: powervs: smtLevel: 8 8 replicas: 3 metadata: creationTimestamp: null name: example-cluster-existing-vpc networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 machineNetwork: - cidr: 192.168.0.0/24 networkType: OVNKubernetes 10 serviceNetwork: - 172.30.0.0/16 platform: powervs: userID: ibm-user-id powervsResourceGroup: "ibmcloud-resource-group" region: powervs-region vpcRegion : vpc-region vpcName: name-of-existing-vpc 11 vpcSubnets: 12 - powervs-region-example-subnet-1 zone: powervs-zone serviceInstanceGUID: "powervs-region-service-instance-guid" credentialsMode: Manual publish: External 13 pullSecret: '{"auths": ...}' 14 fips: false sshKey: ssh-ed25519 AAAA... 15 1 5 If you do not provide these parameters and values, the installation program provides the default value. 2 6 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Both sections currently define a single machine pool. Only one control plane pool is used. 3 7 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. 4 8 The smtLevel specifies the level of SMT to set to the control plane and compute machines. The supported values are 1, 2, 4, 8, 'off' and 'on' . The default value is 8. The smtLevel 'off' sets SMT to off and smtlevel 'on' sets SMT to the default value 8 on the cluster nodes. Note When simultaneous multithreading (SMT), or hyperthreading is not enabled, one vCPU is equivalent to one physical core. When enabled, total vCPUs is computed as (Thread(s) per core * Core(s) per socket) * Socket(s). The smtLevel controls the threads per core. Lower SMT levels may require additional assigned cores when deploying the cluster nodes. You can do this by setting the 'processors' parameter in the install-config.yaml file to an appropriate value to meet the requirements for deploying OpenShift Container Platform successfully. 9 The machine CIDR must contain the subnets for the compute machines and control plane machines. 10 The cluster network plugin for installation. The supported value is OVNKubernetes . 11 Specify the name of an existing VPC. 12 Specify the name of the existing VPC subnet. The subnets must belong to the VPC that you specified. Specify a subnet for each availability zone in the region. 13 Specify how to publish the user-facing endpoints of your cluster. 14 Required. The installation program prompts you for this value. 
15 Provide the sshKey value that you use to access the machines in your cluster. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 5.7.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. 
Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 5.8. Manually creating IAM Installing the cluster requires that the Cloud Credential Operator (CCO) operate in manual mode. While the installation program configures the CCO for manual mode, you must specify the identity and access management secrets for you cloud provider. You can use the Cloud Credential Operator (CCO) utility ( ccoctl ) to create the required IBM Cloud(R) resources. Prerequisites You have configured the ccoctl binary. You have an existing install-config.yaml file. Procedure Edit the install-config.yaml configuration file so that it contains the credentialsMode parameter set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: ppc64le hyperthreading: Enabled 1 This line is added to set the credentialsMode parameter to Manual . To generate the manifests, run the following command from the directory that contains the installation program: USD ./openshift-install create manifests --dir <installation_directory> From the directory that contains the installation program, set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. 
Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: "1.0" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer Create the service ID for each credential request, assign the policies defined, create an API key, and generate the secret: USD ccoctl ibmcloud create-service-id \ --credentials-requests-dir=<path_to_credential_requests_directory> \ 1 --name=<cluster_name> \ 2 --output-dir=<installation_directory> \ 3 --resource-group-name=<resource_group_name> 4 1 Specify the directory containing the files for the component CredentialsRequest objects. 2 Specify the name of the OpenShift Container Platform cluster. 3 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 4 Optional: Specify the name of the resource group used for scoping the access policies. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. If an incorrect resource group name is provided, the installation fails during the bootstrap phase. To find the correct resource group name, run the following command: USD grep resourceGroup <installation_directory>/manifests/cluster-infrastructure-02-config.yml Verification Ensure that the appropriate secrets were generated in your cluster's manifests directory. 5.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . 
Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 5.10. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.15. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. 
Click Download Now to the OpenShift v4.15 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.15 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 5.11. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources Accessing the web console 5.12. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.15, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources About remote health monitoring 5.13. steps Customize your cluster Optional: Opt out of remote health reporting | [
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"export IBMCLOUD_API_KEY=<api_key>",
"./openshift-install create install-config --dir <installation_directory> 1",
"apiVersion: v1 baseDomain: example.com compute: 1 2 - architecture: ppc64le hyperthreading: Enabled 3 name: worker platform: powervs: smtLevel: 8 4 replicas: 3 controlPlane: 5 6 architecture: ppc64le hyperthreading: Enabled 7 name: master platform: powervs: smtLevel: 8 8 replicas: 3 metadata: creationTimestamp: null name: example-cluster-existing-vpc networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 machineNetwork: - cidr: 192.168.0.0/24 networkType: OVNKubernetes 10 serviceNetwork: - 172.30.0.0/16 platform: powervs: userID: ibm-user-id powervsResourceGroup: \"ibmcloud-resource-group\" region: powervs-region vpcRegion : vpc-region vpcName: name-of-existing-vpc 11 vpcSubnets: 12 - powervs-region-example-subnet-1 zone: powervs-zone serviceInstanceGUID: \"powervs-region-service-instance-guid\" credentialsMode: Manual publish: External 13 pullSecret: '{\"auths\": ...}' 14 fips: false sshKey: ssh-ed25519 AAAA... 15",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: ppc64le hyperthreading: Enabled",
"./openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer",
"ccoctl ibmcloud create-service-id --credentials-requests-dir=<path_to_credential_requests_directory> \\ 1 --name=<cluster_name> \\ 2 --output-dir=<installation_directory> \\ 3 --resource-group-name=<resource_group_name> 4",
"grep resourceGroup <installation_directory>/manifests/cluster-infrastructure-02-config.yml",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_ibm_power_virtual_server/installing-ibm-powervs-vpc |
Chapter 9. TokenRequest [authentication.k8s.io/v1] | Chapter 9. TokenRequest [authentication.k8s.io/v1] Description TokenRequest requests a token for a given service account. Type object Required spec 9.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object TokenRequestSpec contains client provided parameters of a token request. status object TokenRequestStatus is the result of a token request. 9.1.1. .spec Description TokenRequestSpec contains client provided parameters of a token request. Type object Required audiences Property Type Description audiences array (string) Audiences are the intended audiences of the token. A recipient of a token must identify themselves with an identifier in the list of audiences of the token, and otherwise should reject the token. A token issued for multiple audiences may be used to authenticate against any of the audiences listed but implies a high degree of trust between the target audiences. boundObjectRef object BoundObjectReference is a reference to an object that a token is bound to. expirationSeconds integer ExpirationSeconds is the requested duration of validity of the request. The token issuer may return a token with a different validity duration so a client needs to check the 'expiration' field in a response. 9.1.2. .spec.boundObjectRef Description BoundObjectReference is a reference to an object that a token is bound to. Type object Property Type Description apiVersion string API version of the referent. kind string Kind of the referent. Valid kinds are 'Pod' and 'Secret'. name string Name of the referent. uid string UID of the referent. 9.1.3. .status Description TokenRequestStatus is the result of a token request. Type object Required token expirationTimestamp Property Type Description expirationTimestamp Time ExpirationTimestamp is the time of expiration of the returned token. token string Token is the opaque bearer token. 9.2. API endpoints The following API endpoints are available: /api/v1/namespaces/{namespace}/serviceaccounts/{name}/token POST : create token of a ServiceAccount 9.2.1. /api/v1/namespaces/{namespace}/serviceaccounts/{name}/token Table 9.1. Global path parameters Parameter Type Description name string name of the TokenRequest Table 9.2. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields.
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. HTTP method POST Description create token of a ServiceAccount Table 9.3. Body parameters Parameter Type Description body TokenRequest schema Table 9.4. HTTP responses HTTP code Response body 200 - OK TokenRequest schema 201 - Created TokenRequest schema 202 - Accepted TokenRequest schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/authorization_apis/tokenrequest-authentication-k8s-io-v1
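For illustration, a token can be requested directly against the endpoint above. The API server address, namespace, service account name, audience, and the bearer token used for authentication are all placeholders:

```sh
curl -k -X POST \
  "https://<api-server>:6443/api/v1/namespaces/default/serviceaccounts/builder/token" \
  -H "Authorization: Bearer <admin-token>" \
  -H "Content-Type: application/json" \
  -d '{
        "apiVersion": "authentication.k8s.io/v1",
        "kind": "TokenRequest",
        "spec": {
          "audiences": ["https://kubernetes.default.svc"],
          "expirationSeconds": 3600
        }
      }'
```

The response echoes the TokenRequest object with status.token and status.expirationTimestamp populated. Recent oc clients wrap the same operation in the oc create token command.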
Chapter 11. Using report templates to monitor hosts | Chapter 11. Using report templates to monitor hosts You can use report templates to query Satellite data to obtain information about, for example, host status, registered hosts, applicable errata, applied errata, and user activity. You can use the report templates that ship with Satellite or write your own custom report templates to suit your requirements. The reporting engine uses the embedded Ruby (ERB) syntax. For more information about writing templates and ERB syntax, see Appendix B, Template writing reference . You can create a template, or clone a template and edit the clone. For help with the template syntax, click a template and click the Help tab. 11.1. Generating host monitoring reports To view the report templates in the Satellite web UI, navigate to Monitor > Reports > Report Templates . To schedule reports, configure a cron job or use the Satellite web UI. Procedure In the Satellite web UI, navigate to Monitor > Reports > Report Templates . For example, the Host - Installed Products template generates a report with installed product information along with other metrics, including system purpose attributes. To the right of the report template that you want to use, click Generate . Optional: To schedule a report, to the right of the Generate at field, click the icon to select the date and time you want to generate the report at. Optional: To send a report to an e-mail address, select the Send report via e-mail checkbox, and in the Deliver to e-mail addresses field, enter the required e-mail address. Optional: Apply search query filters. To view all available results, do not populate the filter field with any values. Click Submit . A CSV file that contains the report is downloaded. If you have selected the Send report via e-mail checkbox, the host monitoring report is sent to your e-mail address. CLI procedure List all available report templates: Generate a report: This command waits until the report fully generates before completing. If you want to generate the report as a background task, you can use the hammer report-template schedule command. Note If you want to generate the Subscription - General report, you have to use the Days from Now option to specify the latest expiration time of general subscriptions. Show all subscriptions Show all subscriptions that are going to expire within 60 days 11.2. Creating a report template In Satellite, you can create a report template and customize the template to suit your requirements. You can import existing report templates and further customize them with snippets and template macros. Report templates use Embedded Ruby (ERB) syntax. To view information about working with ERB syntax and macros, in the Satellite web UI, navigate to Monitor > Reports > Report Templates , and click Create Template , and then click the Help tab. When you create a report template in Satellite, safe mode is enabled by default. Procedure In the Satellite web UI, navigate to Monitor > Reports > Report Templates . Click Create Template . In the Name field, enter a unique name for your report template. If you want the template to be available to all locations and organizations, select Default . Create the template directly in the template editor or import a template from a text file by clicking Import . For more information about importing templates, see Section 11.5, "Importing report templates" . Optional: In the Audit Comment field, you can add any useful information about this template. 
Click the Input tab, and in the Name field, enter a name for the input that you can reference in the template in the following format: input('name') . Note that you must save the template before you can reference this input value in the template body. Select whether the input value is mandatory. If the input value is mandatory, select the Required checkbox. From the Value Type list, select the type of input value that the user must input. Optional: If you want to use facts for template input, select the Advanced checkbox. Optional: In the Options field, define the options that the user can select from. If this field remains undefined, the users receive a free-text field in which they can enter the value they want. Optional: In the Default field, enter a value, for example, a host name, that you want to set as the default template input. Optional: In the Description field, you can enter information that you want to display as inline help about the input when you generate the report. Optional: Click the Type tab, and select whether this template is a snippet to be included in other templates. Click the Location tab and add the locations where you want to use the template. Click the Organizations tab and add the organizations where you want to use the template. Click Submit to save your changes. Additional resources For more information about safe mode, see Section 11.8, "Report template safe mode" . For more information about writing templates, see Appendix B, Template writing reference . For more information about macros you can use in report templates, see Section B.6, "Template macros" . 11.3. Exporting report templates You can export report templates that you create in Satellite. Procedure In the Satellite web UI, navigate to Monitor > Reports > Report Templates . Locate the template that you want to export, and from the list in the Actions column, select Export . Repeat this action for every report template that you want to download. An .erb file that contains the template downloads. CLI procedure To view the report templates available for export, enter the following command: Note the template ID of the template that you want to export in the output of this command. To export a report template, enter the following command: 11.4. Exporting report templates using the Satellite API You can use the Satellite report_templates API to export report templates from Satellite. For more information about using the Satellite API, see Using the Satellite REST API . Procedure Use the following request to retrieve a list of available report templates: Example request: In this example, the json_reformat tool is used to format the JSON output. Example response: Note the id of the template that you want to export, and use the following request to export the template: Example request: Note that 158 is an example ID of the template to export. In this example, the exported template is redirected to host_complete_list.erb . 11.5. Importing report templates You can import a report template into the body of a new template that you want to create. Note that using the Satellite web UI, you can only import templates individually. For bulk actions, use the Satellite API. For more information, see Section 11.6, "Importing report templates using the Satellite API" . Prerequisites You must have exported templates from Satellite to import them to use in new templates. For more information see Section 11.3, "Exporting report templates" . 
Procedure In the Satellite web UI, navigate to Monitor > Reports > Report Templates . In the upper right of the Report Templates window, click Create Template . On the upper right of the Editor tab, click the folder icon, and select the .erb file that you want to import. Edit the template to suit your requirements. Click Submit . For more information about customizing your new template, see Appendix B, Template writing reference . 11.6. Importing report templates using the Satellite API You can use the Satellite API to import report templates into Satellite. Importing report templates using the Satellite API automatically parses the report template metadata and assigns organizations and locations. For more information about using the Satellite API, see the Using the Satellite REST API . Prerequisites Create a template using .erb syntax or export a template from another Satellite. For more information about writing templates, see Appendix B, Template writing reference . For more information about exporting templates from Satellite, see Section 11.4, "Exporting report templates using the Satellite API" . Procedure Use the following example to format the template that you want to import to a .json file: Example JSON file with ERB template: Use the following request to import the template: Use the following request to retrieve a list of report templates and validate that you can view the template in Satellite: 11.7. Generating a list of installed packages Use this procedure to generate a list of installed packages in Report Templates . Procedure In the Satellite web UI, navigate to Monitor > Reports > Report Templates . To the right of Host - All Installed Packages , click Generate . Optional: Use the Hosts filter search field to search for and apply specific host filters. Click Generate . If the download does not start automatically, click Download . Verification You have the spreadsheet listing the installed packages for the selected hosts downloaded on your machine. 11.8. Report template safe mode When you create report templates in Satellite, safe mode is enabled by default. Safe mode limits the macros and variables that you can use in the report template. Safe mode prevents rendering problems and enforces best practices in report templates. The list of supported macros and variables is available in the Satellite web UI. To view the macros and variables that are available, in the Satellite web UI, navigate to Monitor > Reports > Report Templates and click Create Template . In the Create Template window, click the Help tab and expand Safe mode methods . While safe mode is enabled, if you try to use a macro or variable that is not listed in Safe mode methods , the template editor displays an error message. To view the status of safe mode in Satellite, in the Satellite web UI, navigate to Administer > Settings and click the Provisioning tab. Locate the Safemode rendering row to check the value. | [
"hammer report-template list",
"hammer report-template generate --id My_Template_ID",
"hammer report-template generate --inputs \"Days from Now=no limit\" --name \"Subscription - General Report\"",
"hammer report-template generate --inputs \"Days from Now=60\" --name \"Subscription - General Report\"",
"hammer report-template list",
"hammer report-template dump --id My_Template_ID > example_export .erb",
"curl --insecure --user My_User_Name : My_Password --request GET --config https:// satellite.example.com /api/report_templates | json_reformat",
"{ \"total\": 6, \"subtotal\": 6, \"page\": 1, \"per_page\": 20, \"search\": null, \"sort\": { \"by\": null, \"order\": null }, \"results\": [ { \"created_at\": \"2019-11-20 17:49:52 UTC\", \"updated_at\": \"2019-11-20 17:49:52 UTC\", \"name\": \"Applicable errata\", \"id\": 112 }, { \"created_at\": \"2019-11-20 17:49:52 UTC\", \"updated_at\": \"2019-11-20 17:49:52 UTC\", \"name\": \"Applied Errata\", \"id\": 113 }, { \"created_at\": \"2019-11-30 16:15:24 UTC\", \"updated_at\": \"2019-11-30 16:15:24 UTC\", \"name\": \"Hosts - complete list\", \"id\": 158 }, { \"created_at\": \"2019-11-20 17:49:52 UTC\", \"updated_at\": \"2019-11-20 17:49:52 UTC\", \"name\": \"Host statuses\", \"id\": 114 }, { \"created_at\": \"2019-11-20 17:49:52 UTC\", \"updated_at\": \"2019-11-20 17:49:52 UTC\", \"name\": \"Registered hosts\", \"id\": 115 }, { \"created_at\": \"2019-11-20 17:49:52 UTC\", \"updated_at\": \"2019-11-20 17:49:52 UTC\", \"name\": \"Subscriptions\", \"id\": 116 } ] }",
"curl --insecure --output /tmp/_Example_Export_Template .erb_ --user admin:password --request GET --config https:// satellite.example.com /api/report_templates/ My_Template_ID /export",
"cat Example_Template .json { \"name\": \" Example Template Name \", \"template\": \" Enter ERB Code Here \" }",
"{ \"name\": \"Hosts - complete list\", \"template\": \" <%# name: Hosts - complete list snippet: false template_inputs: - name: host required: false input_type: user advanced: false value_type: plain resource_type: Katello::ActivationKey model: ReportTemplate -%> <% load_hosts(search: input('host')).each_record do |host| -%> <% report_row( 'Server FQDN': host.name ) -%> <% end -%> <%= report_render %> \" }",
"curl --insecure --user My_User_Name : My_Password --data @ Example_Template .json --header \"Content-Type:application/json\" --request POST --config https:// satellite.example.com /api/report_templates/import",
"curl --insecure --user My_User_Name : My_Password --request GET --config https:// satellite.example.com /api/report_templates | json_reformat"
]
| https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/managing_hosts/Using_Report_Templates_to_Monitor_Hosts_managing-hosts |
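The chapter above mentions scheduling reports with a cron job; the following crontab line is a sketch of that approach, reusing the hammer report-template generate command shown earlier. The template ID of 115 (the Registered hosts template in the example listing), the Monday 07:00 schedule, and the output path are assumptions to adapt to your environment:

# Hypothetical crontab entry: generate the registered hosts report every Monday at 07:00
0 7 * * 1 hammer report-template generate --id 115 > /var/tmp/registered-hosts-$(date +\%F).csv

The percent sign is escaped because cron treats an unescaped % as a newline; adjust the ID, schedule, and destination path as needed.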
Installing on IBM Z and IBM LinuxONE | Installing on IBM Z and IBM LinuxONE OpenShift Container Platform 4.18 Installing OpenShift Container Platform on IBM Z and IBM LinuxONE Red Hat OpenShift Documentation Team | [
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s",
"tar -xvf openshift-install-linux.tar.gz",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1",
"api.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>",
"api-int.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>",
"random.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>",
"console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>",
"bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.5",
"5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.96",
"96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - name: worker platform: {} replicas: 0",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full",
"./openshift-install create manifests --dir <installation_directory> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"variant: openshift version: 4.18.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root 2 label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 3",
"coreos-installer pxe customize /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img --dest-device /dev/disk/by-id/scsi-<serial_number> --dest-karg-append ip=<ip_address>::<gateway_ip>:<subnet_mask>::<network_device>:none --dest-karg-append nameserver=<nameserver_ip> --dest-karg-append rd.neednet=1 -o /root/rhcos-bootfiles/<node_name>-initramfs.s390x.img",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/<block_device> \\ 1 ignition.firstboot ignition.platform.id=metal coreos.inst.ignition_url=http://<http_server>/master.ign \\ 2 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 3 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000 \\ 4 zfcp.allow_lun_scan=0",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/<block_device> \\ 1 coreos.inst.ignition_url=http://<http_server>/bootstrap.ign \\ 2 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 3 coreos.inst.secure_ipl \\ 4 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 rd.dasd=0.0.3490 zfcp.allow_lun_scan=0",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/disk/by-id/scsi-<serial_number> coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.ignition_url=http://<http_server>/worker.ign ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000 zfcp.allow_lun_scan=0",
"ipl c",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp",
"bond=bond0:em1,em2:mode=active-backup,fail_over_mac=1 ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0",
"team=team0:em1,em2 ip=team0:dhcp",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.31.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.18 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"oc debug node/<node_name> chroot /host",
"cat /sys/firmware/ipl/secure",
"1 1",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 additionalTrustBundle: | 17 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 18 - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - name: worker platform: {} replicas: 0",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full",
"./openshift-install create manifests --dir <installation_directory> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"variant: openshift version: 4.18.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root 2 label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 3",
"coreos-installer pxe customize /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img --dest-device /dev/disk/by-id/scsi-<serial_number> --dest-karg-append ip=<ip_address>::<gateway_ip>:<subnet_mask>::<network_device>:none --dest-karg-append nameserver=<nameserver_ip> --dest-karg-append rd.neednet=1 -o /root/rhcos-bootfiles/<node_name>-initramfs.s390x.img",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/<block_device> \\ 1 ignition.firstboot ignition.platform.id=metal coreos.inst.ignition_url=http://<http_server>/master.ign \\ 2 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 3 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000 \\ 4 zfcp.allow_lun_scan=0",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/<block_device> \\ 1 coreos.inst.ignition_url=http://<http_server>/bootstrap.ign \\ 2 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 3 coreos.inst.secure_ipl \\ 4 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 rd.dasd=0.0.3490 zfcp.allow_lun_scan=0",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/disk/by-id/scsi-<serial_number> coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.ignition_url=http://<http_server>/worker.ign ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000 zfcp.allow_lun_scan=0",
"ipl c",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp",
"bond=bond0:em1,em2:mode=active-backup,fail_over_mac=1 ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0",
"team=team0:em1,em2 ip=team0:dhcp",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.31.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.18 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"oc debug node/<node_name> chroot /host",
"cat /sys/firmware/ipl/secure",
"1 1",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - name: worker platform: {} replicas: 0",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full",
"./openshift-install create manifests --dir <installation_directory> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"prot_virt: Reserving <amount>MB as ultravisor base storage.",
"cat /sys/firmware/uv/prot_virt_host",
"1",
"{ \"ignition\": { \"version\": \"3.0.0\" }, \"storage\": { \"files\": [ { \"path\": \"/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt\", \"contents\": { \"source\": \"data:;base64,<base64 encoded hostkey document>\" }, \"mode\": 420 }, { \"path\": \"/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt\", \"contents\": { \"source\": \"data:;base64,<base64 encoded hostkey document>\" }, \"mode\": 420 } ] } } ```",
"base64 <your-hostkey>.crt",
"gpg --recipient-file /path/to/ignition.gpg.pub --yes --output /path/to/config.ign.gpg --verbose --armor --encrypt /path/to/config.ign",
"[ 2.801433] systemd[1]: Starting coreos-ignition-setup-user.service - CoreOS Ignition User Config Setup [ 2.803959] coreos-secex-ignition-decrypt[731]: gpg: key <key_name>: public key \"Secure Execution (secex) 38.20230323.dev.0\" imported [ 2.808874] coreos-secex-ignition-decrypt[740]: gpg: encrypted with rsa4096 key, ID <key_name>, created <yyyy-mm-dd> [ OK ] Finished coreos-secex-igni...S Secex Ignition Config Decryptor.",
"Starting coreos-ignition-s...reOS Ignition User Config Setup [ 2.863675] coreos-secex-ignition-decrypt[729]: gpg: key <key_name>: public key \"Secure Execution (secex) 38.20230323.dev.0\" imported [ 2.869178] coreos-secex-ignition-decrypt[738]: gpg: encrypted with RSA key, ID <key_name> [ 2.870347] coreos-secex-ignition-decrypt[738]: gpg: public key decryption failed: No secret key [ 2.870371] coreos-secex-ignition-decrypt[738]: gpg: decryption failed: No secret key",
"variant: openshift version: 4.18.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 2",
"coreos-installer pxe customize /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img --dest-device /dev/disk/by-id/scsi-<serial_number> --dest-karg-append ip=<ip_address>::<gateway_ip>:<subnet_mask>::<network_device>:none --dest-karg-append nameserver=<nameserver_ip> --dest-karg-append rd.neednet=1 -o /root/rhcos-bootfiles/<node_name>-initramfs.s390x.img",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 ignition.firstboot ignition.platform.id=metal coreos.inst.ignition_url=http://<http_server>/master.ign \\ 1 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 2 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000 zfcp.allow_lun_scan=0",
"qemu-img create -f qcow2 -F qcow2 -b /var/lib/libvirt/images/{source_rhcos_qemu} /var/lib/libvirt/images/{vmname}.qcow2 {size}",
"virt-install --noautoconsole --connect qemu:///system --name <vm_name> --memory <memory_mb> --vcpus <vcpus> --disk <disk> --launchSecurity type=\"s390-pv\" \\ 1 --import --network network=<virt_network_parm>,mac=<mac_address> --disk path=<ign_file>,format=raw,readonly=on,serial=ignition,startup_policy=optional 2",
"virt-install --connect qemu:///system --name <vm_name> --memory <memory_mb> --vcpus <vcpus> --location <media_location>,kernel=<rhcos_kernel>,initrd=<rhcos_initrd> \\ / 1 --disk <vm_name>.qcow2,size=<image_size>,cache=none,io=native --network network=<virt_network_parm> --boot hd --extra-args \"rd.neednet=1\" --extra-args \"coreos.inst.install_dev=/dev/<block_device>\" --extra-args \"coreos.inst.ignition_url=http://<http_server>/bootstrap.ign\" \\ 2 --extra-args \"coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img\" \\ 3 --extra-args \"ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns>\" --noautoconsole --wait",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.31.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.18 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 additionalTrustBundle: | 17 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 18 - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - name: worker platform: {} replicas: 0",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full",
"./openshift-install create manifests --dir <installation_directory> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"prot_virt: Reserving <amount>MB as ultravisor base storage.",
"cat /sys/firmware/uv/prot_virt_host",
"1",
"{ \"ignition\": { \"version\": \"3.0.0\" }, \"storage\": { \"files\": [ { \"path\": \"/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt\", \"contents\": { \"source\": \"data:;base64,<base64 encoded hostkey document>\" }, \"mode\": 420 }, { \"path\": \"/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt\", \"contents\": { \"source\": \"data:;base64,<base64 encoded hostkey document>\" }, \"mode\": 420 } ] } } ```",
"base64 <your-hostkey>.crt",
"gpg --recipient-file /path/to/ignition.gpg.pub --yes --output /path/to/config.ign.gpg --verbose --armor --encrypt /path/to/config.ign",
"[ 2.801433] systemd[1]: Starting coreos-ignition-setup-user.service - CoreOS Ignition User Config Setup [ 2.803959] coreos-secex-ignition-decrypt[731]: gpg: key <key_name>: public key \"Secure Execution (secex) 38.20230323.dev.0\" imported [ 2.808874] coreos-secex-ignition-decrypt[740]: gpg: encrypted with rsa4096 key, ID <key_name>, created <yyyy-mm-dd> [ OK ] Finished coreos-secex-igni...S Secex Ignition Config Decryptor.",
"Starting coreos-ignition-s...reOS Ignition User Config Setup [ 2.863675] coreos-secex-ignition-decrypt[729]: gpg: key <key_name>: public key \"Secure Execution (secex) 38.20230323.dev.0\" imported [ 2.869178] coreos-secex-ignition-decrypt[738]: gpg: encrypted with RSA key, ID <key_name> [ 2.870347] coreos-secex-ignition-decrypt[738]: gpg: public key decryption failed: No secret key [ 2.870371] coreos-secex-ignition-decrypt[738]: gpg: decryption failed: No secret key",
"variant: openshift version: 4.18.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 2",
"coreos-installer pxe customize /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img --dest-device /dev/disk/by-id/scsi-<serial_number> --dest-karg-append ip=<ip_address>::<gateway_ip>:<subnet_mask>::<network_device>:none --dest-karg-append nameserver=<nameserver_ip> --dest-karg-append rd.neednet=1 -o /root/rhcos-bootfiles/<node_name>-initramfs.s390x.img",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 ignition.firstboot ignition.platform.id=metal coreos.inst.ignition_url=http://<http_server>/master.ign \\ 1 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 2 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000 zfcp.allow_lun_scan=0",
"qemu-img create -f qcow2 -F qcow2 -b /var/lib/libvirt/images/{source_rhcos_qemu} /var/lib/libvirt/images/{vmname}.qcow2 {size}",
"virt-install --noautoconsole --connect qemu:///system --name <vm_name> --memory <memory_mb> --vcpus <vcpus> --disk <disk> --launchSecurity type=\"s390-pv\" \\ 1 --import --network network=<virt_network_parm>,mac=<mac_address> --disk path=<ign_file>,format=raw,readonly=on,serial=ignition,startup_policy=optional 2",
"virt-install --connect qemu:///system --name <vm_name> --memory <memory_mb> --vcpus <vcpus> --location <media_location>,kernel=<rhcos_kernel>,initrd=<rhcos_initrd> \\ / 1 --disk <vm_name>.qcow2,size=<image_size>,cache=none,io=native --network network=<virt_network_parm> --boot hd --extra-args \"rd.neednet=1\" --extra-args \"coreos.inst.install_dev=/dev/<block_device>\" --extra-args \"coreos.inst.ignition_url=http://<http_server>/bootstrap.ign\" \\ 2 --extra-args \"coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img\" \\ 3 --extra-args \"ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns>\" --noautoconsole --wait",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.31.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.18 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - name: worker platform: {} replicas: 0",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full",
"./openshift-install create manifests --dir <installation_directory> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"variant: openshift version: 4.18.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root 2 label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 3",
"coreos-installer pxe customize /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img --dest-device /dev/disk/by-id/scsi-<serial_number> --dest-karg-append ip=<ip_address>::<gateway_ip>:<subnet_mask>::<network_device>:none --dest-karg-append nameserver=<nameserver_ip> --dest-karg-append rd.neednet=1 -o /root/rhcos-bootfiles/<node_name>-initramfs.s390x.img",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/<block_device> \\ 1 ignition.firstboot ignition.platform.id=metal coreos.inst.ignition_url=http://<http_server>/master.ign \\ 2 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 3 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000 \\ 4 zfcp.allow_lun_scan=0",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/<block_device> \\ 1 coreos.inst.ignition_url=http://<http_server>/bootstrap.ign \\ 2 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 3 coreos.inst.secure_ipl \\ 4 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 rd.dasd=0.0.3490 zfcp.allow_lun_scan=0",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/disk/by-id/scsi-<serial_number> coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.ignition_url=http://<http_server>/worker.ign ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000 zfcp.allow_lun_scan=0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp",
"bond=bond0:em1,em2:mode=active-backup,fail_over_mac=1 ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0",
"team=team0:em1,em2 ip=team0:dhcp",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.31.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.18 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"oc debug node/<node_name> chroot /host",
"cat /sys/firmware/ipl/secure",
"1 1",
"lsreipl",
"Re-IPL type: fcp WWPN: 0x500507630400d1e3 LUN: 0x4001400e00000000 Device: 0.0.810e bootprog: 0 br_lba: 0 Loadparm: \"\" Bootparms: \"\" clear: 0",
"for DASD output: Re-IPL type: ccw Device: 0.0.525d Loadparm: \"\" clear: 0",
"sudo shutdown -h",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 additionalTrustBundle: | 17 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 18 - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - name: worker platform: {} replicas: 0",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full",
"./openshift-install create manifests --dir <installation_directory> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"variant: openshift version: 4.18.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root 2 label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 3",
"coreos-installer pxe customize /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img --dest-device /dev/disk/by-id/scsi-<serial_number> --dest-karg-append ip=<ip_address>::<gateway_ip>:<subnet_mask>::<network_device>:none --dest-karg-append nameserver=<nameserver_ip> --dest-karg-append rd.neednet=1 -o /root/rhcos-bootfiles/<node_name>-initramfs.s390x.img",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/<block_device> \\ 1 ignition.firstboot ignition.platform.id=metal coreos.inst.ignition_url=http://<http_server>/master.ign \\ 2 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 3 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000 \\ 4 zfcp.allow_lun_scan=0",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/<block_device> \\ 1 coreos.inst.ignition_url=http://<http_server>/bootstrap.ign \\ 2 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 3 coreos.inst.secure_ipl \\ 4 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 rd.dasd=0.0.3490 zfcp.allow_lun_scan=0",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/disk/by-id/scsi-<serial_number> coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.ignition_url=http://<http_server>/worker.ign ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000 zfcp.allow_lun_scan=0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp",
"bond=bond0:em1,em2:mode=active-backup,fail_over_mac=1 ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0",
"team=team0:em1,em2 ip=team0:dhcp",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.31.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.18 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"oc debug node/<node_name> chroot /host",
"cat /sys/firmware/ipl/secure",
"1 1",
"lsreipl",
"Re-IPL type: fcp WWPN: 0x500507630400d1e3 LUN: 0x4001400e00000000 Device: 0.0.810e bootprog: 0 br_lba: 0 Loadparm: \"\" Bootparms: \"\" clear: 0",
"for DASD output: Re-IPL type: ccw Device: 0.0.525d Loadparm: \"\" clear: 0",
"sudo shutdown -h",
"apiVersion:",
"baseDomain:",
"metadata:",
"metadata: name:",
"platform:",
"pullSecret:",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112",
"networking:",
"networking: networkType:",
"networking: clusterNetwork:",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: clusterNetwork: cidr:",
"networking: clusterNetwork: hostPrefix:",
"networking: serviceNetwork:",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork:",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"networking: machineNetwork: cidr:",
"networking: ovnKubernetesConfig: ipv4: internalJoinSubnet:",
"additionalTrustBundle:",
"capabilities:",
"capabilities: baselineCapabilitySet:",
"capabilities: additionalEnabledCapabilities:",
"cpuPartitioningMode:",
"compute:",
"compute: architecture:",
"compute: hyperthreading:",
"compute: name:",
"compute: platform:",
"compute: replicas:",
"featureSet:",
"controlPlane:",
"controlPlane: architecture:",
"controlPlane: hyperthreading:",
"controlPlane: name:",
"controlPlane: platform:",
"controlPlane: replicas:",
"credentialsMode:",
"fips:",
"imageContentSources:",
"imageContentSources: source:",
"imageContentSources: mirrors:",
"publish:",
"sshKey:",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker0 spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker0]} nodeSelector: matchLabels: node-role.kubernetes.io/worker0: \"\"",
"ACTION==\"add\", SUBSYSTEM==\"ccw\", KERNEL==\"0.0.8000\", DRIVER==\"zfcp\", GOTO=\"cfg_zfcp_host_0.0.8000\" ACTION==\"add\", SUBSYSTEM==\"drivers\", KERNEL==\"zfcp\", TEST==\"[ccw/0.0.8000]\", GOTO=\"cfg_zfcp_host_0.0.8000\" GOTO=\"end_zfcp_host_0.0.8000\" LABEL=\"cfg_zfcp_host_0.0.8000\" ATTR{[ccw/0.0.8000]online}=\"1\" LABEL=\"end_zfcp_host_0.0.8000\"",
"base64 /path/to/file/",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker0 1 name: 99-worker0-devices spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;base64,<encoded_base64_string> 2 filesystem: root mode: 420 path: /etc/udev/rules.d/41-zfcp-host-0.0.8000.rules 3",
"ACTION==\"add\", SUBSYSTEMS==\"ccw\", KERNELS==\"0.0.8000\", GOTO=\"start_zfcp_lun_0.0.8207\" GOTO=\"end_zfcp_lun_0.0.8000\" LABEL=\"start_zfcp_lun_0.0.8000\" SUBSYSTEM==\"fc_remote_ports\", ATTR{port_name}==\"0x500507680d760026\", GOTO=\"cfg_fc_0.0.8000_0x500507680d760026\" GOTO=\"end_zfcp_lun_0.0.8000\" LABEL=\"cfg_fc_0.0.8000_0x500507680d760026\" ATTR{[ccw/0.0.8000]0x500507680d760026/unit_add}=\"0x00bc000000000000\" GOTO=\"end_zfcp_lun_0.0.8000\" LABEL=\"end_zfcp_lun_0.0.8000\"",
"base64 /path/to/file/",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker0 1 name: 99-worker0-devices spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;base64,<encoded_base64_string> 2 filesystem: root mode: 420 path: /etc/udev/rules.d/41-zfcp-lun-0.0.8000:0x500507680d760026:0x00bc000000000000.rules 3",
"ACTION==\"add\", SUBSYSTEM==\"ccw\", KERNEL==\"0.0.4444\", DRIVER==\"dasd-eckd\", GOTO=\"cfg_dasd_eckd_0.0.4444\" ACTION==\"add\", SUBSYSTEM==\"drivers\", KERNEL==\"dasd-eckd\", TEST==\"[ccw/0.0.4444]\", GOTO=\"cfg_dasd_eckd_0.0.4444\" GOTO=\"end_dasd_eckd_0.0.4444\" LABEL=\"cfg_dasd_eckd_0.0.4444\" ATTR{[ccw/0.0.4444]online}=\"1\" LABEL=\"end_dasd_eckd_0.0.4444\"",
"base64 /path/to/file/",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker0 1 name: 99-worker0-devices spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;base64,<encoded_base64_string> 2 filesystem: root mode: 420 path: /etc/udev/rules.d/41-dasd-eckd-0.0.4444.rules 3",
"ACTION==\"add\", SUBSYSTEM==\"drivers\", KERNEL==\"qeth\", GOTO=\"group_qeth_0.0.1000\" ACTION==\"add\", SUBSYSTEM==\"ccw\", KERNEL==\"0.0.1000\", DRIVER==\"qeth\", GOTO=\"group_qeth_0.0.1000\" ACTION==\"add\", SUBSYSTEM==\"ccw\", KERNEL==\"0.0.1001\", DRIVER==\"qeth\", GOTO=\"group_qeth_0.0.1000\" ACTION==\"add\", SUBSYSTEM==\"ccw\", KERNEL==\"0.0.1002\", DRIVER==\"qeth\", GOTO=\"group_qeth_0.0.1000\" ACTION==\"add\", SUBSYSTEM==\"ccwgroup\", KERNEL==\"0.0.1000\", DRIVER==\"qeth\", GOTO=\"cfg_qeth_0.0.1000\" GOTO=\"end_qeth_0.0.1000\" LABEL=\"group_qeth_0.0.1000\" TEST==\"[ccwgroup/0.0.1000]\", GOTO=\"end_qeth_0.0.1000\" TEST!=\"[ccw/0.0.1000]\", GOTO=\"end_qeth_0.0.1000\" TEST!=\"[ccw/0.0.1001]\", GOTO=\"end_qeth_0.0.1000\" TEST!=\"[ccw/0.0.1002]\", GOTO=\"end_qeth_0.0.1000\" ATTR{[drivers/ccwgroup:qeth]group}=\"0.0.1000,0.0.1001,0.0.1002\" GOTO=\"end_qeth_0.0.1000\" LABEL=\"cfg_qeth_0.0.1000\" ATTR{[ccwgroup/0.0.1000]online}=\"1\" LABEL=\"end_qeth_0.0.1000\"",
"base64 /path/to/file/",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker0 1 name: 99-worker0-devices spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;base64,<encoded_base64_string> 2 filesystem: root mode: 420 path: /etc/udev/rules.d/41-dasd-eckd-0.0.4444.rules 3",
"ssh <user>@<node_ip_address>",
"oc debug node/<node_name>",
"sudo chzdev -e <device>",
"ssh <user>@<node_ip_address>",
"oc debug node/<node_name>",
"sudo /sbin/mpathconf --enable",
"sudo multipath",
"sudo fdisk /dev/mapper/mpatha",
"sudo multipath -ll",
"mpatha (20017380030290197) dm-1 IBM,2810XIV size=512G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw -+- policy='service-time 0' prio=50 status=enabled |- 1:0:0:6 sde 68:16 active ready running |- 1:0:1:6 sdf 69:24 active ready running |- 0:0:0:6 sdg 8:80 active ready running `- 0:0:1:6 sdh 66:48 active ready running"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html-single/installing_on_ibm_z_and_ibm_linuxone/index |
Chapter 3. Setting up the environment for an OpenShift installation | Chapter 3. Setting up the environment for an OpenShift installation 3.1. Installing RHEL on the provisioner node With the configuration of the prerequisites complete, the next step is to install RHEL 9.x on the provisioner node. The installer uses the provisioner node as the orchestrator while installing the OpenShift Container Platform cluster. For the purposes of this document, installing RHEL on the provisioner node is out of scope. However, options include but are not limited to using a RHEL Satellite server, PXE, or installation media. 3.2. Preparing the provisioner node for OpenShift Container Platform installation Perform the following steps to prepare the environment. Procedure Log in to the provisioner node via ssh . Create a non-root user ( kni ) and provide that user with sudo privileges: # useradd kni # passwd kni # echo "kni ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/kni # chmod 0440 /etc/sudoers.d/kni Create an ssh key for the new user: # su - kni -c "ssh-keygen -t ed25519 -f /home/kni/.ssh/id_rsa -N ''" Log in as the new user on the provisioner node: # su - kni Use Red Hat Subscription Manager to register the provisioner node: USD sudo subscription-manager register --username=<user> --password=<pass> --auto-attach USD sudo subscription-manager repos --enable=rhel-9-for-<architecture>-appstream-rpms --enable=rhel-9-for-<architecture>-baseos-rpms Note For more information about Red Hat Subscription Manager, see Using and Configuring Red Hat Subscription Manager . Install the following packages: USD sudo dnf install -y libvirt qemu-kvm mkisofs python3-devel jq ipmitool Modify the user to add the libvirt group to the newly created user: USD sudo usermod --append --groups libvirt <user> Start firewalld and enable the http service: USD sudo systemctl start firewalld USD sudo firewall-cmd --zone=public --add-service=http --permanent USD sudo firewall-cmd --reload Start and enable the libvirtd service: USD sudo systemctl enable libvirtd --now Create the default storage pool and start it: USD sudo virsh pool-define-as --name default --type dir --target /var/lib/libvirt/images USD sudo virsh pool-start default USD sudo virsh pool-autostart default Create a pull-secret.txt file: USD vim pull-secret.txt In a web browser, navigate to Install OpenShift on Bare Metal with installer-provisioned infrastructure . Click Copy pull secret . Paste the contents into the pull-secret.txt file and save the contents in the kni user's home directory. 3.3. Checking NTP server synchronization The OpenShift Container Platform installation program installs the chrony Network Time Protocol (NTP) service on the cluster nodes. To complete installation, each node must have access to an NTP time server. You can verify NTP server synchronization by using the chrony service. For disconnected clusters, you must configure the NTP servers on the control plane nodes. For more information see the Additional resources section. Prerequisites You installed the chrony package on the target node. Procedure Log in to the node by using the ssh command.
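For example — a minimal illustration rather than part of the documented procedure — log in with a user that has SSH access to the node, such as the core user on an RHCOS node or your own account on a RHEL host, where <node_ip_address> is a placeholder for the address of the node you want to check: USD ssh core@<node_ip_address>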
View the NTP servers available to the node by running the following command: USD chronyc sources Example output MS Name/IP address Stratum Poll Reach LastRx Last sample =============================================================================== ^+ time.cloudflare.com 3 10 377 187 -209us[ -209us] +/- 32ms ^+ t1.time.ir2.yahoo.com 2 10 377 185 -4382us[-4382us] +/- 23ms ^+ time.cloudflare.com 3 10 377 198 -996us[-1220us] +/- 33ms ^* brenbox.westnet.ie 1 10 377 193 -9538us[-9761us] +/- 24ms Use the ping command to ensure that the node can access an NTP server, for example: USD ping time.cloudflare.com Example output PING time.cloudflare.com (162.159.200.123) 56(84) bytes of data. 64 bytes from time.cloudflare.com (162.159.200.123): icmp_seq=1 ttl=54 time=32.3 ms 64 bytes from time.cloudflare.com (162.159.200.123): icmp_seq=2 ttl=54 time=30.9 ms 64 bytes from time.cloudflare.com (162.159.200.123): icmp_seq=3 ttl=54 time=36.7 ms ... Additional resources Optional: Configuring NTP for disconnected clusters Network Time Protocol (NTP) 3.4. Configuring networking Before installation, you must configure the networking on the provisioner node. Installer-provisioned clusters deploy with a bare-metal bridge and network, and an optional provisioning bridge and network. Note You can also configure networking from the web console. Procedure Export the bare-metal network NIC name by running the following command: USD export PUB_CONN=<baremetal_nic_name> Configure the bare-metal network: Note The SSH connection might disconnect after executing these steps. For a network using DHCP, run the following command: USD sudo nohup bash -c " nmcli con down \"USDPUB_CONN\" nmcli con delete \"USDPUB_CONN\" # RHEL 8.1 appends the word \"System\" in front of the connection, delete in case it exists nmcli con down \"System USDPUB_CONN\" nmcli con delete \"System USDPUB_CONN\" nmcli connection add ifname baremetal type bridge <con_name> baremetal bridge.stp no 1 nmcli con add type bridge-slave ifname \"USDPUB_CONN\" master baremetal pkill dhclient;dhclient baremetal " 1 Replace <con_name> with the connection name. For a network using static IP addressing and no DHCP network, run the following command: USD sudo nohup bash -c " nmcli con down \"USDPUB_CONN\" nmcli con delete \"USDPUB_CONN\" # RHEL 8.1 appends the word \"System\" in front of the connection, delete in case it exists nmcli con down \"System USDPUB_CONN\" nmcli con delete \"System USDPUB_CONN\" nmcli connection add ifname baremetal type bridge con-name baremetal bridge.stp no ipv4.method manual ipv4.addr "x.x.x.x/yy" ipv4.gateway "a.a.a.a" ipv4.dns "b.b.b.b" 1 nmcli con add type bridge-slave ifname \"USDPUB_CONN\" master baremetal nmcli con up baremetal " 1 Replace <con_name> with the connection name. Replace x.x.x.x/yy with the IP address and CIDR for the network. Replace a.a.a.a with the network gateway. Replace b.b.b.b with the IP address of the DNS server. 
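As an optional check that is not part of the documented procedure, you can confirm that the baremetal bridge is up and has received an address before continuing; the device name assumes the bridge created in the previous step: USD ip addr show baremetal If no IPv4 address appears on the bridge, review the bridge-slave connection and the DHCP or static settings above before proceeding.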
Optional: If you are deploying with a provisioning network, export the provisioning network NIC name by running the following command: USD export PROV_CONN=<prov_nic_name> Optional: If you are deploying with a provisioning network, configure the provisioning network by running the following command: USD sudo nohup bash -c " nmcli con down \"USDPROV_CONN\" nmcli con delete \"USDPROV_CONN\" nmcli connection add ifname provisioning type bridge con-name provisioning nmcli con add type bridge-slave ifname \"USDPROV_CONN\" master provisioning nmcli connection modify provisioning ipv6.addresses fd00:1101::1/64 ipv6.method manual nmcli con down provisioning nmcli con up provisioning " Note The SSH connection might disconnect after executing these steps. The IPv6 address can be any address that is not routable through the bare-metal network. Ensure that UEFI is enabled and UEFI PXE settings are set to the IPv6 protocol when using IPv6 addressing. Optional: If you are deploying with a provisioning network, configure the IPv4 address on the provisioning network connection by running the following command: USD nmcli connection modify provisioning ipv4.addresses 172.22.0.254/24 ipv4.method manual SSH back into the provisioner node (if required) by running the following command: # ssh kni@provisioner.<cluster-name>.<domain> Verify that the connection bridges have been properly created by running the following command: USD sudo nmcli con show Example output NAME UUID TYPE DEVICE baremetal 4d5133a5-8351-4bb9-bfd4-3af264801530 bridge baremetal provisioning 43942805-017f-4d7d-a2c2-7cb3324482ed bridge provisioning virbr0 d9bca40f-eee1-410b-8879-a2d4bb0465e7 bridge virbr0 bridge-slave-eno1 76a8ed50-c7e5-4999-b4f6-6d9014dd0812 ethernet eno1 bridge-slave-eno2 f31c3353-54b7-48de-893a-02d2b34c4736 ethernet eno2 3.5. Creating a manifest object that includes a customized br-ex bridge As an alternative to using the configure-ovs.sh shell script to set a br-ex bridge on a bare-metal platform, you can create a MachineConfig object that includes an NMState configuration file. The NMState configuration file creates a customized br-ex bridge network configuration on each node in your cluster. Consider the following use cases for creating a manifest object that includes a customized br-ex bridge: You want to make postinstallation changes to the bridge, such as changing the Open vSwitch (OVS) or OVN-Kubernetes br-ex bridge network. The configure-ovs.sh shell script does not support making postinstallation changes to the bridge. You want to deploy the bridge on a different interface than the interface available on a host or server IP address. You want to make advanced configurations to the bridge that are not possible with the configure-ovs.sh shell script. Using the script for these configurations might result in the bridge failing to connect multiple network interfaces and facilitating data forwarding between the interfaces. Note If you require an environment with a single network interface controller (NIC) and default network settings, use the configure-ovs.sh shell script. After you install Red Hat Enterprise Linux CoreOS (RHCOS) and the system reboots, the Machine Config Operator injects Ignition configuration files into each node in your cluster, so that each node received the br-ex bridge network configuration. To prevent configuration conflicts, the configure-ovs.sh shell script receives a signal to not configure the br-ex bridge. 
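After the cluster is installed, you can optionally confirm that a node picked up the customized bridge configuration; this check is a suggestion rather than part of the documented procedure, and <node_name> is a placeholder for one of your cluster nodes: USD oc debug node/<node_name> -- chroot /host ip addr show br-ex With OVN-Kubernetes, the br-ex interface should be up and carry the node IP address.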
Prerequisites Optional: You have installed the nmstate API so that you can validate the NMState configuration. Procedure Create a NMState configuration file that has decoded base64 information for your customized br-ex bridge network: Example of an NMState configuration for a customized br-ex bridge network interfaces: - name: enp2s0 1 type: ethernet 2 state: up 3 ipv4: enabled: false 4 ipv6: enabled: false - name: br-ex type: ovs-bridge state: up ipv4: enabled: false dhcp: false ipv6: enabled: false dhcp: false bridge: port: - name: enp2s0 5 - name: br-ex - name: br-ex type: ovs-interface state: up copy-mac-from: enp2s0 ipv4: enabled: true dhcp: true ipv6: enabled: false dhcp: false # ... 1 Name of the interface. 2 The type of ethernet. 3 The requested state for the interface after creation. 4 Disables IPv4 and IPv6 in this example. 5 The node NIC to which the bridge attaches. Use the cat command to base64-encode the contents of the NMState configuration: USD cat <nmstate_configuration>.yaml | base64 1 1 Replace <nmstate_configuration> with the name of your NMState resource YAML file. Create a MachineConfig manifest file and define a customized br-ex bridge network configuration analogous to the following example: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 10-br-ex-worker 2 spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,<base64_encoded_nmstate_configuration> 3 mode: 0644 overwrite: true path: /etc/nmstate/openshift/cluster.yml # ... 1 For each node in your cluster, specify the hostname path to your node and the base-64 encoded Ignition configuration file data for the machine type. If you have a single global configuration specified in an /etc/nmstate/openshift/cluster.yml configuration file that you want to apply to all nodes in your cluster, you do not need to specify the hostname path for each node. The worker role is the default role for nodes in your cluster. The .yaml extension does not work when specifying the hostname path for each node or all nodes in the MachineConfig manifest file. 2 The name of the policy. 3 Writes the encoded base64 information to the specified path. 3.5.1. Optional: Scaling each machine set to compute nodes To apply a customized br-ex bridge configuration to all compute nodes in your OpenShift Container Platform cluster, you must edit your MachineConfig custom resource (CR) and modify its roles. Additionally, you must create a BareMetalHost CR that defines information for your bare-metal machine, such as hostname, credentials, and so on. After you configure these resources, you must scale machine sets, so that the machine sets can apply the resource configuration to each compute node and reboot the nodes. Prerequisites You created a MachineConfig manifest object that includes a customized br-ex bridge configuration. Procedure Edit the MachineConfig CR by entering the following command: USD oc edit mc <machineconfig_custom_resource_name> Add each compute node configuration to the CR, so that the CR can manage roles for each defined compute node in your cluster. Create a Secret object named extraworker-secret that has a minimal static IP configuration. Apply the extraworker-secret secret to each node in your cluster by entering the following command. This step provides each compute node access to the Ignition config file. 
USD oc apply -f ./extraworker-secret.yaml Create a BareMetalHost resource and specify the network secret in the preprovisioningNetworkDataName parameter: Example BareMetalHost resource with an attached network secret apiVersion: metal3.io/v1alpha1 kind: BareMetalHost spec: # ... preprovisioningNetworkDataName: ostest-extraworker-0-network-config-secret # ... To manage the BareMetalHost object within the openshift-machine-api namespace of your cluster, change to the namespace by entering the following command: USD oc project openshift-machine-api Get the machine sets: USD oc get machinesets Scale each machine set by entering the following command. You must run this command for each machine set. USD oc scale machineset <machineset_name> --replicas=<n> 1 1 Where <machineset_name> is the name of the machine set and <n> is the number of compute nodes. 3.6. Establishing communication between subnets In a typical OpenShift Container Platform cluster setup, all nodes, including the control plane and compute nodes, reside in the same network. However, for edge computing scenarios, it can be beneficial to locate compute nodes closer to the edge. This often involves using different network segments or subnets for the remote nodes than the subnet used by the control plane and local compute nodes. Such a setup can reduce latency for the edge and allow for enhanced scalability. Before installing OpenShift Container Platform, you must configure the network properly to ensure that the edge subnets containing the remote nodes can reach the subnet containing the control plane nodes and receive traffic from the control plane too. You can run control plane nodes in the same subnet or multiple subnets by configuring a user-managed load balancer in place of the default load balancer. With a multiple subnet environment, you can reduce the risk of your OpenShift Container Platform cluster from failing because of a hardware failure or a network outage. For more information, see "Services for a user-managed load balancer" and "Configuring a user-managed load balancer". Running control plane nodes in a multiple subnet environment requires completion of the following key tasks: Configuring a user-managed load balancer instead of the default load balancer by specifying UserManaged in the loadBalancer.type parameter of the install-config.yaml file. Configuring a user-managed load balancer address in the ingressVIPs and apiVIPs parameters of the install-config.yaml file. Adding the multiple subnet Classless Inter-Domain Routing (CIDR) and the user-managed load balancer IP addresses to the networking.machineNetworks parameter in the install-config.yaml file. Note Deploying a cluster with multiple subnets requires using virtual media, such as redfish-virtualmedia and idrac-virtualmedia . This procedure details the network configuration required to allow the remote compute nodes in the second subnet to communicate effectively with the control plane nodes in the first subnet and to allow the control plane nodes in the first subnet to communicate effectively with the remote compute nodes in the second subnet. In this procedure, the cluster spans two subnets: The first subnet ( 10.0.0.0 ) contains the control plane and local compute nodes. The second subnet ( 192.168.0.0 ) contains the edge compute nodes. 
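Before adding any routes, you can optionally preview how a node currently reaches the other subnet; this check is not part of the documented procedure and uses an example address from the second subnet: USD ip route get 192.168.0.10 If the output shows only the default gateway rather than a dedicated route, the following procedure adds the explicit routes that the subnets need.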
Procedure Configure the first subnet to communicate with the second subnet: Log in as root to a control plane node by running the following command: USD sudo su - Get the name of the network interface by running the following command: # nmcli dev status Add a route to the second subnet ( 192.168.0.0 ) via the gateway by running the following command: # nmcli connection modify <interface_name> +ipv4.routes "192.168.0.0/24 via <gateway>" Replace <interface_name> with the interface name. Replace <gateway> with the IP address of the actual gateway. Example # nmcli connection modify eth0 +ipv4.routes "192.168.0.0/24 via 192.168.0.1" Apply the changes by running the following command: # nmcli connection up <interface_name> Replace <interface_name> with the interface name. Verify the routing table to ensure the route has been added successfully: # ip route Repeat the steps for each control plane node in the first subnet. Note Adjust the commands to match your actual interface names and gateway. Configure the second subnet to communicate with the first subnet: Log in as root to a remote compute node by running the following command: USD sudo su - Get the name of the network interface by running the following command: # nmcli dev status Add a route to the first subnet ( 10.0.0.0 ) via the gateway by running the following command: # nmcli connection modify <interface_name> +ipv4.routes "10.0.0.0/24 via <gateway>" Replace <interface_name> with the interface name. Replace <gateway> with the IP address of the actual gateway. Example # nmcli connection modify eth0 +ipv4.routes "10.0.0.0/24 via 10.0.0.1" Apply the changes by running the following command: # nmcli connection up <interface_name> Replace <interface_name> with the interface name. Verify the routing table to ensure the route has been added successfully by running the following command: # ip route Repeat the steps for each compute node in the second subnet. Note Adjust the commands to match your actual interface names and gateway. After you have configured the networks, test the connectivity to ensure the remote nodes can reach the control plane nodes and the control plane nodes can reach the remote nodes. From the control plane nodes in the first subnet, ping a remote node in the second subnet by running the following command: USD ping <remote_node_ip_address> If the ping is successful, it means the control plane nodes in the first subnet can reach the remote nodes in the second subnet. If you do not receive a response, review the network configurations and repeat the procedure for the node. From the remote nodes in the second subnet, ping a control plane node in the first subnet by running the following command: USD ping <control_plane_node_ip_address> If the ping is successful, it means the remote compute nodes in the second subnet can reach the control plane in the first subnet. If you do not receive a response, review the network configurations and repeat the procedure for the node. 3.7. Retrieving the OpenShift Container Platform installer Use the stable-4.x version of the installation program and your selected architecture to deploy the generally available stable version of OpenShift Container Platform: USD export VERSION=stable-4.16 USD export RELEASE_ARCH=<architecture> USD export RELEASE_IMAGE=USD(curl -s https://mirror.openshift.com/pub/openshift-v4/USDRELEASE_ARCH/clients/ocp/USDVERSION/release.txt | grep 'Pull From: quay.io' | awk -F ' ' '{print USD3}') 3.8. 
Extracting the OpenShift Container Platform installer After retrieving the installer, the next step is to extract it. Procedure Set the environment variables: USD export cmd=openshift-baremetal-install USD export pullsecret_file=~/pull-secret.txt USD export extract_dir=USD(pwd) Get the oc binary: USD curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDVERSION/openshift-client-linux.tar.gz | tar zxvf - oc Extract the installer: USD sudo cp oc /usr/local/bin USD oc adm release extract --registry-config "USD{pullsecret_file}" --command=USDcmd --to "USD{extract_dir}" USD{RELEASE_IMAGE} USD sudo cp openshift-baremetal-install /usr/local/bin 3.9. Optional: Creating an RHCOS images cache To employ image caching, you must download the Red Hat Enterprise Linux CoreOS (RHCOS) image used by the bootstrap VM to provision the cluster nodes. Image caching is optional, but it is especially useful when running the installation program on a network with limited bandwidth. Note The installation program no longer needs the clusterOSImage RHCOS image because the correct image is in the release payload. If you are running the installation program on a network with limited bandwidth and the RHCOS images download takes more than 15 to 20 minutes, the installation program will time out. Caching images on a web server will help in such scenarios. Warning If you enable TLS for the HTTPD server, you must confirm the root certificate is signed by an authority trusted by the client and verify the trusted certificate chain between your OpenShift Container Platform hub and spoke clusters and the HTTPD server. Using a server configured with an untrusted certificate prevents the images from being downloaded to the image creation service. Using untrusted HTTPS servers is not supported. Install a container that contains the images. Procedure Install podman : USD sudo dnf install -y podman Open firewall port 8080 to be used for RHCOS image caching: USD sudo firewall-cmd --add-port=8080/tcp --zone=public --permanent USD sudo firewall-cmd --reload Create a directory to store the bootstrapOSImage : USD mkdir /home/kni/rhcos_image_cache Set the appropriate SELinux context for the newly created directory: USD sudo semanage fcontext -a -t httpd_sys_content_t "/home/kni/rhcos_image_cache(/.*)?"
USD sudo restorecon -Rv /home/kni/rhcos_image_cache/ Get the URI for the RHCOS image that the installation program will deploy on the bootstrap VM: USD export RHCOS_QEMU_URI=USD(/usr/local/bin/openshift-baremetal-install coreos print-stream-json | jq -r --arg ARCH "USD(arch)" '.architectures[USDARCH].artifacts.qemu.formats["qcow2.gz"].disk.location') Get the name of the image that the installation program will deploy on the bootstrap VM: USD export RHCOS_QEMU_NAME=USD{RHCOS_QEMU_URI##*/} Get the SHA hash for the RHCOS image that will be deployed on the bootstrap VM: USD export RHCOS_QEMU_UNCOMPRESSED_SHA256=USD(/usr/local/bin/openshift-baremetal-install coreos print-stream-json | jq -r --arg ARCH "USD(arch)" '.architectures[USDARCH].artifacts.qemu.formats["qcow2.gz"].disk["uncompressed-sha256"]') Download the image and place it in the /home/kni/rhcos_image_cache directory: USD curl -L USD{RHCOS_QEMU_URI} -o /home/kni/rhcos_image_cache/USD{RHCOS_QEMU_NAME} Confirm SELinux type is of httpd_sys_content_t for the new file: USD ls -Z /home/kni/rhcos_image_cache Create the pod: USD podman run -d --name rhcos_image_cache \ 1 -v /home/kni/rhcos_image_cache:/var/www/html \ -p 8080:8080/tcp \ registry.access.redhat.com/ubi9/httpd-24 1 Creates a caching webserver with the name rhcos_image_cache . This pod serves the bootstrapOSImage image in the install-config.yaml file for deployment. Generate the bootstrapOSImage configuration: USD export BAREMETAL_IP=USD(ip addr show dev baremetal | awk '/inet /{print USD2}' | cut -d"/" -f1) USD export BOOTSTRAP_OS_IMAGE="http://USD{BAREMETAL_IP}:8080/USD{RHCOS_QEMU_NAME}?sha256=USD{RHCOS_QEMU_UNCOMPRESSED_SHA256}" USD echo " bootstrapOSImage=USD{BOOTSTRAP_OS_IMAGE}" Add the required configuration to the install-config.yaml file under platform.baremetal : platform: baremetal: bootstrapOSImage: <bootstrap_os_image> 1 1 Replace <bootstrap_os_image> with the value of USDBOOTSTRAP_OS_IMAGE . See the "Configuring the install-config.yaml file" section for additional details. 3.10. Services for a user-managed load balancer You can configure an OpenShift Container Platform cluster to use a user-managed load balancer in place of the default load balancer. Important Configuring a user-managed load balancer depends on your vendor's load balancer. The information and examples in this section are for guideline purposes only. Consult the vendor documentation for more specific information about the vendor's load balancer. Red Hat supports the following services for a user-managed load balancer: Ingress Controller OpenShift API OpenShift MachineConfig API You can choose whether you want to configure one or all of these services for a user-managed load balancer. Configuring only the Ingress Controller service is a common configuration option. To better understand each service, view the following diagrams: Figure 3.1. Example network workflow that shows an Ingress Controller operating in an OpenShift Container Platform environment Figure 3.2. Example network workflow that shows an OpenShift API operating in an OpenShift Container Platform environment Figure 3.3. Example network workflow that shows an OpenShift MachineConfig API operating in an OpenShift Container Platform environment The following configuration options are supported for user-managed load balancers: Use a node selector to map the Ingress Controller to a specific set of nodes. 
You must assign a static IP address to each node in this set, or configure each node to receive the same IP address from the Dynamic Host Configuration Protocol (DHCP). Infrastructure nodes commonly receive this type of configuration. Target all IP addresses on a subnet. This configuration can reduce maintenance overhead, because you can create and destroy nodes within those networks without reconfiguring the load balancer targets. If you deploy your ingress pods by using a machine set on a smaller network, such as a /27 or /28 , you can simplify your load balancer targets. Tip You can list all IP addresses that exist in a network by checking the machine config pool's resources. Before you configure a user-managed load balancer for your OpenShift Container Platform cluster, consider the following information: For a front-end IP address, you can use the same IP address for the Ingress Controller's load balancer and the API load balancer. Check the vendor's documentation for this capability. For a back-end IP address, ensure that an IP address for an OpenShift Container Platform control plane node does not change during the lifetime of the user-managed load balancer. You can achieve this by completing one of the following actions: Assign a static IP address to each control plane node. Configure each node to receive the same IP address from the DHCP server every time the node requests a DHCP lease. Depending on the vendor, the DHCP lease might be in the form of an IP reservation or a static DHCP assignment. Manually define each node that runs the Ingress Controller in the user-managed load balancer for the Ingress Controller back-end service. For example, if the Ingress Controller moves to an undefined node, a connection outage can occur. 3.10.1. Configuring a user-managed load balancer You can configure an OpenShift Container Platform cluster to use a user-managed load balancer in place of the default load balancer. Important Before you configure a user-managed load balancer, ensure that you read the "Services for a user-managed load balancer" section. Read the following prerequisites that apply to the service that you want to configure for your user-managed load balancer. Note MetalLB, which runs on a cluster, functions as a user-managed load balancer. OpenShift API prerequisites You defined a front-end IP address. TCP ports 6443 and 22623 are exposed on the front-end IP address of your load balancer. Check the following items: Port 6443 provides access to the OpenShift API service. Port 22623 can provide ignition startup configurations to nodes. The front-end IP address and port 6443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster. The front-end IP address and port 22623 are reachable only by OpenShift Container Platform nodes. The load balancer backend can communicate with OpenShift Container Platform control plane nodes on ports 6443 and 22623. Ingress Controller prerequisites You defined a front-end IP address. TCP ports 443 and 80 are exposed on the front-end IP address of your load balancer. The front-end IP address and ports 80 and 443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster. The front-end IP address and ports 80 and 443 are reachable by all nodes that operate in your OpenShift Container Platform cluster.
The load balancer backend can communicate with OpenShift Container Platform nodes that run the Ingress Controller on ports 80, 443, and 1936. Prerequisite for health check URL specifications You can configure most load balancers by setting health check URLs that determine if a service is available or unavailable. OpenShift Container Platform provides these health checks for the OpenShift API, Machine Configuration API, and Ingress Controller backend services. The following examples show health check specifications for the previously listed backend services: Example of a Kubernetes API health check specification Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10 Example of a Machine Config API health check specification Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10 Example of an Ingress Controller health check specification Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10 Procedure Configure the HAProxy Ingress Controller, so that you can enable access to the cluster from your load balancer on ports 6443, 22623, 443, and 80. Depending on your needs, you can specify the IP address of a single subnet or IP addresses from multiple subnets in your HAProxy configuration. Example HAProxy configuration with one listed subnet # ... listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2 # ... Example HAProxy configuration with multiple listed subnets # ... 
listen api-server-6443 bind *:6443 mode tcp server master-00 192.168.83.89:6443 check inter 1s server master-01 192.168.84.90:6443 check inter 1s server master-02 192.168.85.99:6443 check inter 1s server bootstrap 192.168.80.89:6443 check inter 1s listen machine-config-server-22623 bind *:22623 mode tcp server master-00 192.168.83.89:22623 check inter 1s server master-01 192.168.84.90:22623 check inter 1s server master-02 192.168.85.99:22623 check inter 1s server bootstrap 192.168.80.89:22623 check inter 1s listen ingress-router-80 bind *:80 mode tcp balance source server worker-00 192.168.83.100:80 check inter 1s server worker-01 192.168.83.101:80 check inter 1s listen ingress-router-443 bind *:443 mode tcp balance source server worker-00 192.168.83.100:443 check inter 1s server worker-01 192.168.83.101:443 check inter 1s listen ironic-api-6385 bind *:6385 mode tcp balance source server master-00 192.168.83.89:6385 check inter 1s server master-01 192.168.84.90:6385 check inter 1s server master-02 192.168.85.99:6385 check inter 1s server bootstrap 192.168.80.89:6385 check inter 1s listen inspector-api-5050 bind *:5050 mode tcp balance source server master-00 192.168.83.89:5050 check inter 1s server master-01 192.168.84.90:5050 check inter 1s server master-02 192.168.85.99:5050 check inter 1s server bootstrap 192.168.80.89:5050 check inter 1s # ... Use the curl CLI command to verify that the user-managed load balancer and its resources are operational: Verify that the cluster machine configuration API is accessible to the Kubernetes API server resource, by running the following command and observing the response: USD curl https://<loadbalancer_ip_address>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that the cluster machine configuration API is accessible to the Machine config server resource, by running the following command and observing the output: USD curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK Content-Length: 0 Verify that the controller is accessible to the Ingress Controller resource on port 80, by running the following command and observing the output: USD curl -I -L -H "Host: console-openshift-console.apps.<cluster_name>.<base_domain>" http://<load_balancer_front_end_IP_address> If the configuration is correct, the output from the command shows the following response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache Verify that the controller is accessible to the Ingress Controller resource on port 443, by running the following command and observing the output: USD curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain> If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff 
x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private Configure the DNS records for your cluster to target the front-end IP addresses of the user-managed load balancer. You must update records to your DNS server for the cluster API and applications over the load balancer. Examples of modified DNS records <load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End <load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End Important DNS propagation might take some time for each DNS record to become available. Ensure that each DNS record propagates before validating each record. For your OpenShift Container Platform cluster to use the user-managed load balancer, you must specify the following configuration in your cluster's install-config.yaml file: # ... platform: baremetal: loadBalancer: type: UserManaged 1 apiVIPs: - <api_ip> 2 ingressVIPs: - <ingress_ip> 3 # ... 1 Set UserManaged for the type parameter to specify a user-managed load balancer for your cluster. The parameter defaults to OpenShiftManagedDefault , which denotes the default internal load balancer. For services defined in an openshift-kni-infra namespace, a user-managed load balancer can deploy the coredns service to pods in your cluster but ignores keepalived and haproxy services. 2 Required parameter when you specify a user-managed load balancer. Specify the user-managed load balancer's public IP address, so that the Kubernetes API can communicate with the user-managed load balancer. 3 Required parameter when you specify a user-managed load balancer. Specify the user-managed load balancer's public IP address, so that the user-managed load balancer can manage ingress traffic for your cluster. 
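Because DNS propagation can take time, you can optionally confirm that the records resolve to the load balancer front end before running the verification steps. The following spot check assumes the dig utility is available on the provisioner node; the record names follow the examples above:
USD dig +short api.<cluster_name>.<base_domain>
USD dig +short console-openshift-console.apps.<cluster_name>.<base_domain>
Both queries should return the <load_balancer_ip_address> value that you configured in the DNS records.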
Verification Use the curl CLI command to verify that the user-managed load balancer and DNS record configuration are operational: Verify that you can access the cluster API, by running the following command and observing the output: USD curl https://api.<cluster_name>.<base_domain>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that you can access the cluster machine configuration, by running the following command and observing the output: USD curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK Content-Length: 0 Verify that you can access each cluster application on port, by running the following command and observing the output: USD curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private Verify that you can access each cluster application on port 443, by running the following command and observing the output: USD curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private 3.11. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, NetworkManager sets the hostnames. By default, DHCP provides the hostnames to NetworkManager , which is the recommended method. NetworkManager gets the hostnames through a reverse DNS lookup in the following cases: If DHCP does not provide the hostnames If you use kernel arguments to set the hostnames If you use another method to set the hostnames Reverse DNS lookup occurs after the network has been initialized on a node, and can increase the time it takes NetworkManager to set the hostname. Other system services can start prior to NetworkManager setting the hostname, which can cause those services to use a default hostname such as localhost . 
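If you suspect that a node booted before it received its hostname, you can check the hostname directly on the node. For example, the following command, which is available on RHCOS, shows the static and transient hostnames; a value of localhost typically indicates that neither DHCP nor reverse DNS has supplied a hostname yet:
USD hostnamectl status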
Tip You can avoid the delay in setting hostnames by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 3.12. Configuring the install-config.yaml file 3.12.1. Configuring the install-config.yaml file The install-config.yaml file requires some additional details. Most of the information teaches the installation program and the resulting cluster enough about the available hardware that it is able to fully manage it. Note The installation program no longer needs the clusterOSImage RHCOS image because the correct image is in the release payload. Configure install-config.yaml . Change the appropriate variables to match the environment, including pullSecret and sshKey : apiVersion: v1 baseDomain: <domain> metadata: name: <cluster_name> networking: machineNetwork: - cidr: <public_cidr> networkType: OVNKubernetes compute: - name: worker replicas: 2 1 controlPlane: name: master replicas: 3 platform: baremetal: {} platform: baremetal: apiVIPs: - <api_ip> ingressVIPs: - <wildcard_ip> provisioningNetworkCIDR: <CIDR> bootstrapExternalStaticIP: <bootstrap_static_ip_address> 2 bootstrapExternalStaticGateway: <bootstrap_static_gateway> 3 bootstrapExternalStaticDNS: <bootstrap_static_dns> 4 hosts: - name: openshift-master-0 role: master bmc: address: ipmi://<out_of_band_ip> 5 username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: "<installation_disk_drive_path>" 6 - name: <openshift_master_1> role: master bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: "<installation_disk_drive_path>" - name: <openshift_master_2> role: master bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: "<installation_disk_drive_path>" - name: <openshift_worker_0> role: worker bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> - name: <openshift_worker_1> role: worker bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: "<installation_disk_drive_path>" pullSecret: '<pull_secret>' sshKey: '<ssh_pub_key>' 1 Scale the compute machines based on the number of compute nodes that are part of the OpenShift Container Platform cluster. Valid options for the replicas value are 0 and integers greater than or equal to 2 . Set the number of replicas to 0 to deploy a three-node cluster, which contains only three control plane machines. A three-node cluster is a smaller, more resource-efficient cluster that can be used for testing, development, and production. You cannot install the cluster with only one compute node. 2 When deploying a cluster with static IP addresses, you must set the bootstrapExternalStaticIP configuration setting to specify the static IP address of the bootstrap VM when there is no DHCP server on the bare-metal network. 3 When deploying a cluster with static IP addresses, you must set the bootstrapExternalStaticGateway configuration setting to specify the gateway IP address for the bootstrap VM when there is no DHCP server on the bare-metal network. 
4 When deploying a cluster with static IP addresses, you must set the bootstrapExternalStaticDNS configuration setting to specify the DNS address for the bootstrap VM when there is no DHCP server on the bare-metal network. 5 See the BMC addressing sections for more options. 6 To set the path to the installation disk drive, enter the kernel name of the disk. For example, /dev/sda . Important Because the disk discovery order is not guaranteed, the kernel name of the disk can change across booting options for machines with multiple disks. For example, /dev/sda becomes /dev/sdb and vice versa. To avoid this issue, you must use persistent disk attributes, such as the disk World Wide Name (WWN) or /dev/disk/by-path/ . It is recommended to use the /dev/disk/by-path/<device_path> link to the storage location. To use the disk WWN, replace the deviceName parameter with the wwnWithExtension parameter. Depending on the parameter that you use, enter either of the following values: The disk name. For example, /dev/sda , or /dev/disk/by-path/ . The disk WWN. For example, "0x64cd98f04fde100024684cf3034da5c2" . Ensure that you enter the disk WWN value within quotes so that it is used as a string value and not a hexadecimal value. Failure to meet these requirements for the rootDeviceHints parameter might result in the following error: ironic-inspector inspection failed: No disks satisfied root device hints Note Before OpenShift Container Platform 4.12, the cluster installation program only accepted an IPv4 address or an IPv6 address for the apiVIP and ingressVIP configuration settings. In OpenShift Container Platform 4.12 and later, these configuration settings are deprecated. Instead, use a list format in the apiVIPs and ingressVIPs configuration settings to specify IPv4 addresses, IPv6 addresses, or both IP address formats. Create a directory to store the cluster configuration: USD mkdir ~/clusterconfigs Copy the install-config.yaml file to the new directory: USD cp install-config.yaml ~/clusterconfigs Ensure all bare metal nodes are powered off prior to installing the OpenShift Container Platform cluster: USD ipmitool -I lanplus -U <user> -P <password> -H <management-server-ip> power off Remove old bootstrap resources if any are left over from a deployment attempt: for i in USD(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print USD2'}); do sudo virsh destroy USDi; sudo virsh undefine USDi; sudo virsh vol-delete USDi --pool USDi; sudo virsh vol-delete USDi.ign --pool USDi; sudo virsh pool-destroy USDi; sudo virsh pool-undefine USDi; done 3.12.2. Additional install-config parameters See the following tables for the required parameters, the hosts parameter, and the bmc parameter for the install-config.yaml file. Table 3.1. Required parameters Parameters Default Description baseDomain The domain name for the cluster. For example, example.com . bootMode UEFI The boot mode for a node. Options are legacy , UEFI , and UEFISecureBoot . If bootMode is not set, Ironic sets it while inspecting the node. bootstrapExternalStaticDNS The static network DNS of the bootstrap node. You must set this value when deploying a cluster with static IP addresses when there is no Dynamic Host Configuration Protocol (DHCP) server on the bare-metal network. If you do not set this value, the installation program will use the value from bootstrapExternalStaticGateway , which causes problems when the IP address values of the gateway and DNS are different. bootstrapExternalStaticIP The static IP address for the bootstrap VM. 
You must set this value when deploying a cluster with static IP addresses when there is no DHCP server on the bare-metal network. bootstrapExternalStaticGateway The static IP address of the gateway for the bootstrap VM. You must set this value when deploying a cluster with static IP addresses when there is no DHCP server on the bare-metal network. sshKey The sshKey configuration setting contains the key in the ~/.ssh/id_rsa.pub file required to access the control plane nodes and compute nodes. Typically, this key is from the provisioner node. pullSecret The pullSecret configuration setting contains a copy of the pull secret downloaded from the Install OpenShift on Bare Metal page when preparing the provisioner node. The name to be given to the OpenShift Container Platform cluster. For example, openshift . The public CIDR (Classless Inter-Domain Routing) of the external network. For example, 10.0.0.0/24 . The OpenShift Container Platform cluster requires a name be provided for compute nodes even if there are zero nodes. Replicas sets the number of compute nodes in the OpenShift Container Platform cluster. The OpenShift Container Platform cluster requires a name for control plane nodes. Replicas sets the number of control plane nodes included as part of the OpenShift Container Platform cluster. provisioningNetworkInterface The name of the network interface on nodes connected to the provisioning network. For OpenShift Container Platform 4.9 and later releases, use the bootMACAddress configuration setting to enable Ironic to identify the IP address of the NIC instead of using the provisioningNetworkInterface configuration setting to identify the name of the NIC. defaultMachinePlatform The default configuration used for machine pools without a platform configuration. apiVIPs (Optional) The virtual IP address for Kubernetes API communication. This setting must either be provided in the install-config.yaml file as a reserved IP from the MachineNetwork or preconfigured in the DNS so that the default name resolves correctly. Use the virtual IP address and not the FQDN when adding a value to the apiVIPs configuration setting in the install-config.yaml file. The primary IP address must be from the IPv4 network when using dual stack networking. If not set, the installation program uses api.<cluster_name>.<base_domain> to derive the IP address from the DNS. Note Before OpenShift Container Platform 4.12, the cluster installation program only accepted an IPv4 address or an IPv6 address for the apiVIP configuration setting. From OpenShift Container Platform 4.12 or later, the apiVIP configuration setting is deprecated. Instead, use a list format for the apiVIPs configuration setting to specify an IPv4 address, an IPv6 address or both IP address formats. disableCertificateVerification False redfish and redfish-virtualmedia need this parameter to manage BMC addresses. The value should be True when using a self-signed certificate for BMC addresses. ingressVIPs (Optional) The virtual IP address for ingress traffic. This setting must either be provided in the install-config.yaml file as a reserved IP from the MachineNetwork or preconfigured in the DNS so that the default name resolves correctly. Use the virtual IP address and not the FQDN when adding a value to the ingressVIPs configuration setting in the install-config.yaml file. The primary IP address must be from the IPv4 network when using dual stack networking. 
If not set, the installation program uses test.apps.<cluster_name>.<base_domain> to derive the IP address from the DNS. Note Before OpenShift Container Platform 4.12, the cluster installation program only accepted an IPv4 address or an IPv6 address for the ingressVIP configuration setting. In OpenShift Container Platform 4.12 and later, the ingressVIP configuration setting is deprecated. Instead, use a list format for the ingressVIPs configuration setting to specify an IPv4 addresses, an IPv6 addresses or both IP address formats. Table 3.2. Optional Parameters Parameters Default Description provisioningDHCPRange 172.22.0.10,172.22.0.100 Defines the IP range for nodes on the provisioning network. provisioningNetworkCIDR 172.22.0.0/24 The CIDR for the network to use for provisioning. This option is required when not using the default address range on the provisioning network. clusterProvisioningIP The third IP address of the provisioningNetworkCIDR . The IP address within the cluster where the provisioning services run. Defaults to the third IP address of the provisioning subnet. For example, 172.22.0.3 . bootstrapProvisioningIP The second IP address of the provisioningNetworkCIDR . The IP address on the bootstrap VM where the provisioning services run while the installer is deploying the control plane (master) nodes. Defaults to the second IP address of the provisioning subnet. For example, 172.22.0.2 or 2620:52:0:1307::2 . externalBridge baremetal The name of the bare-metal bridge of the hypervisor attached to the bare-metal network. provisioningBridge provisioning The name of the provisioning bridge on the provisioner host attached to the provisioning network. architecture Defines the host architecture for your cluster. Valid values are amd64 or arm64 . defaultMachinePlatform The default configuration used for machine pools without a platform configuration. bootstrapOSImage A URL to override the default operating system image for the bootstrap node. The URL must contain a SHA-256 hash of the image. For example: https://mirror.openshift.com/rhcos-<version>-qemu.qcow2.gz?sha256=<uncompressed_sha256> . provisioningNetwork The provisioningNetwork configuration setting determines whether the cluster uses the provisioning network. If it does, the configuration setting also determines if the cluster manages the network. Disabled : Set this parameter to Disabled to disable the requirement for a provisioning network. When set to Disabled , you must only use virtual media based provisioning, or bring up the cluster using the assisted installer. If Disabled and using power management, BMCs must be accessible from the bare-metal network. If Disabled , you must provide two IP addresses on the bare-metal network that are used for the provisioning services. Managed : Set this parameter to Managed , which is the default, to fully manage the provisioning network, including DHCP, TFTP, and so on. Unmanaged : Set this parameter to Unmanaged to enable the provisioning network but take care of manual configuration of DHCP. Virtual media provisioning is recommended but PXE is still available if required. httpProxy Set this parameter to the appropriate HTTP proxy used within your environment. httpsProxy Set this parameter to the appropriate HTTPS proxy used within your environment. noProxy Set this parameter to the appropriate list of exclusions for proxy usage within your environment. Hosts The hosts parameter is a list of separate bare metal assets used to build the cluster. Table 3.3. 
Hosts Name Default Description name The name of the BareMetalHost resource to associate with the details. For example, openshift-master-0 . role The role of the bare metal node. Either master (control plane node) or worker (compute node). bmc Connection details for the baseboard management controller. See the BMC addressing section for additional details. bootMACAddress The MAC address of the NIC that the host uses for the provisioning network. Ironic retrieves the IP address using the bootMACAddress configuration setting. Then, it binds to the host. Note You must provide a valid MAC address from the host if you disabled the provisioning network. networkConfig Set this optional parameter to configure the network interface of a host. See "(Optional) Configuring host network interfaces" for additional details. 3.12.3. BMC addressing Most vendors support Baseboard Management Controller (BMC) addressing with the Intelligent Platform Management Interface (IPMI). IPMI does not encrypt communications. It is suitable for use within a data center over a secured or dedicated management network. Check with your vendor to see if they support Redfish network boot. Redfish delivers simple and secure management for converged, hybrid IT and the Software Defined Data Center (SDDC). Redfish is human readable and machine capable, and leverages common internet and web services standards to expose information directly to the modern tool chain. If your hardware does not support Redfish network boot, use IPMI. You can modify the BMC address during installation while the node is in the Registering state. If you need to modify the BMC address after the node leaves the Registering state, you must disconnect the node from Ironic, edit the BareMetalHost resource, and reconnect the node to Ironic. See the Editing a BareMetalHost resource section for details. IPMI Hosts using IPMI use the ipmi://<out-of-band-ip>:<port> address format, which defaults to port 623 if not specified. The following example demonstrates an IPMI configuration within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: ipmi://<out-of-band-ip> username: <user> password: <password> Important The provisioning network is required when PXE booting using IPMI for BMC addressing. It is not possible to PXE boot hosts without a provisioning network. If you deploy without a provisioning network, you must use a virtual media BMC addressing option such as redfish-virtualmedia or idrac-virtualmedia . See "Redfish virtual media for HPE iLO" in the "BMC addressing for HPE iLO" section or "Redfish virtual media for Dell iDRAC" in the "BMC addressing for Dell iDRAC" section for additional details. Redfish network boot To enable Redfish, use redfish:// or redfish+http:// to disable TLS. The installer requires both the hostname or the IP address and the path to the system ID. The following example demonstrates a Redfish configuration within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. 
The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True Additional resources Editing a BareMetalHost resource 3.12.4. Verifying support for Redfish APIs When installing using the Redfish API, the installation program calls several Redfish endpoints on the baseboard management controller (BMC) when using installer-provisioned infrastructure on bare metal. If you use Redfish, ensure that your BMC supports all of the Redfish APIs before installation. Procedure Set the IP address or hostname of the BMC by running the following command: USD export SERVER=<ip_address> 1 1 Replace <ip_address> with the IP address or hostname of the BMC. Set the ID of the system by running the following command: USD export SystemID=<system_id> 1 1 Replace <system_id> with the system ID. For example, System.Embedded.1 or 1 . See the following vendor-specific BMC sections for details. List of Redfish APIs Check power on support by running the following command: USD curl -u USDUSER:USDPASS -X POST -H'Content-Type: application/json' -H'Accept: application/json' -d '{"ResetType": "On"}' https://USDSERVER/redfish/v1/Systems/USDSystemID/Actions/ComputerSystem.Reset Check power off support by running the following command: USD curl -u USDUSER:USDPASS -X POST -H'Content-Type: application/json' -H'Accept: application/json' -d '{"ResetType": "ForceOff"}' https://USDSERVER/redfish/v1/Systems/USDSystemID/Actions/ComputerSystem.Reset Check the temporary boot implementation that uses pxe by running the following command: USD curl -u USDUSER:USDPASS -X PATCH -H "Content-Type: application/json" -H "If-Match: <ETAG>" https://USDServer/redfish/v1/Systems/USDSystemID/ -d '{"Boot": {"BootSourceOverrideTarget": "pxe", "BootSourceOverrideEnabled": "Once"}} Check the status of setting the firmware boot mode that uses Legacy or UEFI by running the following command: USD curl -u USDUSER:USDPASS -X PATCH -H "Content-Type: application/json" -H "If-Match: <ETAG>" https://USDServer/redfish/v1/Systems/USDSystemID/ -d '{"Boot": {"BootSourceOverrideMode":"UEFI"}} List of Redfish virtual media APIs Check the ability to set the temporary boot device that uses cd or dvd by running the following command: USD curl -u USDUSER:USDPASS -X PATCH -H "Content-Type: application/json" -H "If-Match: <ETAG>" https://USDServer/redfish/v1/Systems/USDSystemID/ -d '{"Boot": {"BootSourceOverrideTarget": "cd", "BootSourceOverrideEnabled": "Once"}}' Virtual media might use POST or PATCH , depending on your hardware. Check the ability to mount virtual media by running one of the following commands: USD curl -u USDUSER:USDPASS -X POST -H "Content-Type: application/json" https://USDServer/redfish/v1/Managers/USDManagerID/VirtualMedia/USDVmediaId -d '{"Image": "https://example.com/test.iso", "TransferProtocolType": "HTTPS", "UserName": "", "Password":""}' USD curl -u USDUSER:USDPASS -X PATCH -H "Content-Type: application/json" -H "If-Match: <ETAG>" https://USDServer/redfish/v1/Managers/USDManagerID/VirtualMedia/USDVmediaId -d '{"Image": "https://example.com/test.iso", "TransferProtocolType": "HTTPS", "UserName": "", "Password":""}' Note The PowerOn and PowerOff commands for Redfish APIs are the same for the Redfish virtual media APIs. 
In some hardware, you might only find the VirtualMedia resource under Systems/USDSystemID instead of Managers/USDManagerID . For the VirtualMedia resource, the UserName and Password fields are optional. Important HTTPS and HTTP are the only supported parameter types for TransferProtocolTypes . 3.12.5. BMC addressing for Dell iDRAC The address field for each bmc entry is a URL for connecting to the OpenShift Container Platform cluster nodes, including the type of controller in the URL scheme and its location on the network. platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password> 1 The address configuration setting specifies the protocol. For Dell hardware, Red Hat supports integrated Dell Remote Access Controller (iDRAC) virtual media, Redfish network boot, and IPMI. BMC address formats for Dell iDRAC Protocol Address Format iDRAC virtual media idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 Redfish network boot redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 IPMI ipmi://<out-of-band-ip> Important Use idrac-virtualmedia as the protocol for Redfish virtual media. redfish-virtualmedia will not work on Dell hardware. Dell's idrac-virtualmedia uses the Redfish standard with Dell's OEM extensions. See the following sections for additional details. Redfish virtual media for Dell iDRAC For Redfish virtual media on Dell servers, use idrac-virtualmedia:// in the address setting. Using redfish-virtualmedia:// will not work. Note Use idrac-virtualmedia:// as the protocol for Redfish virtual media. Using redfish-virtualmedia:// will not work on Dell hardware, because the idrac-virtualmedia:// protocol corresponds to the idrac hardware type and the Redfish protocol in Ironic. Dell's idrac-virtualmedia:// protocol uses the Redfish standard with Dell's OEM extensions. Ironic also supports the idrac type with the WSMAN protocol. Therefore, you must specify idrac-virtualmedia:// to avoid unexpected behavior when electing to use Redfish with virtual media on Dell hardware. The following example demonstrates using iDRAC virtual media within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password> While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. Note Ensure the OpenShift Container Platform cluster nodes have AutoAttach enabled through the iDRAC console. The menu path is: Configuration Virtual Media Attach Mode AutoAttach . The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password> disableCertificateVerification: True Redfish network boot for iDRAC To enable Redfish, use redfish:// or redfish+http:// to disable transport layer security (TLS). The installer requires both the hostname or the IP address and the path to the system ID. The following example demonstrates a Redfish configuration within the install-config.yaml file. 
platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password> While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password> disableCertificateVerification: True Note There is a known issue on Dell iDRAC 9 with firmware version 04.40.00.00 and all releases up to including the 5.xx series for installer-provisioned installations on bare metal deployments. The virtual console plugin defaults to eHTML5, an enhanced version of HTML5, which causes problems with the InsertVirtualMedia workflow. Set the plugin to use HTML5 to avoid this issue. The menu path is Configuration Virtual console Plug-in Type HTML5 . Ensure the OpenShift Container Platform cluster nodes have AutoAttach enabled through the iDRAC console. The menu path is: Configuration Virtual Media Attach Mode AutoAttach . 3.12.6. BMC addressing for HPE iLO The address field for each bmc entry is a URL for connecting to the OpenShift Container Platform cluster nodes, including the type of controller in the URL scheme and its location on the network. platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password> 1 The address configuration setting specifies the protocol. For HPE integrated Lights Out (iLO), Red Hat supports Redfish virtual media, Redfish network boot, and IPMI. Table 3.4. BMC address formats for HPE iLO Protocol Address Format Redfish virtual media redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1 Redfish network boot redfish://<out-of-band-ip>/redfish/v1/Systems/1 IPMI ipmi://<out-of-band-ip> See the following sections for additional details. Redfish virtual media for HPE iLO To enable Redfish virtual media for HPE servers, use redfish-virtualmedia:// in the address setting. The following example demonstrates using Redfish virtual media within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True Note Redfish virtual media is not supported on 9th generation systems running iLO4, because Ironic does not support iLO4 with virtual media. Redfish network boot for HPE iLO To enable Redfish, use redfish:// or redfish+http:// to disable TLS. 
The installer requires both the hostname or the IP address and the path to the system ID. The following example demonstrates a Redfish configuration within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True 3.12.7. BMC addressing for Fujitsu iRMC The address field for each bmc entry is a URL for connecting to the OpenShift Container Platform cluster nodes, including the type of controller in the URL scheme and its location on the network. platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password> 1 The address configuration setting specifies the protocol. For Fujitsu hardware, Red Hat supports integrated Remote Management Controller (iRMC) and IPMI. Table 3.5. BMC address formats for Fujitsu iRMC Protocol Address Format iRMC irmc://<out-of-band-ip> IPMI ipmi://<out-of-band-ip> iRMC Fujitsu nodes can use irmc://<out-of-band-ip> and defaults to port 443 . The following example demonstrates an iRMC configuration within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: irmc://<out-of-band-ip> username: <user> password: <password> Note Currently Fujitsu supports iRMC S5 firmware version 3.05P and above for installer-provisioned installation on bare metal. 3.12.8. BMC addressing for Cisco CIMC The address field for each bmc entry is a URL for connecting to the OpenShift Container Platform cluster nodes, including the type of controller in the URL scheme and its location on the network. platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password> 1 The address configuration setting specifies the protocol. For Cisco UCS UCSX-210C-M6 hardware, Red Hat supports Cisco Integrated Management Controller (CIMC). Table 3.6. BMC address format for Cisco CIMC Protocol Address Format Redfish virtual media redfish-virtualmedia://<server_kvm_ip>/redfish/v1/Systems/<serial_number> To enable Redfish virtual media for Cisco UCS UCSX-210C-M6 hardware, use redfish-virtualmedia:// in the address setting. The following example demonstrates using Redfish virtual media within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<server_kvm_ip>/redfish/v1/Systems/<serial_number> username: <user> password: <password> While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration by using the disableCertificateVerification: True configuration parameter within the install-config.yaml file. 
platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<server_kvm_ip>/redfish/v1/Systems/<serial_number> username: <user> password: <password> disableCertificateVerification: True 3.12.9. Root device hints The rootDeviceHints parameter enables the installer to provision the Red Hat Enterprise Linux CoreOS (RHCOS) image to a particular device. The installer examines the devices in the order it discovers them, and compares the discovered values with the hint values. The installer uses the first discovered device that matches the hint value. The configuration can combine multiple hints, but a device must match all hints for the installer to select it. Table 3.7. Subfields Subfield Description deviceName A string containing a Linux device name such as /dev/vda or /dev/disk/by-path/ . It is recommended to use the /dev/disk/by-path/<device_path> link to the storage location. The hint must match the actual value exactly. hctl A string containing a SCSI bus address like 0:0:0:0 . The hint must match the actual value exactly. model A string containing a vendor-specific device identifier. The hint can be a substring of the actual value. vendor A string containing the name of the vendor or manufacturer of the device. The hint can be a sub-string of the actual value. serialNumber A string containing the device serial number. The hint must match the actual value exactly. minSizeGigabytes An integer representing the minimum size of the device in gigabytes. wwn A string containing the unique storage identifier. The hint must match the actual value exactly. wwnWithExtension A string containing the unique storage identifier with the vendor extension appended. The hint must match the actual value exactly. wwnVendorExtension A string containing the unique vendor storage identifier. The hint must match the actual value exactly. rotational A boolean indicating whether the device should be a rotating disk (true) or not (false). Example usage - name: master-0 role: master bmc: address: ipmi://10.10.0.3:6203 username: admin password: redhat bootMACAddress: de:ad:be:ef:00:40 rootDeviceHints: deviceName: "/dev/sda" 3.12.10. Optional: Setting proxy settings To deploy an OpenShift Container Platform cluster using a proxy, make the following changes to the install-config.yaml file. apiVersion: v1 baseDomain: <domain> proxy: httpProxy: http://USERNAME:[email protected]:PORT httpsProxy: https://USERNAME:[email protected]:PORT noProxy: <WILDCARD_OF_DOMAIN>,<PROVISIONING_NETWORK/CIDR>,<BMC_ADDRESS_RANGE/CIDR> The following is an example of noProxy with values. noProxy: .example.com,172.22.0.0/24,10.10.0.0/24 With a proxy enabled, set the appropriate values of the proxy in the corresponding key/value pair. Key considerations: If the proxy does not have an HTTPS proxy, change the value of httpsProxy from https:// to http:// . If using a provisioning network, include it in the noProxy setting, otherwise the installer will fail. Set all of the proxy settings as environment variables within the provisioner node. For example, HTTP_PROXY , HTTPS_PROXY , and NO_PROXY . Note When provisioning with IPv6, you cannot define a CIDR address block in the noProxy settings. You must define each address separately. 3.12.11. Optional: Deploying with no provisioning network To deploy an OpenShift Container Platform cluster without a provisioning network, make the following changes to the install-config.yaml file. 
platform: baremetal: apiVIPs: - <api_VIP> ingressVIPs: - <ingress_VIP> provisioningNetwork: "Disabled" 1 1 Add the provisioningNetwork configuration setting, if needed, and set it to Disabled . Important The provisioning network is required for PXE booting. If you deploy without a provisioning network, you must use a virtual media BMC addressing option such as redfish-virtualmedia or idrac-virtualmedia . See "Redfish virtual media for HPE iLO" in the "BMC addressing for HPE iLO" section or "Redfish virtual media for Dell iDRAC" in the "BMC addressing for Dell iDRAC" section for additional details. 3.12.12. Optional: Deploying with dual-stack networking For dual-stack networking in OpenShift Container Platform clusters, you can configure IPv4 and IPv6 address endpoints for cluster nodes. To configure IPv4 and IPv6 address endpoints for cluster nodes, edit the machineNetwork , clusterNetwork , and serviceNetwork configuration settings in the install-config.yaml file. Each setting must have two CIDR entries each. For a cluster with the IPv4 family as the primary address family, specify the IPv4 setting first. For a cluster with the IPv6 family as the primary address family, specify the IPv6 setting first. machineNetwork: - cidr: {{ extcidrnet }} - cidr: {{ extcidrnet6 }} clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd03::/112 Important On a bare-metal platform, if you specified an NMState configuration in the networkConfig section of your install-config.yaml file, add interfaces.wait-ip: ipv4+ipv6 to the NMState YAML file to resolve an issue that prevents your cluster from deploying on a dual-stack network. Example NMState YAML configuration file that includes the wait-ip parameter networkConfig: nmstate: interfaces: - name: <interface_name> # ... wait-ip: ipv4+ipv6 # ... To provide an interface to the cluster for applications that use IPv4 and IPv6 addresses, configure IPv4 and IPv6 virtual IP (VIP) address endpoints for the Ingress VIP and API VIP services. To configure IPv4 and IPv6 address endpoints, edit the apiVIPs and ingressVIPs configuration settings in the install-config.yaml file . The apiVIPs and ingressVIPs configuration settings use a list format. The order of the list indicates the primary and secondary VIP address for each service. platform: baremetal: apiVIPs: - <api_ipv4> - <api_ipv6> ingressVIPs: - <wildcard_ipv4> - <wildcard_ipv6> Note For a cluster with dual-stack networking configuration, you must assign both IPv4 and IPv6 addresses to the same interface. 3.12.13. Optional: Configuring host network interfaces Before installation, you can set the networkConfig configuration setting in the install-config.yaml file to configure host network interfaces using NMState. The most common use case for this functionality is to specify a static IP address on the bare-metal network, but you can also configure other networks such as a storage network. This functionality supports other NMState features such as VLAN, VXLAN, bridges, bonds, routes, MTU, and DNS resolver settings. Prerequisites Configure a PTR DNS record with a valid hostname for each node with a static IP address. Install the NMState CLI ( nmstate ). Procedure Optional: Consider testing the NMState syntax with nmstatectl gc before including it in the install-config.yaml file, because the installer will not check the NMState YAML syntax. Note Errors in the YAML syntax might result in a failure to apply the network configuration. 
Additionally, maintaining the validated YAML syntax is useful when applying changes using Kubernetes NMState after deployment or when expanding the cluster. Create an NMState YAML file: interfaces: - name: <nic1_name> 1 type: ethernet state: up ipv4: address: - ip: <ip_address> 2 prefix-length: 24 enabled: true dns-resolver: config: server: - <dns_ip_address> 3 routes: config: - destination: 0.0.0.0/0 next-hop-address: <next_hop_ip_address> 4 next-hop-interface: <next_hop_nic1_name> 5 1 2 3 4 5 Replace <nic1_name> , <ip_address> , <dns_ip_address> , <next_hop_ip_address> and <next_hop_nic1_name> with appropriate values. Test the configuration file by running the following command: USD nmstatectl gc <nmstate_yaml_file> Replace <nmstate_yaml_file> with the configuration file name. Use the networkConfig configuration setting by adding the NMState configuration to hosts within the install-config.yaml file: hosts: - name: openshift-master-0 role: master bmc: address: redfish+http://<out_of_band_ip>/redfish/v1/Systems/ username: <user> password: <password> disableCertificateVerification: null bootMACAddress: <NIC1_mac_address> bootMode: UEFI rootDeviceHints: deviceName: "/dev/sda" networkConfig: 1 interfaces: - name: <nic1_name> 2 type: ethernet state: up ipv4: address: - ip: <ip_address> 3 prefix-length: 24 enabled: true dns-resolver: config: server: - <dns_ip_address> 4 routes: config: - destination: 0.0.0.0/0 next-hop-address: <next_hop_ip_address> 5 next-hop-interface: <next_hop_nic1_name> 6 1 Add the NMState YAML syntax to configure the host interfaces. 2 3 4 5 6 Replace <nic1_name> , <ip_address> , <dns_ip_address> , <next_hop_ip_address> and <next_hop_nic1_name> with appropriate values. Important After deploying the cluster, you cannot modify the networkConfig configuration setting of the install-config.yaml file to make changes to the host network interface. Use the Kubernetes NMState Operator to make changes to the host network interface after deployment. 3.12.14. Configuring host network interfaces for subnets For edge computing scenarios, it can be beneficial to locate compute nodes closer to the edge. To locate remote nodes in subnets, you might use different network segments or subnets for the remote nodes than you used for the control plane subnet and local compute nodes. You can reduce latency for the edge and allow for enhanced scalability by setting up subnets for edge computing scenarios. Important When using the default load balancer, OpenShiftManagedDefault and adding remote nodes to your OpenShift Container Platform cluster, all control plane nodes must run in the same subnet. When using more than one subnet, you can also configure the Ingress VIP to run on the control plane nodes by using a manifest. See "Configuring network components to run on the control plane" for details. If you have established different network segments or subnets for remote nodes as described in the section on "Establishing communication between subnets", you must specify the subnets in the machineNetwork configuration setting if the workers are using static IP addresses, bonds or other advanced networking. When setting the node IP address in the networkConfig parameter for each remote node, you must also specify the gateway and the DNS server for the subnet containing the control plane nodes when using static IP addresses. This ensures that the remote nodes can reach the subnet containing the control plane and that they can receive network traffic from the control plane.
Note Deploying a cluster with multiple subnets requires using virtual media, such as redfish-virtualmedia or idrac-virtualmedia , because remote nodes cannot access the local provisioning network. Procedure Add the subnets to the machineNetwork in the install-config.yaml file when using static IP addresses: networking: machineNetwork: - cidr: 10.0.0.0/24 - cidr: 192.168.0.0/24 networkType: OVNKubernetes Add the gateway and DNS configuration to the networkConfig parameter of each edge compute node using NMState syntax when using a static IP address or advanced networking such as bonds: networkConfig: interfaces: - name: <interface_name> 1 type: ethernet state: up ipv4: enabled: true dhcp: false address: - ip: <node_ip> 2 prefix-length: 24 gateway: <gateway_ip> 3 dns-resolver: config: server: - <dns_ip> 4 1 Replace <interface_name> with the interface name. 2 Replace <node_ip> with the IP address of the node. 3 Replace <gateway_ip> with the IP address of the gateway. 4 Replace <dns_ip> with the IP address of the DNS server. 3.12.15. Optional: Configuring address generation modes for SLAAC in dual-stack networks For dual-stack clusters that use Stateless Address AutoConfiguration (SLAAC), you must specify a global value for the ipv6.addr-gen-mode network setting. You can set this value using NMState to configure the RAM disk and the cluster configuration files. If you do not configure a consistent ipv6.addr-gen-mode in these locations, IPv6 address mismatches can occur between CSR resources and BareMetalHost resources in the cluster. Prerequisites Install the NMState CLI ( nmstate ). Procedure Optional: Consider testing the NMState YAML syntax with the nmstatectl gc command before including it in the install-config.yaml file because the installation program will not check the NMState YAML syntax. Create an NMState YAML file: interfaces: - name: eth0 ipv6: addr-gen-mode: <address_mode> 1 1 Replace <address_mode> with the type of address generation mode required for IPv6 addresses in the cluster. Valid values are eui64 , stable-privacy , or random . Test the configuration file by running the following command: USD nmstatectl gc <nmstate_yaml_file> 1 1 Replace <nmstate_yaml_file> with the name of the test configuration file. Add the NMState configuration to the hosts.networkConfig section within the install-config.yaml file: hosts: - name: openshift-master-0 role: master bmc: address: redfish+http://<out_of_band_ip>/redfish/v1/Systems/ username: <user> password: <password> disableCertificateVerification: null bootMACAddress: <NIC1_mac_address> bootMode: UEFI rootDeviceHints: deviceName: "/dev/sda" networkConfig: interfaces: - name: eth0 ipv6: addr-gen-mode: <address_mode> 1 ... 1 Replace <address_mode> with the type of address generation mode required for IPv6 addresses in the cluster. Valid values are eui64 , stable-privacy , or random . 3.12.16. Optional: Configuring host network interfaces for dual port NIC Before installation, you can set the networkConfig configuration setting in the install-config.yaml file to configure host network interfaces by using NMState to support dual port NIC. Important Support for Day 1 operations associated with enabling NIC partitioning for SR-IOV devices is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. 
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenShift Virtualization only supports the following bond modes: mode=1 active-backup mode=2 balance-xor mode=4 802.3ad Prerequisites Configure a PTR DNS record with a valid hostname for each node with a static IP address. Install the NMState CLI ( nmstate ). Note Errors in the YAML syntax might result in a failure to apply the network configuration. Additionally, maintaining the validated YAML syntax is useful when applying changes by using Kubernetes NMState after deployment or when expanding the cluster. Procedure Add the NMState configuration to the networkConfig field to hosts within the install-config.yaml file: hosts: - name: worker-0 role: worker bmc: address: redfish+http://<out_of_band_ip>/redfish/v1/Systems/ username: <user> password: <password> disableCertificateVerification: false bootMACAddress: <NIC1_mac_address> bootMode: UEFI networkConfig: 1 interfaces: 2 - name: eno1 3 type: ethernet 4 state: up mac-address: 0c:42:a1:55:f3:06 ipv4: enabled: true dhcp: false 5 ethernet: sr-iov: total-vfs: 2 6 ipv6: enabled: false dhcp: false - name: sriov:eno1:0 type: ethernet state: up 7 ipv4: enabled: false 8 ipv6: enabled: false - name: sriov:eno1:1 type: ethernet state: down - name: eno2 type: ethernet state: up mac-address: 0c:42:a1:55:f3:07 ipv4: enabled: true ethernet: sr-iov: total-vfs: 2 ipv6: enabled: false - name: sriov:eno2:0 type: ethernet state: up ipv4: enabled: false ipv6: enabled: false - name: sriov:eno2:1 type: ethernet state: down - name: bond0 type: bond state: up min-tx-rate: 100 9 max-tx-rate: 200 10 link-aggregation: mode: active-backup 11 options: primary: sriov:eno1:0 12 port: - sriov:eno1:0 - sriov:eno2:0 ipv4: address: - ip: 10.19.16.57 13 prefix-length: 23 dhcp: false enabled: true ipv6: enabled: false dns-resolver: config: server: - 10.11.5.160 - 10.2.70.215 routes: config: - destination: 0.0.0.0/0 next-hop-address: 10.19.17.254 next-hop-interface: bond0 14 table-id: 254 1 The networkConfig field has information about the network configuration of the host, with subfields including interfaces , dns-resolver , and routes . 2 The interfaces field is an array of network interfaces defined for the host. 3 The name of the interface. 4 The type of interface. This example creates an ethernet interface. 5 Set this to false to disable DHCP for the physical function (PF) if it is not strictly required. 6 Set to the number of SR-IOV virtual functions (VFs) to instantiate. 7 Set this to up . 8 Set this to false to disable IPv4 addressing for the VF attached to the bond. 9 Sets a minimum transmission rate, in Mbps, for the VF. This sample value sets a rate of 100 Mbps. This value must be less than or equal to the maximum transmission rate. Intel NICs do not support the min-tx-rate parameter. For more information, see BZ#1772847 . 10 Sets a maximum transmission rate, in Mbps, for the VF. This sample value sets a rate of 200 Mbps. 11 Sets the desired bond mode. 12 Sets the preferred port of the bonding interface. The bond uses the primary device as the first device of the bonding interfaces. The bond does not abandon the primary device interface unless it fails. This setting is particularly useful when one NIC in the bonding interface is faster and, therefore, able to handle a bigger load.
This setting is only valid when the bonding interface is in active-backup mode (mode 1) and balance-tlb (mode 5). 13 Sets a static IP address for the bond interface. This is the node IP address. 14 Sets bond0 as the gateway for the default route. Important After deploying the cluster, you cannot change the networkConfig configuration setting of the install-config.yaml file to make changes to the host network interface. Use the Kubernetes NMState Operator to make changes to the host network interface after deployment. Additional resources Configuring network bonding 3.12.17. Configuring multiple cluster nodes You can simultaneously configure OpenShift Container Platform cluster nodes with identical settings. Configuring multiple cluster nodes avoids adding redundant information for each node to the install-config.yaml file. This file contains specific parameters to apply an identical configuration to multiple nodes in the cluster. Compute nodes are configured separately from the controller node. However, configurations for both node types use the highlighted parameters in the install-config.yaml file to enable multi-node configuration. Set the networkConfig parameters to BOND , as shown in the following example: hosts: - name: ostest-master-0 [...] networkConfig: &BOND interfaces: - name: bond0 type: bond state: up ipv4: dhcp: true enabled: true link-aggregation: mode: active-backup port: - enp2s0 - enp3s0 - name: ostest-master-1 [...] networkConfig: *BOND - name: ostest-master-2 [...] networkConfig: *BOND Note Configuration of multiple cluster nodes is only available for initial deployments on installer-provisioned infrastructure. 3.12.18. Optional: Configuring managed Secure Boot You can enable managed Secure Boot when deploying an installer-provisioned cluster using Redfish BMC addressing, such as redfish , redfish-virtualmedia , or idrac-virtualmedia . To enable managed Secure Boot, add the bootMode configuration setting to each node: Example hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out_of_band_ip> 1 username: <username> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: "/dev/sda" bootMode: UEFISecureBoot 2 1 Ensure the bmc.address setting uses redfish , redfish-virtualmedia , or idrac-virtualmedia as the protocol. See "BMC addressing for HPE iLO" or "BMC addressing for Dell iDRAC" for additional details. 2 The bootMode setting is UEFI by default. Change it to UEFISecureBoot to enable managed Secure Boot. Note See "Configuring nodes" in the "Prerequisites" to ensure the nodes can support managed Secure Boot. If the nodes do not support managed Secure Boot, see "Configuring nodes for Secure Boot manually" in the "Configuring nodes" section. Configuring Secure Boot manually requires Redfish virtual media. Note Red Hat does not support Secure Boot with IPMI, because IPMI does not provide Secure Boot management facilities. 3.13. Manifest configuration files 3.13.1. Creating the OpenShift Container Platform manifests Create the OpenShift Container Platform manifests. USD ./openshift-baremetal-install --dir ~/clusterconfigs create manifests INFO Consuming Install Config from target directory WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings WARNING Discarding the OpenShift Manifest that was provided in the target directory because its dependencies are dirty and it needs to be regenerated 3.13.2. 
Optional: Configuring NTP for disconnected clusters OpenShift Container Platform installs the chrony Network Time Protocol (NTP) service on the cluster nodes. OpenShift Container Platform nodes must agree on a date and time to run properly. When compute nodes retrieve the date and time from the NTP servers on the control plane nodes, it enables the installation and operation of clusters that are not connected to a routable network and thereby do not have access to a higher stratum NTP server. Procedure Install Butane on your installation host by using the following command: USD sudo dnf -y install butane Create a Butane config, 99-master-chrony-conf-override.bu , including the contents of the chrony.conf file for the control plane nodes. Note See "Creating machine configs with Butane" for information about Butane. Butane config example variant: openshift version: 4.16.0 metadata: name: 99-master-chrony-conf-override labels: machineconfiguration.openshift.io/role: master storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # Use public servers from the pool.ntp.org project. # Please consider joining the pool (https://www.pool.ntp.org/join.html). # The Machine Config Operator manages this file server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony # Configure the control plane nodes to serve as local NTP servers # for all compute nodes, even if they are not in sync with an # upstream NTP server. # Allow NTP client access from the local network. allow all # Serve time even if not synchronized to a time source. local stratum 3 orphan 1 You must replace <cluster-name> with the name of the cluster and replace <domain> with the fully qualified domain name. Use Butane to generate a MachineConfig object file, 99-master-chrony-conf-override.yaml , containing the configuration to be delivered to the control plane nodes: USD butane 99-master-chrony-conf-override.bu -o 99-master-chrony-conf-override.yaml Create a Butane config, 99-worker-chrony-conf-override.bu , including the contents of the chrony.conf file for the compute nodes that references the NTP servers on the control plane nodes. Butane config example variant: openshift version: 4.16.0 metadata: name: 99-worker-chrony-conf-override labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # The Machine Config Operator manages this file. server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony 1 You must replace <cluster-name> with the name of the cluster and replace <domain> with the fully qualified domain name. Use Butane to generate a MachineConfig object file, 99-worker-chrony-conf-override.yaml , containing the configuration to be delivered to the worker nodes: USD butane 99-worker-chrony-conf-override.bu -o 99-worker-chrony-conf-override.yaml 3.13.3. 
Configuring network components to run on the control plane You can configure networking components to run exclusively on the control plane nodes. By default, OpenShift Container Platform allows any node in the machine config pool to host the ingressVIP virtual IP address. However, some environments deploy compute nodes in separate subnets from the control plane nodes, which requires configuring the ingressVIP virtual IP address to run on the control plane nodes. Important When deploying remote nodes in separate subnets, you must place the ingressVIP virtual IP address exclusively with the control plane nodes. Procedure Change to the directory storing the install-config.yaml file: USD cd ~/clusterconfigs Switch to the manifests subdirectory: USD cd manifests Create a file named cluster-network-avoid-workers-99-config.yaml : USD touch cluster-network-avoid-workers-99-config.yaml Open the cluster-network-avoid-workers-99-config.yaml file in an editor and enter a custom resource (CR) that describes the Operator configuration: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 50-worker-fix-ipi-rwn labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/kubernetes/manifests/keepalived.yaml mode: 0644 contents: source: data:, This manifest places the ingressVIP virtual IP address on the control plane nodes. Additionally, this manifest deploys the following processes on the control plane nodes only: openshift-ingress-operator keepalived Save the cluster-network-avoid-workers-99-config.yaml file. Create a manifests/cluster-ingress-default-ingresscontroller.yaml file: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/master: "" Consider backing up the manifests directory. The installer deletes the manifests/ directory when creating the cluster. Modify the cluster-scheduler-02-config.yml manifest to make the control plane nodes schedulable by setting the mastersSchedulable field to true . Control plane nodes are not schedulable by default. For example: Note If control plane nodes are not schedulable after completing this procedure, deploying the cluster will fail. 3.13.4. Optional: Deploying routers on compute nodes During installation, the installation program deploys router pods on compute nodes. By default, the installation program installs two router pods. If a deployed cluster requires additional routers to handle external traffic loads destined for services within the OpenShift Container Platform cluster, you can create a yaml file to set an appropriate number of router replicas. Important Deploying a cluster with only one compute node is not supported. While modifying the router replicas will address issues with the degraded state when deploying with one compute node, the cluster loses high availability for the ingress API, which is not suitable for production environments. Note By default, the installation program deploys two routers. If the cluster has no compute nodes, the installation program deploys the two routers on the control plane nodes by default. 
Procedure Create a router-replicas.yaml file: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: <num-of-router-pods> endpointPublishingStrategy: type: HostNetwork nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: "" Note Replace <num-of-router-pods> with an appropriate value. If working with just one compute node, set replicas: to 1 . If working with more than 3 compute nodes, you can increase replicas: from the default value 2 as appropriate. Save and copy the router-replicas.yaml file to the clusterconfigs/openshift directory: USD cp ~/router-replicas.yaml clusterconfigs/openshift/99_router-replicas.yaml 3.13.5. Optional: Configuring the BIOS The following procedure configures the BIOS during the installation process. Procedure Create the manifests. Modify the BareMetalHost resource file corresponding to the node: USD vim clusterconfigs/openshift/99_openshift-cluster-api_hosts-*.yaml Add the BIOS configuration to the spec section of the BareMetalHost resource: spec: firmware: simultaneousMultithreadingEnabled: true sriovEnabled: true virtualizationEnabled: true Note Red Hat supports three BIOS configurations. Only servers with BMC type irmc are supported. Other types of servers are currently not supported. Create the cluster. Additional resources Bare metal configuration 3.13.6. Optional: Configuring the RAID The following procedure configures a redundant array of independent disks (RAID) using baseboard management controllers (BMCs) during the installation process. Note If you want to configure a hardware RAID for the node, verify that the node has a supported RAID controller. OpenShift Container Platform 4.16 does not support software RAID. Table 3.8. Hardware RAID support by vendor Vendor BMC and protocol Firmware version RAID levels Fujitsu iRMC N/A 0, 1, 5, 6, and 10 Dell iDRAC with Redfish Version 6.10.30.20 or later 0, 1, and 5 Procedure Create the manifests. Modify the BareMetalHost resource corresponding to the node: USD vim clusterconfigs/openshift/99_openshift-cluster-api_hosts-*.yaml Note The following example uses a hardware RAID configuration because OpenShift Container Platform 4.16 does not support software RAID. If you added a specific RAID configuration to the spec section, this causes the node to delete the original RAID configuration in the preparing phase and perform a specified configuration on the RAID. For example: spec: raid: hardwareRAIDVolumes: - level: "0" 1 name: "sda" numberOfPhysicalDisks: 1 rotational: true sizeGibibytes: 0 1 level is a required field, and the others are optional fields. If you added an empty RAID configuration to the spec section, the empty configuration causes the node to delete the original RAID configuration during the preparing phase, but does not perform a new configuration. For example: spec: raid: hardwareRAIDVolumes: [] If you do not add a raid field in the spec section, the original RAID configuration is not deleted, and no new configuration will be performed. Create the cluster. 3.13.7. Optional: Configuring storage on nodes You can make changes to operating systems on OpenShift Container Platform nodes by creating MachineConfig objects that are managed by the Machine Config Operator (MCO). The MachineConfig specification includes an ignition config for configuring the machines at first boot. 
This config object can be used to modify files, systemd services, and other operating system features running on OpenShift Container Platform machines. Procedure Use the ignition config to configure storage on nodes. The following MachineSet manifest example demonstrates how to add a partition to a device on a primary node. In this example, apply the manifest before installation to have a partition named recovery with a size of 16 GiB on the primary node. Create a custom-partitions.yaml file and include a MachineConfig object that contains your partition layout: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: primary name: 10_primary_storage_config spec: config: ignition: version: 3.2.0 storage: disks: - device: </dev/xxyN> partitions: - label: recovery startMiB: 32768 sizeMiB: 16384 filesystems: - device: /dev/disk/by-partlabel/recovery label: recovery format: xfs Save and copy the custom-partitions.yaml file to the clusterconfigs/openshift directory: USD cp ~/<MachineConfig_manifest> ~/clusterconfigs/openshift Additional resources Bare metal configuration Partition naming scheme 3.14. Creating a disconnected registry In some cases, you might want to install an OpenShift Container Platform cluster using a local copy of the installation registry. This could be for enhancing network efficiency because the cluster nodes are on a network that does not have access to the internet. A local, or mirrored, copy of the registry requires the following: A certificate for the registry node. This can be a self-signed certificate. A web server that a container on a system will serve. An updated pull secret that contains the certificate and local repository information. Note Creating a disconnected registry on a registry node is optional. If you need to create a disconnected registry on a registry node, you must complete all of the following sub-sections. Prerequisites If you have already prepared a mirror registry for Mirroring images for a disconnected installation , you can skip directly to Modify the install-config.yaml file to use the disconnected registry . 3.14.1. Preparing the registry node to host the mirrored registry The following steps must be completed prior to hosting a mirrored registry on bare metal. Procedure Open the firewall port on the registry node: USD sudo firewall-cmd --add-port=5000/tcp --zone=libvirt --permanent USD sudo firewall-cmd --add-port=5000/tcp --zone=public --permanent USD sudo firewall-cmd --reload Install the required packages for the registry node: USD sudo yum -y install python3 podman httpd httpd-tools jq Create the directory structure where the repository information will be held: USD sudo mkdir -p /opt/registry/{auth,certs,data} 3.14.2. Mirroring the OpenShift Container Platform image repository for a disconnected registry Complete the following steps to mirror the OpenShift Container Platform image repository for a disconnected registry. Prerequisites Your mirror host has access to the internet. You configured a mirror registry to use in your restricted network and can access the certificate and credentials that you configured. You downloaded the pull secret from Red Hat OpenShift Cluster Manager and modified it to include authentication to your mirror repository. Procedure Review the OpenShift Container Platform downloads page to determine the version of OpenShift Container Platform that you want to install and determine the corresponding tag on the Repository Tags page. 
Set the required environment variables: Export the release version: USD OCP_RELEASE=<release_version> For <release_version> , specify the tag that corresponds to the version of OpenShift Container Platform to install, such as 4.5.4 . Export the local registry name and host port: USD LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>' For <local_registry_host_name> , specify the registry domain name for your mirror repository, and for <local_registry_host_port> , specify the port that it serves content on. Export the local repository name: USD LOCAL_REPOSITORY='<local_repository_name>' For <local_repository_name> , specify the name of the repository to create in your registry, such as ocp4/openshift4 . Export the name of the repository to mirror: USD PRODUCT_REPO='openshift-release-dev' For a production release, you must specify openshift-release-dev . Export the path to your registry pull secret: USD LOCAL_SECRET_JSON='<path_to_pull_secret>' For <path_to_pull_secret> , specify the absolute path to and file name of the pull secret for your mirror registry that you created. Export the release mirror: USD RELEASE_NAME="ocp-release" For a production release, you must specify ocp-release . Export the type of architecture for your cluster: USD ARCHITECTURE=<cluster_architecture> 1 1 Specify the architecture of the cluster, such as x86_64 , aarch64 , s390x , or ppc64le . Export the path to the directory to host the mirrored images: USD REMOVABLE_MEDIA_PATH=<path> 1 1 Specify the full path, including the initial forward slash (/) character. Mirror the version images to the mirror registry: If your mirror host does not have internet access, take the following actions: Connect the removable media to a system that is connected to the internet. Review the images and configuration manifests to mirror: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} \ --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} \ --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} \ --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run Record the entire imageContentSources section from the output of the command. The information about your mirrors is unique to your mirrored repository, and you must add the imageContentSources section to the install-config.yaml file during installation. Mirror the images to a directory on the removable media: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} Take the media to the restricted network environment and upload the images to the local container registry. USD oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror "file://openshift/release:USD{OCP_RELEASE}*" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1 1 For REMOVABLE_MEDIA_PATH , you must use the same path that you specified when you mirrored the images. 
If the local container registry is connected to the mirror host, take the following actions: Directly push the release images to the local registry by using following command: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} \ --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} \ --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} \ --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} This command pulls the release information as a digest, and its output includes the imageContentSources data that you require when you install your cluster. Record the entire imageContentSources section from the output of the command. The information about your mirrors is unique to your mirrored repository, and you must add the imageContentSources section to the install-config.yaml file during installation. Note The image name gets patched to Quay.io during the mirroring process, and the podman images will show Quay.io in the registry on the bootstrap virtual machine. To create the installation program that is based on the content that you mirrored, extract it and pin it to the release: If your mirror host does not have internet access, run the following command: USD oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-baremetal-install "USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}" If the local container registry is connected to the mirror host, run the following command: USD oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-baremetal-install "USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}" Important To ensure that you use the correct images for the version of OpenShift Container Platform that you selected, you must extract the installation program from the mirrored content. You must perform this step on a machine with an active internet connection. If you are in a disconnected environment, use the --image flag as part of must-gather and point to the payload image. For clusters using installer-provisioned infrastructure, run the following command: USD openshift-baremetal-install 3.14.3. Modify the install-config.yaml file to use the disconnected registry On the provisioner node, the install-config.yaml file should use the newly created pull-secret from the pull-secret-update.txt file. The install-config.yaml file must also contain the disconnected registry node's certificate and registry information. Procedure Add the disconnected registry node's certificate to the install-config.yaml file: USD echo "additionalTrustBundle: |" >> install-config.yaml The certificate should follow the "additionalTrustBundle: |" line and be properly indented, usually by two spaces. USD sed -e 's/^/ /' /opt/registry/certs/domain.crt >> install-config.yaml Add the mirror information for the registry to the install-config.yaml file: USD echo "imageContentSources:" >> install-config.yaml USD echo "- mirrors:" >> install-config.yaml USD echo " - registry.example.com:5000/ocp4/openshift4" >> install-config.yaml Replace registry.example.com with the registry's fully qualified domain name. USD echo " source: quay.io/openshift-release-dev/ocp-release" >> install-config.yaml USD echo "- mirrors:" >> install-config.yaml USD echo " - registry.example.com:5000/ocp4/openshift4" >> install-config.yaml Replace registry.example.com with the registry's fully qualified domain name. USD echo " source: quay.io/openshift-release-dev/ocp-v4.0-art-dev" >> install-config.yaml 3.15. 
Validation checklist for installation ❏ OpenShift Container Platform installer has been retrieved. ❏ OpenShift Container Platform installer has been extracted. ❏ Required parameters for the install-config.yaml have been configured. ❏ The hosts parameter for the install-config.yaml has been configured. ❏ The bmc parameter for the install-config.yaml has been configured. ❏ Conventions for the values configured in the bmc address field have been applied. ❏ Created the OpenShift Container Platform manifests. ❏ (Optional) Deployed routers on compute nodes. ❏ (Optional) Created a disconnected registry. ❏ (Optional) Validate disconnected registry settings if in use. | [
"useradd kni",
"passwd kni",
"echo \"kni ALL=(root) NOPASSWD:ALL\" | tee -a /etc/sudoers.d/kni",
"chmod 0440 /etc/sudoers.d/kni",
"su - kni -c \"ssh-keygen -t ed25519 -f /home/kni/.ssh/id_rsa -N ''\"",
"su - kni",
"sudo subscription-manager register --username=<user> --password=<pass> --auto-attach",
"sudo subscription-manager repos --enable=rhel-9-for-<architecture>-appstream-rpms --enable=rhel-9-for-<architecture>-baseos-rpms",
"sudo dnf install -y libvirt qemu-kvm mkisofs python3-devel jq ipmitool",
"sudo usermod --append --groups libvirt <user>",
"sudo systemctl start firewalld",
"sudo firewall-cmd --zone=public --add-service=http --permanent",
"sudo firewall-cmd --reload",
"sudo systemctl enable libvirtd --now",
"sudo virsh pool-define-as --name default --type dir --target /var/lib/libvirt/images",
"sudo virsh pool-start default",
"sudo virsh pool-autostart default",
"vim pull-secret.txt",
"chronyc sources",
"MS Name/IP address Stratum Poll Reach LastRx Last sample =============================================================================== ^+ time.cloudflare.com 3 10 377 187 -209us[ -209us] +/- 32ms ^+ t1.time.ir2.yahoo.com 2 10 377 185 -4382us[-4382us] +/- 23ms ^+ time.cloudflare.com 3 10 377 198 -996us[-1220us] +/- 33ms ^* brenbox.westnet.ie 1 10 377 193 -9538us[-9761us] +/- 24ms",
"ping time.cloudflare.com",
"PING time.cloudflare.com (162.159.200.123) 56(84) bytes of data. 64 bytes from time.cloudflare.com (162.159.200.123): icmp_seq=1 ttl=54 time=32.3 ms 64 bytes from time.cloudflare.com (162.159.200.123): icmp_seq=2 ttl=54 time=30.9 ms 64 bytes from time.cloudflare.com (162.159.200.123): icmp_seq=3 ttl=54 time=36.7 ms",
"export PUB_CONN=<baremetal_nic_name>",
"sudo nohup bash -c \" nmcli con down \\\"USDPUB_CONN\\\" nmcli con delete \\\"USDPUB_CONN\\\" # RHEL 8.1 appends the word \\\"System\\\" in front of the connection, delete in case it exists nmcli con down \\\"System USDPUB_CONN\\\" nmcli con delete \\\"System USDPUB_CONN\\\" nmcli connection add ifname baremetal type bridge <con_name> baremetal bridge.stp no 1 nmcli con add type bridge-slave ifname \\\"USDPUB_CONN\\\" master baremetal pkill dhclient;dhclient baremetal \"",
"sudo nohup bash -c \" nmcli con down \\\"USDPUB_CONN\\\" nmcli con delete \\\"USDPUB_CONN\\\" # RHEL 8.1 appends the word \\\"System\\\" in front of the connection, delete in case it exists nmcli con down \\\"System USDPUB_CONN\\\" nmcli con delete \\\"System USDPUB_CONN\\\" nmcli connection add ifname baremetal type bridge con-name baremetal bridge.stp no ipv4.method manual ipv4.addr \"x.x.x.x/yy\" ipv4.gateway \"a.a.a.a\" ipv4.dns \"b.b.b.b\" 1 nmcli con add type bridge-slave ifname \\\"USDPUB_CONN\\\" master baremetal nmcli con up baremetal \"",
"export PROV_CONN=<prov_nic_name>",
"sudo nohup bash -c \" nmcli con down \\\"USDPROV_CONN\\\" nmcli con delete \\\"USDPROV_CONN\\\" nmcli connection add ifname provisioning type bridge con-name provisioning nmcli con add type bridge-slave ifname \\\"USDPROV_CONN\\\" master provisioning nmcli connection modify provisioning ipv6.addresses fd00:1101::1/64 ipv6.method manual nmcli con down provisioning nmcli con up provisioning \"",
"nmcli connection modify provisioning ipv4.addresses 172.22.0.254/24 ipv4.method manual",
"ssh kni@provisioner.<cluster-name>.<domain>",
"sudo nmcli con show",
"NAME UUID TYPE DEVICE baremetal 4d5133a5-8351-4bb9-bfd4-3af264801530 bridge baremetal provisioning 43942805-017f-4d7d-a2c2-7cb3324482ed bridge provisioning virbr0 d9bca40f-eee1-410b-8879-a2d4bb0465e7 bridge virbr0 bridge-slave-eno1 76a8ed50-c7e5-4999-b4f6-6d9014dd0812 ethernet eno1 bridge-slave-eno2 f31c3353-54b7-48de-893a-02d2b34c4736 ethernet eno2",
"interfaces: - name: enp2s0 1 type: ethernet 2 state: up 3 ipv4: enabled: false 4 ipv6: enabled: false - name: br-ex type: ovs-bridge state: up ipv4: enabled: false dhcp: false ipv6: enabled: false dhcp: false bridge: port: - name: enp2s0 5 - name: br-ex - name: br-ex type: ovs-interface state: up copy-mac-from: enp2s0 ipv4: enabled: true dhcp: true ipv6: enabled: false dhcp: false",
"cat <nmstate_configuration>.yaml | base64 1",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 10-br-ex-worker 2 spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,<base64_encoded_nmstate_configuration> 3 mode: 0644 overwrite: true path: /etc/nmstate/openshift/cluster.yml",
"oc edit mc <machineconfig_custom_resource_name>",
"oc apply -f ./extraworker-secret.yaml",
"apiVersion: metal3.io/v1alpha1 kind: BareMetalHost spec: preprovisioningNetworkDataName: ostest-extraworker-0-network-config-secret",
"oc project openshift-machine-api",
"oc get machinesets",
"oc scale machineset <machineset_name> --replicas=<n> 1",
"sudo su -",
"nmcli dev status",
"nmcli connection modify <interface_name> +ipv4.routes \"192.168.0.0/24 via <gateway>\"",
"nmcli connection modify eth0 +ipv4.routes \"192.168.0.0/24 via 192.168.0.1\"",
"nmcli connection up <interface_name>",
"ip route",
"sudo su -",
"nmcli dev status",
"nmcli connection modify <interface_name> +ipv4.routes \"10.0.0.0/24 via <gateway>\"",
"nmcli connection modify eth0 +ipv4.routes \"10.0.0.0/24 via 10.0.0.1\"",
"nmcli connection up <interface_name>",
"ip route",
"ping <remote_node_ip_address>",
"ping <control_plane_node_ip_address>",
"export VERSION=stable-4.16",
"export RELEASE_ARCH=<architecture>",
"export RELEASE_IMAGE=USD(curl -s https://mirror.openshift.com/pub/openshift-v4/USDRELEASE_ARCH/clients/ocp/USDVERSION/release.txt | grep 'Pull From: quay.io' | awk -F ' ' '{print USD3}')",
"export cmd=openshift-baremetal-install",
"export pullsecret_file=~/pull-secret.txt",
"export extract_dir=USD(pwd)",
"curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDVERSION/openshift-client-linux.tar.gz | tar zxvf - oc",
"sudo cp oc /usr/local/bin",
"oc adm release extract --registry-config \"USD{pullsecret_file}\" --command=USDcmd --to \"USD{extract_dir}\" USD{RELEASE_IMAGE}",
"sudo cp openshift-baremetal-install /usr/local/bin",
"sudo dnf install -y podman",
"sudo firewall-cmd --add-port=8080/tcp --zone=public --permanent",
"sudo firewall-cmd --reload",
"mkdir /home/kni/rhcos_image_cache",
"sudo semanage fcontext -a -t httpd_sys_content_t \"/home/kni/rhcos_image_cache(/.*)?\"",
"sudo restorecon -Rv /home/kni/rhcos_image_cache/",
"export RHCOS_QEMU_URI=USD(/usr/local/bin/openshift-baremetal-install coreos print-stream-json | jq -r --arg ARCH \"USD(arch)\" '.architectures[USDARCH].artifacts.qemu.formats[\"qcow2.gz\"].disk.location')",
"export RHCOS_QEMU_NAME=USD{RHCOS_QEMU_URI##*/}",
"export RHCOS_QEMU_UNCOMPRESSED_SHA256=USD(/usr/local/bin/openshift-baremetal-install coreos print-stream-json | jq -r --arg ARCH \"USD(arch)\" '.architectures[USDARCH].artifacts.qemu.formats[\"qcow2.gz\"].disk[\"uncompressed-sha256\"]')",
"curl -L USD{RHCOS_QEMU_URI} -o /home/kni/rhcos_image_cache/USD{RHCOS_QEMU_NAME}",
"ls -Z /home/kni/rhcos_image_cache",
"podman run -d --name rhcos_image_cache \\ 1 -v /home/kni/rhcos_image_cache:/var/www/html -p 8080:8080/tcp registry.access.redhat.com/ubi9/httpd-24",
"export BAREMETAL_IP=USD(ip addr show dev baremetal | awk '/inet /{print USD2}' | cut -d\"/\" -f1)",
"export BOOTSTRAP_OS_IMAGE=\"http://USD{BAREMETAL_IP}:8080/USD{RHCOS_QEMU_NAME}?sha256=USD{RHCOS_QEMU_UNCOMPRESSED_SHA256}\"",
"echo \" bootstrapOSImage=USD{BOOTSTRAP_OS_IMAGE}\"",
"platform: baremetal: bootstrapOSImage: <bootstrap_os_image> 1",
"Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10",
"listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2",
"listen api-server-6443 bind *:6443 mode tcp server master-00 192.168.83.89:6443 check inter 1s server master-01 192.168.84.90:6443 check inter 1s server master-02 192.168.85.99:6443 check inter 1s server bootstrap 192.168.80.89:6443 check inter 1s listen machine-config-server-22623 bind *:22623 mode tcp server master-00 192.168.83.89:22623 check inter 1s server master-01 192.168.84.90:22623 check inter 1s server master-02 192.168.85.99:22623 check inter 1s server bootstrap 192.168.80.89:22623 check inter 1s listen ingress-router-80 bind *:80 mode tcp balance source server worker-00 192.168.83.100:80 check inter 1s server worker-01 192.168.83.101:80 check inter 1s listen ingress-router-443 bind *:443 mode tcp balance source server worker-00 192.168.83.100:443 check inter 1s server worker-01 192.168.83.101:443 check inter 1s listen ironic-api-6385 bind *:6385 mode tcp balance source server master-00 192.168.83.89:6385 check inter 1s server master-01 192.168.84.90:6385 check inter 1s server master-02 192.168.85.99:6385 check inter 1s server bootstrap 192.168.80.89:6385 check inter 1s listen inspector-api-5050 bind *:5050 mode tcp balance source server master-00 192.168.83.89:5050 check inter 1s server master-01 192.168.84.90:5050 check inter 1s server master-02 192.168.85.99:5050 check inter 1s server bootstrap 192.168.80.89:5050 check inter 1s",
"curl https://<loadbalancer_ip_address>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl -I -L -H \"Host: console-openshift-console.apps.<cluster_name>.<base_domain>\" http://<load_balancer_front_end_IP_address>",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache",
"curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain>",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"<load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"<load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"platform: baremetal: loadBalancer: type: UserManaged 1 apiVIPs: - <api_ip> 2 ingressVIPs: - <ingress_ip> 3",
"curl https://api.<cluster_name>.<base_domain>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"apiVersion: v1 baseDomain: <domain> metadata: name: <cluster_name> networking: machineNetwork: - cidr: <public_cidr> networkType: OVNKubernetes compute: - name: worker replicas: 2 1 controlPlane: name: master replicas: 3 platform: baremetal: {} platform: baremetal: apiVIPs: - <api_ip> ingressVIPs: - <wildcard_ip> provisioningNetworkCIDR: <CIDR> bootstrapExternalStaticIP: <bootstrap_static_ip_address> 2 bootstrapExternalStaticGateway: <bootstrap_static_gateway> 3 bootstrapExternalStaticDNS: <bootstrap_static_dns> 4 hosts: - name: openshift-master-0 role: master bmc: address: ipmi://<out_of_band_ip> 5 username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: \"<installation_disk_drive_path>\" 6 - name: <openshift_master_1> role: master bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: \"<installation_disk_drive_path>\" - name: <openshift_master_2> role: master bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: \"<installation_disk_drive_path>\" - name: <openshift_worker_0> role: worker bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> - name: <openshift_worker_1> role: worker bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: \"<installation_disk_drive_path>\" pullSecret: '<pull_secret>' sshKey: '<ssh_pub_key>'",
"ironic-inspector inspection failed: No disks satisfied root device hints",
"mkdir ~/clusterconfigs",
"cp install-config.yaml ~/clusterconfigs",
"ipmitool -I lanplus -U <user> -P <password> -H <management-server-ip> power off",
"for i in USD(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print USD2'}); do sudo virsh destroy USDi; sudo virsh undefine USDi; sudo virsh vol-delete USDi --pool USDi; sudo virsh vol-delete USDi.ign --pool USDi; sudo virsh pool-destroy USDi; sudo virsh pool-undefine USDi; done",
"metadata: name:",
"networking: machineNetwork: - cidr:",
"compute: - name: worker",
"compute: replicas: 2",
"controlPlane: name: master",
"controlPlane: replicas: 3",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: ipmi://<out-of-band-ip> username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True",
"export SERVER=<ip_address> 1",
"export SystemID=<system_id> 1",
"curl -u USDUSER:USDPASS -X POST -H'Content-Type: application/json' -H'Accept: application/json' -d '{\"ResetType\": \"On\"}' https://USDSERVER/redfish/v1/Systems/USDSystemID/Actions/ComputerSystem.Reset",
"curl -u USDUSER:USDPASS -X POST -H'Content-Type: application/json' -H'Accept: application/json' -d '{\"ResetType\": \"ForceOff\"}' https://USDSERVER/redfish/v1/Systems/USDSystemID/Actions/ComputerSystem.Reset",
"curl -u USDUSER:USDPASS -X PATCH -H \"Content-Type: application/json\" -H \"If-Match: <ETAG>\" https://USDServer/redfish/v1/Systems/USDSystemID/ -d '{\"Boot\": {\"BootSourceOverrideTarget\": \"pxe\", \"BootSourceOverrideEnabled\": \"Once\"}}",
"curl -u USDUSER:USDPASS -X PATCH -H \"Content-Type: application/json\" -H \"If-Match: <ETAG>\" https://USDServer/redfish/v1/Systems/USDSystemID/ -d '{\"Boot\": {\"BootSourceOverrideMode\":\"UEFI\"}}",
"curl -u USDUSER:USDPASS -X PATCH -H \"Content-Type: application/json\" -H \"If-Match: <ETAG>\" https://USDServer/redfish/v1/Systems/USDSystemID/ -d '{\"Boot\": {\"BootSourceOverrideTarget\": \"cd\", \"BootSourceOverrideEnabled\": \"Once\"}}'",
"curl -u USDUSER:USDPASS -X POST -H \"Content-Type: application/json\" https://USDServer/redfish/v1/Managers/USDManagerID/VirtualMedia/USDVmediaId -d '{\"Image\": \"https://example.com/test.iso\", \"TransferProtocolType\": \"HTTPS\", \"UserName\": \"\", \"Password\":\"\"}'",
"curl -u USDUSER:USDPASS -X PATCH -H \"Content-Type: application/json\" -H \"If-Match: <ETAG>\" https://USDServer/redfish/v1/Managers/USDManagerID/VirtualMedia/USDVmediaId -d '{\"Image\": \"https://example.com/test.iso\", \"TransferProtocolType\": \"HTTPS\", \"UserName\": \"\", \"Password\":\"\"}'",
"platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password> disableCertificateVerification: True",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password> disableCertificateVerification: True",
"platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True",
"platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: irmc://<out-of-band-ip> username: <user> password: <password>",
"platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<server_kvm_ip>/redfish/v1/Systems/<serial_number> username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<server_kvm_ip>/redfish/v1/Systems/<serial_number> username: <user> password: <password> disableCertificateVerification: True",
"- name: master-0 role: master bmc: address: ipmi://10.10.0.3:6203 username: admin password: redhat bootMACAddress: de:ad:be:ef:00:40 rootDeviceHints: deviceName: \"/dev/sda\"",
"apiVersion: v1 baseDomain: <domain> proxy: httpProxy: http://USERNAME:[email protected]:PORT httpsProxy: https://USERNAME:[email protected]:PORT noProxy: <WILDCARD_OF_DOMAIN>,<PROVISIONING_NETWORK/CIDR>,<BMC_ADDRESS_RANGE/CIDR>",
"noProxy: .example.com,172.22.0.0/24,10.10.0.0/24",
"platform: baremetal: apiVIPs: - <api_VIP> ingressVIPs: - <ingress_VIP> provisioningNetwork: \"Disabled\" 1",
"machineNetwork: - cidr: {{ extcidrnet }} - cidr: {{ extcidrnet6 }} clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd03::/112",
"networkConfig: nmstate: interfaces: - name: <interface_name> wait-ip: ipv4+ipv6",
"platform: baremetal: apiVIPs: - <api_ipv4> - <api_ipv6> ingressVIPs: - <wildcard_ipv4> - <wildcard_ipv6>",
"interfaces: - name: <nic1_name> 1 type: ethernet state: up ipv4: address: - ip: <ip_address> 2 prefix-length: 24 enabled: true dns-resolver: config: server: - <dns_ip_address> 3 routes: config: - destination: 0.0.0.0/0 next-hop-address: <next_hop_ip_address> 4 next-hop-interface: <next_hop_nic1_name> 5",
"nmstatectl gc <nmstate_yaml_file>",
"hosts: - name: openshift-master-0 role: master bmc: address: redfish+http://<out_of_band_ip>/redfish/v1/Systems/ username: <user> password: <password> disableCertificateVerification: null bootMACAddress: <NIC1_mac_address> bootMode: UEFI rootDeviceHints: deviceName: \"/dev/sda\" networkConfig: 1 interfaces: - name: <nic1_name> 2 type: ethernet state: up ipv4: address: - ip: <ip_address> 3 prefix-length: 24 enabled: true dns-resolver: config: server: - <dns_ip_address> 4 routes: config: - destination: 0.0.0.0/0 next-hop-address: <next_hop_ip_address> 5 next-hop-interface: <next_hop_nic1_name> 6",
"networking: machineNetwork: - cidr: 10.0.0.0/24 - cidr: 192.168.0.0/24 networkType: OVNKubernetes",
"networkConfig: interfaces: - name: <interface_name> 1 type: ethernet state: up ipv4: enabled: true dhcp: false address: - ip: <node_ip> 2 prefix-length: 24 gateway: <gateway_ip> 3 dns-resolver: config: server: - <dns_ip> 4",
"interfaces: - name: eth0 ipv6: addr-gen-mode: <address_mode> 1",
"nmstatectl gc <nmstate_yaml_file> 1",
"hosts: - name: openshift-master-0 role: master bmc: address: redfish+http://<out_of_band_ip>/redfish/v1/Systems/ username: <user> password: <password> disableCertificateVerification: null bootMACAddress: <NIC1_mac_address> bootMode: UEFI rootDeviceHints: deviceName: \"/dev/sda\" networkConfig: interfaces: - name: eth0 ipv6: addr-gen-mode: <address_mode> 1",
"hosts: - name: worker-0 role: worker bmc: address: redfish+http://<out_of_band_ip>/redfish/v1/Systems/ username: <user> password: <password> disableCertificateVerification: false bootMACAddress: <NIC1_mac_address> bootMode: UEFI networkConfig: 1 interfaces: 2 - name: eno1 3 type: ethernet 4 state: up mac-address: 0c:42:a1:55:f3:06 ipv4: enabled: true dhcp: false 5 ethernet: sr-iov: total-vfs: 2 6 ipv6: enabled: false dhcp: false - name: sriov:eno1:0 type: ethernet state: up 7 ipv4: enabled: false 8 ipv6: enabled: false - name: sriov:eno1:1 type: ethernet state: down - name: eno2 type: ethernet state: up mac-address: 0c:42:a1:55:f3:07 ipv4: enabled: true ethernet: sr-iov: total-vfs: 2 ipv6: enabled: false - name: sriov:eno2:0 type: ethernet state: up ipv4: enabled: false ipv6: enabled: false - name: sriov:eno2:1 type: ethernet state: down - name: bond0 type: bond state: up min-tx-rate: 100 9 max-tx-rate: 200 10 link-aggregation: mode: active-backup 11 options: primary: sriov:eno1:0 12 port: - sriov:eno1:0 - sriov:eno2:0 ipv4: address: - ip: 10.19.16.57 13 prefix-length: 23 dhcp: false enabled: true ipv6: enabled: false dns-resolver: config: server: - 10.11.5.160 - 10.2.70.215 routes: config: - destination: 0.0.0.0/0 next-hop-address: 10.19.17.254 next-hop-interface: bond0 14 table-id: 254",
"hosts: - name: ostest-master-0 [...] networkConfig: &BOND interfaces: - name: bond0 type: bond state: up ipv4: dhcp: true enabled: true link-aggregation: mode: active-backup port: - enp2s0 - enp3s0 - name: ostest-master-1 [...] networkConfig: *BOND - name: ostest-master-2 [...] networkConfig: *BOND",
"hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out_of_band_ip> 1 username: <username> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: \"/dev/sda\" bootMode: UEFISecureBoot 2",
"./openshift-baremetal-install --dir ~/clusterconfigs create manifests",
"INFO Consuming Install Config from target directory WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings WARNING Discarding the OpenShift Manifest that was provided in the target directory because its dependencies are dirty and it needs to be regenerated",
"sudo dnf -y install butane",
"variant: openshift version: 4.16.0 metadata: name: 99-master-chrony-conf-override labels: machineconfiguration.openshift.io/role: master storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # Use public servers from the pool.ntp.org project. # Please consider joining the pool (https://www.pool.ntp.org/join.html). # The Machine Config Operator manages this file server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony # Configure the control plane nodes to serve as local NTP servers # for all compute nodes, even if they are not in sync with an # upstream NTP server. # Allow NTP client access from the local network. allow all # Serve time even if not synchronized to a time source. local stratum 3 orphan",
"butane 99-master-chrony-conf-override.bu -o 99-master-chrony-conf-override.yaml",
"variant: openshift version: 4.16.0 metadata: name: 99-worker-chrony-conf-override labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # The Machine Config Operator manages this file. server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony",
"butane 99-worker-chrony-conf-override.bu -o 99-worker-chrony-conf-override.yaml",
"cd ~/clusterconfigs",
"cd manifests",
"touch cluster-network-avoid-workers-99-config.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 50-worker-fix-ipi-rwn labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/kubernetes/manifests/keepalived.yaml mode: 0644 contents: source: data:,",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/master: \"\"",
"sed -i \"s;mastersSchedulable: false;mastersSchedulable: true;g\" clusterconfigs/manifests/cluster-scheduler-02-config.yml",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: <num-of-router-pods> endpointPublishingStrategy: type: HostNetwork nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: \"\"",
"cp ~/router-replicas.yaml clusterconfigs/openshift/99_router-replicas.yaml",
"vim clusterconfigs/openshift/99_openshift-cluster-api_hosts-*.yaml",
"spec: firmware: simultaneousMultithreadingEnabled: true sriovEnabled: true virtualizationEnabled: true",
"vim clusterconfigs/openshift/99_openshift-cluster-api_hosts-*.yaml",
"spec: raid: hardwareRAIDVolumes: - level: \"0\" 1 name: \"sda\" numberOfPhysicalDisks: 1 rotational: true sizeGibibytes: 0",
"spec: raid: hardwareRAIDVolumes: []",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: primary name: 10_primary_storage_config spec: config: ignition: version: 3.2.0 storage: disks: - device: </dev/xxyN> partitions: - label: recovery startMiB: 32768 sizeMiB: 16384 filesystems: - device: /dev/disk/by-partlabel/recovery label: recovery format: xfs",
"cp ~/<MachineConfig_manifest> ~/clusterconfigs/openshift",
"sudo firewall-cmd --add-port=5000/tcp --zone=libvirt --permanent",
"sudo firewall-cmd --add-port=5000/tcp --zone=public --permanent",
"sudo firewall-cmd --reload",
"sudo yum -y install python3 podman httpd httpd-tools jq",
"sudo mkdir -p /opt/registry/{auth,certs,data}",
"OCP_RELEASE=<release_version>",
"LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>'",
"LOCAL_REPOSITORY='<local_repository_name>'",
"PRODUCT_REPO='openshift-release-dev'",
"LOCAL_SECRET_JSON='<path_to_pull_secret>'",
"RELEASE_NAME=\"ocp-release\"",
"ARCHITECTURE=<cluster_architecture> 1",
"REMOVABLE_MEDIA_PATH=<path> 1",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE}",
"oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror \"file://openshift/release:USD{OCP_RELEASE}*\" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}",
"oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-baremetal-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}\"",
"oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-baremetal-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}\"",
"openshift-baremetal-install",
"echo \"additionalTrustBundle: |\" >> install-config.yaml",
"sed -e 's/^/ /' /opt/registry/certs/domain.crt >> install-config.yaml",
"echo \"imageContentSources:\" >> install-config.yaml",
"echo \"- mirrors:\" >> install-config.yaml",
"echo \" - registry.example.com:5000/ocp4/openshift4\" >> install-config.yaml",
"echo \" source: quay.io/openshift-release-dev/ocp-release\" >> install-config.yaml",
"echo \"- mirrors:\" >> install-config.yaml",
"echo \" - registry.example.com:5000/ocp4/openshift4\" >> install-config.yaml",
"echo \" source: quay.io/openshift-release-dev/ocp-v4.0-art-dev\" >> install-config.yaml"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/deploying_installer-provisioned_clusters_on_bare_metal/ipi-install-installation-workflow |
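A worked example of the mirroring variables used in the commands above — a sketch with illustrative values (the registry host and repository match the registry.example.com:5000/ocp4/openshift4 mirror used in the install-config.yaml snippets; the release version and pull-secret path are assumptions, not values from this guide):

OCP_RELEASE=4.16.0                          # assumed release version
LOCAL_REGISTRY='registry.example.com:5000'  # mirror registry from the install-config.yaml example
LOCAL_REPOSITORY='ocp4/openshift4'          # mirror repository from the install-config.yaml example
PRODUCT_REPO='openshift-release-dev'
LOCAL_SECRET_JSON='/home/kni/pull-secret.json'   # assumed path to the downloaded pull secret
RELEASE_NAME="ocp-release"
ARCHITECTURE=x86_64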
Planning, Installation, and Deployment Guide (Common Criteria Edition) | Planning, Installation, and Deployment Guide (Common Criteria Edition) Red Hat Certificate System 10 Red Hat Certificate System 10.4 Common Criteria Edition Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide_common_criteria_edition/index |
2.2. Consistent Multipath Device Names in a Cluster | 2.2. Consistent Multipath Device Names in a Cluster When the user_friendly_names configuration option is set to yes , the name of the multipath device is unique to a node, but it is not guaranteed to be the same on all nodes using the multipath device. This should not cause any difficulties if you use LVM to create logical devices from the multipath device, but if you require that your multipath device names be consistent in every node in the cluster, perform one of the following procedures: Use the alias option in the multipaths section of the multipath configuration file to set the name of the multipath device. The alias for the multipath device is consistent across all the nodes in a cluster. For information on the multipaths section of the multipath configuration file, see Section 4.4, "Multipaths Device Configuration Attributes" . If you want the system-defined user-friendly names to be consistent across all nodes in the cluster, set up all of the multipath devices on one machine. Then copy the bindings file from that machine to all the other machines in the cluster. The bindings file is located at /var/lib/multipath/bindings by default, but as of RHEL 4.6 and later you can set this value to a different location with the bindings_file parameter of the defaults section of the configuration file. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/dm_multipath/multipath_consistent_names
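A minimal sketch of the alias approach described above — a multipaths section in /etc/multipath.conf, where the WWID and the alias name are illustrative placeholders rather than values from this guide:

multipaths {
    multipath {
        # WWID of the device to rename (placeholder value)
        wwid   3600508b4000156d70001200000b0000
        # Name that will be identical on every cluster node
        alias  yellow
    }
}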
10.3.4. Using Remaining Space | 10.3.4. Using Remaining Space You have a swap and a / (root) partition created, and you have selected the root partition to use the remaining space, but it does not fill the hard drive. If your hard drive is more than 1024 cylinders, you must create a /boot partition if you want the / (root) partition to use all of the remaining space on your hard drive. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s2-trouble-space-x86 |
Chapter 6. Customizing component deployment resources | Chapter 6. Customizing component deployment resources 6.1. Overview of component resource customization You can customize deployment resources that are related to the Red Hat OpenShift AI Operator, for example, CPU and memory limits and requests. For resource customizations to persist without being overwritten by the Operator, the opendatahub.io/managed: true annotation must not be present in the YAML file for the component deployment. This annotation is absent by default. The following table shows the deployment names for each component in the redhat-ods-applications namespace. Important Components denoted with (Technology Preview) in this table are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using Technology Preview features in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Component Deployment names CodeFlare codeflare-operator-manager KServe kserve-controller-manager odh-model-controller Ray kuberay-operator Kueue kueue-controller-manager Workbenches notebook-controller-deployment odh-notebook-controller-manager Dashboard rhods-dashboard Model serving modelmesh-controller odh-model-controller Model registry (Technology Preview) model-registry-operator-controller-manager Data science pipelines data-science-pipelines-operator-controller-manager Training Operator kubeflow-training-operator 6.2. Customizing component resources You can customize component deployment resources by updating the .spec.template.spec.containers.resources section of the YAML file for the component deployment. Prerequisites You have cluster administrator privileges for your OpenShift cluster. Procedure Log in to the OpenShift console as a cluster administrator. In the Administrator perspective, click Workloads > Deployments . From the Project drop-down list, select redhat-ods-applications . In the Name column, click the name of the deployment for the component that you want to customize resources for. Note For more information about the deployment names for each component, see Overview of component resource customization . On the Deployment details page that appears, click the YAML tab. Find the .spec.template.spec.containers.resources section. Update the value of the resource that you want to customize. For example, to update the memory limit to 500Mi, make the following change: Click Save . Click Reload . Verification Log in to OpenShift AI and verify that your resource changes apply. 6.3. Disabling component resource customization You can disable customization of component deployment resources, and restore default values, by adding the opendatahub.io/managed: true annotation to the YAML file for the component deployment. Important Manually removing or setting the opendatahub.io/managed: true annotation to false after manually adding it to the YAML file for a component deployment might cause unexpected cluster issues. To remove the annotation from a deployment, use the steps described in Re-enabling component resource customization . Prerequisites You have cluster administrator privileges for your OpenShift cluster. Procedure Log in to the OpenShift console as a cluster administrator. 
In the Administrator perspective, click Workloads > Deployments . From the Project drop-down list, select redhat-ods-applications . In the Name column, click the name of the deployment for the component to which you want to add the annotation. Note For more information about the deployment names for each component, see Overview of component resource customization . On the Deployment details page that appears, click the YAML tab. Find the metadata.annotations: section. Add the opendatahub.io/managed: true annotation. Click Save . Click Reload . Verification The opendatahub.io/managed: true annotation appears in the YAML file for the component deployment. 6.4. Re-enabling component resource customization You can re-enable customization of component deployment resources after manually disabling it. Important Manually removing or setting the opendatahub.io/managed: annotation to false after adding it to the YAML file for a component deployment might cause unexpected cluster issues. To remove the annotation from a deployment, use the following steps to delete the deployment. The controller pod for the deployment will automatically redeploy with the default settings. Prerequisites You have cluster administrator privileges for your OpenShift cluster. Procedure Log in to the OpenShift console as a cluster administrator. In the Administrator perspective, click Workloads > Deployments . From the Project drop-down list, select redhat-ods-applications . In the Name column, click the name of the deployment for the component for which you want to remove the annotation. Click the Options menu . Click Delete Deployment . Verification The controller pod for the deployment automatically redeploys with the default settings. | [
"containers: - resources: limits: cpu: '2' memory: 500Mi requests: cpu: '1' memory: 1Gi",
"metadata: annotations: opendatahub.io/managed: true"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/managing_openshift_ai/customizing-component-deployment-resources_resource-mgmt |
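The console steps in this chapter can also be approximated from the command line; a hedged sketch using oc (the deployment name and namespace come from the chapter above, the memory value is illustrative, and the change persists only while the opendatahub.io/managed: true annotation is absent):

# Customize the memory limit of the dashboard deployment
oc -n redhat-ods-applications set resources deployment/rhods-dashboard --limits=memory=500Mi

# Re-enable Operator-managed defaults: delete the deployment and let its controller pod redeploy
oc -n redhat-ods-applications delete deployment rhods-dashboard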
Workloads APIs | Workloads APIs OpenShift Container Platform 4.18 Reference guide for workloads APIs Red Hat OpenShift Documentation Team | [
"\"postCommit\": { \"script\": \"rake test --verbose\", }",
"The above is a convenient form which is equivalent to:",
"\"postCommit\": { \"command\": [\"/bin/sh\", \"-ic\"], \"args\": [\"rake test --verbose\"] }",
"\"postCommit\": { \"commit\": [\"rake\", \"test\", \"--verbose\"] }",
"Command overrides the image entrypoint in the exec form, as documented in Docker: https://docs.docker.com/engine/reference/builder/#entrypoint.",
"\"postCommit\": { \"args\": [\"rake\", \"test\", \"--verbose\"] }",
"This form is only useful if the image entrypoint can handle arguments.",
"\"postCommit\": { \"script\": \"rake test USD1\", \"args\": [\"--verbose\"] }",
"This form is useful if you need to pass arguments that would otherwise be hard to quote properly in the shell script. In the script, USD0 will be \"/bin/sh\" and USD1, USD2, etc, are the positional arguments from Args.",
"\"postCommit\": { \"command\": [\"rake\", \"test\"], \"args\": [\"--verbose\"] }",
"This form is equivalent to appending the arguments to the Command slice.",
"\"postCommit\": { \"script\": \"rake test --verbose\", }",
"The above is a convenient form which is equivalent to:",
"\"postCommit\": { \"command\": [\"/bin/sh\", \"-ic\"], \"args\": [\"rake test --verbose\"] }",
"\"postCommit\": { \"commit\": [\"rake\", \"test\", \"--verbose\"] }",
"Command overrides the image entrypoint in the exec form, as documented in Docker: https://docs.docker.com/engine/reference/builder/#entrypoint.",
"\"postCommit\": { \"args\": [\"rake\", \"test\", \"--verbose\"] }",
"This form is only useful if the image entrypoint can handle arguments.",
"\"postCommit\": { \"script\": \"rake test USD1\", \"args\": [\"--verbose\"] }",
"This form is useful if you need to pass arguments that would otherwise be hard to quote properly in the shell script. In the script, USD0 will be \"/bin/sh\" and USD1, USD2, etc, are the positional arguments from Args.",
"\"postCommit\": { \"command\": [\"rake\", \"test\"], \"args\": [\"--verbose\"] }",
"This form is equivalent to appending the arguments to the Command slice."
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html-single/workloads_apis/index |
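For orientation, the postCommit fragments above live under spec.postCommit of a BuildConfig; a minimal hedged sketch (the metadata name is illustrative, and the source, strategy, and output stanzas that a complete BuildConfig requires are omitted):

apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: example-build   # illustrative name
spec:
  # Run the hook in a temporary container after the build commits the image
  postCommit:
    script: "rake test --verbose"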
8.21. corosync | 8.21. corosync 8.21.1. RHBA-2013:1531 - corosync bug fix and enhancement update Updated corosync packages that fix several bugs and add two enhancements are now available for Red Hat Enterprise Linux 6. The corosync packages provide the Corosync Cluster Engine and C Application Programming Interfaces (APIs) for Red Hat Enterprise Linux cluster software. Bug Fixes BZ# 854216 When running corosync on a faulty network with the failed_to_recv configuration option set, corosync was very often terminated with a segmentation fault after a cluster node was marked as "failed to receive". This happened because an assert condition was met during a cluster node membership determination. To fix this problem, the underlying code has been modified to ignore the assert if it was triggered by nodes marked as "failed to receive". This is safe because a single node membership is always established in this situation. BZ# 877349 The corosync-notifyd service was not started right after installation because the default configuration of the corosync notifier did not exist. This fix adds the default configuration for this service in the /etc/sysconfig/corosync-notifyd file so that corosync-notifyd can now be started right after installation without any additional configuration. BZ# 880598 Due to a bug in the underlying code, the corosync API could read uninitialized memory, and thus return incorrect values when incrementing or decrementing value of certain objects in the configuration and statistics database. This update modifies the respective code to only read 16 bits of memory instead of 32 bits when returning the [u]int16 type values. The corosync API no longer read uninitialized memory and return correct values. BZ# 881729 Due to a rare race condition in the corosync logging system, corosync could terminate with a segmentation fault after an attempt to dereference a NULL pointer. A pthread mutex lock has been added to a respective formatting variable so that the race condition between log-formatting and log-printing functions is now avoided. BZ# 906432 Previously, corosync did not support IPv6 double colon notation and did not handle correctly closing braces when parsing the corosync.conf file. As a consequence, the totem service failed to start when using IPv6. If the configuration file contained additional closing braces, no error was displayed to inform users why was the configuration file not parsed successfully. This update fixes these parsing bugs so the totem service can now be successfully started, and an error message is displayed if the corosync.conf file contains additional closing braces. BZ# 907894 Due to multiple bugs in the corosync code, either duplicate or no messages were delivered to applications if the corosync service was terminated on multiple cluster nodes. This update applies a series of patches correcting these bugs so that corosync no longer loses or duplicates messages in this scenario. BZ#915490 The corosync-fplay utility could terminate with a segmentation fault or result in unpredictable behavior if the corosync fdata file became corrupted. With this update, corosync-fplay has been modified to detect loops in code and properly validate fdata files. To avoid another cause of fdata corruption, corosync now also prohibits its child processes from logging. As a result of these changes, corosync no longer crashes or becomes unresponsive in this situation. 
BZ# 915769 If a service section in the corosync.conf file did not contain a service name, corosync either terminated with a segmentation fault or refused to start an unknown service. With this update, corosync now properly verifies the name key and if no service name is found, returns an error message and exits gracefully. BZ# 916227 The corosync service did not correctly handle a situation when it received an exit request (the SIGINT signal) before the service initialization was complete. As a consequence, corosync became unresponsive and ignored all signals, except for SIGKILL. This update adds a semaphore to ensure that corosync exits gracefully in this situation. BZ# 922671 When running applications that used the Corosync inter-process communication (IPC) library, some messages in the dispatch() function were lost or duplicated. With this update, corosync properly verifies return values of the dispatch_put() function, returns the correct remaining bytes in the IPC ring buffer, and ensures that the IPC client is correctly informed about the real number of messages in the ring buffer. Messages in the dispatch() function are no longer lost or duplicated. BZ# 924261 Sometimes, when an attempt to shut down the corosync service using the "corosync-cfgtool -H" command failed and returned the CS_ERR_TRY_AGAIN error code, subsequent shutdown attempts always failed with the CS_ERR_EXISTS error. The corosync-cfgtool utility has been modified to automatically retry the shutdown command, and the Corosync's Cfg library now allows processing of multiple subsequent shutdown calls. The "corosync-cfgtool -H" command now works as expected even on heavily loaded cluster nodes. BZ# 947936 If the uidgid section of the corosync.conf file contained a non-existing user or group, corosync did not display any error. The underlying code has been modified so that corosync now properly verifies values returned by the getpwnam_r system call, and displays an appropriate error message in this situation. BZ# 959184 If an IPC client exited in a specific time frame of the connection handshake, the corosync main process received the SIGPIPE signal and terminated. With this update, the SIGPIPE signal is now correctly handled by the sendto() function and the corosync main process no longer terminates in this situation. BZ# 959189 The corosync process could become unresponsive upon exit, by sending the SIGINT signal or using the corosync-cfgtool utility, if it had open a large number of confdb IPC connections. This update modifies the corosync code to ensure that all IPC connection to the configuration and statistics database are closed upon corosync exit so that corosync exits as expected. Enhancements BZ# 949491 The corosync daemon now detects when the corosync main process was not scheduled for a long time and sends a relevant message to the system log. BZ#956739 In order to improve process of problem detection, output of the corosync-blackbox command now contains time stamps of events. This feature is backward-compatible so that output (fdata) from old versions of corosync is processed correctly. Users of corosync are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/corosync |
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Use the Create Issue form in Red Hat Jira to provide your feedback. The Jira issue is created in the Red Hat Satellite Jira project, where you can track its progress. Prerequisites Ensure you have registered a Red Hat account . Procedure Click the following link: Create Issue . If Jira displays a login error, log in and proceed after you are redirected to the form. Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create . | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/monitoring_satellite_performance/providing-feedback-on-red-hat-documentation_monitoring |
9.4. Hosts and Networking | 9.4. Hosts and Networking 9.4.1. Refreshing Host Capabilities When a network interface card is added to a host, the capabilities of the host must be refreshed to display that network interface card in the Manager. Refreshing Host Capabilities Click Compute Hosts and select a host. Click Management Refresh Capabilities . The list of network interface cards in the Network Interfaces tab for the selected host is updated. Any new network interface cards can now be used in the Manager. 9.4.2. Editing Host Network Interfaces and Assigning Logical Networks to Hosts You can change the settings of physical host network interfaces, move the management network from one physical host network interface to another, and assign logical networks to physical host network interfaces. Bridge and ethtool custom properties are also supported. Warning The only way to change the IP address of a host in Red Hat Virtualization is to remove the host and then to add it again. To change the VLAN settings of a host, see Section 9.4.4, "Editing a Host's VLAN Settings" . Important You cannot assign logical networks offered by external providers to physical host network interfaces; such networks are dynamically assigned to hosts as they are required by virtual machines. Note If the switch has been configured to provide Link Layer Discovery Protocol (LLDP) information, you can hover your cursor over a physical network interface to view the switch port's current configuration. This can help to prevent incorrect configuration. Red Hat recommends checking the following information prior to assigning logical networks: Port Description (TLV type 4) and System Name (TLV type 5) help to detect to which ports and on which switch the host's interfaces are patched. Port VLAN ID shows the native VLAN ID configured on the switch port for untagged ethernet frames. All VLANs configured on the switch port are shown as VLAN Name and VLAN ID combinations. Editing Host Network Interfaces and Assigning Logical Networks to Hosts Click Compute Hosts . Click the host's name to open the details view. Click the Network Interfaces tab. Click Setup Host Networks . Optionally, hover your cursor over host network interface to view configuration information provided by the switch. Attach a logical network to a physical host network interface by selecting and dragging the logical network into the Assigned Logical Networks area to the physical host network interface. Note If a NIC is connected to more than one logical network, only one of the networks can be non-VLAN. All the other logical networks must be unique VLANs. Configure the logical network: Hover your cursor over an assigned logical network and click the pencil icon to open the Edit Management Network window. From the IPv4 tab, select a Boot Protocol from None , DHCP , or Static . If you selected Static , enter the IP , Netmask / Routing Prefix , and the Gateway . Note For IPv6, only static IPv6 addressing is supported. To configure the logical network, select the IPv6 tab and make the following entries: Set Boot Protocol to Static . For Routing Prefix , enter the length of the prefix using a forward slash and decimals. For example: /48 IP : The complete IPv6 address of the host network interface. For example: 2001:db8::1:0:0:6 Gateway : The source router's IPv6 address. For example: 2001:db8::1:0:0:1 Note If you change the host's management network IP address, you must reinstall the host for the new IP address to be configured. 
Each logical network can have a separate gateway defined from the management network gateway. This ensures traffic that arrives on the logical network will be forwarded using the logical network's gateway instead of the default gateway used by the management network. Important Set all hosts in a cluster to use the same IP stack for their management network; either IPv4 or IPv6 only. Dual stack is not supported. Use the QoS tab to override the default host network quality of service. Select Override QoS and enter the desired values in the following fields: Weighted Share : Signifies how much of the logical link's capacity a specific network should be allocated, relative to the other networks attached to the same logical link. The exact share depends on the sum of shares of all networks on that link. By default this is a number in the range 1-100. Rate Limit [Mbps] : The maximum bandwidth to be used by a network. Committed Rate [Mbps] : The minimum bandwidth required by a network. The Committed Rate requested is not guaranteed and will vary depending on the network infrastructure and the Committed Rate requested by other networks on the same logical link. To configure a network bridge, click the Custom Properties tab and select bridge_opts from the drop-down list. Enter a valid key and value with the following syntax: key = value . Separate multiple entries with a whitespace character. The following keys are valid, with the values provided as examples. For more information on these parameters, see Section B.1, "Explanation of bridge_opts Parameters" . To configure ethernet properties, click the Custom Properties tab and select ethtool_opts from the drop-down list. Enter a valid value using the format of the command-line arguments of ethtool. For example: This field can accept wildcards. For example, to apply the same option to all of this network's interfaces, use: The ethtool_opts option is not available by default; you need to add it using the engine configuration tool. See Section B.2, "How to Set Up Red Hat Virtualization Manager to Use Ethtool" for more information. For more information on ethtool properties, see the manual page by typing man ethtool in the command line. To configure Fibre Channel over Ethernet (FCoE), click the Custom Properties tab and select fcoe from the drop-down list. Enter a valid key and value with the following syntax: key = value . At least enable=yes is required. You can also add dcb= and auto_vlan= [yes|no] . Separate multiple entries with a whitespace character. The fcoe option is not available by default; you need to add it using the engine configuration tool. See Section B.3, "How to Set Up Red Hat Virtualization Manager to Use FCoE" for more information. Note A separate, dedicated logical network is recommended for use with FCoE. To change the default network used by the host from the management network (ovirtmgmt) to a non-management network, configure the non-management network's default route. See Section 9.1.5, "Configuring a Non-Management Logical Network as the Default Route" for more information. If your logical network definition is not synchronized with the network configuration on the host, select the Sync network check box. For more information about unsynchronized hosts and how to synchronize them, see Section 9.4.3, "Synchronizing Host Networks" . Select the Verify connectivity between Host and Engine check box to check network connectivity. This action only works if the host is in maintenance mode. Click OK . 
Note If not all network interface cards for the host are displayed, click Management Refresh Capabilities to update the list of network interface cards available for that host. 9.4.3. Synchronizing Host Networks The Manager defines a network interface as out-of-sync when the definition of the interface on the host differs from the definitions stored by the Manager. Out-of-sync networks appear with an Out-of-sync icon in the host's Network Interfaces tab and with this icon in the Setup Host Networks window. When a host's network is out of sync, the only activities that you can perform on the unsynchronized network in the Setup Host Networks window are detaching the logical network from the network interface or synchronizing the network. Understanding How a Host Becomes out-of-sync A host will become out of sync if: You make configuration changes on the host rather than using the the Edit Logical Networks window, for example: Changing the VLAN identifier on the physical host. Changing the Custom MTU on the physical host. You move a host to a different data center with the same network name, but with different values/parameters. You change a network's VM Network property by manually removing the bridge from the host. Preventing Hosts from Becoming Unsynchronized Following these best practices will prevent your host from becoming unsynchronized: Use the Administration Portal to make changes rather than making changes locally on the host. Edit VLAN settings according to the instructions in Section 9.4.4, "Editing a Host's VLAN Settings" . Synchronizing Hosts Synchronizing a host's network interface definitions involves using the definitions from the Manager and applying them to the host. If these are not the definitions that you require, after synchronizing your hosts update their definitions from the Administration Portal. You can synchronize a host's networks on three levels: Per logical network Per host Per cluster Synchronizing Host Networks on the Logical Network Level Click Compute Hosts . Click the host's name to open the details view. Click the Network Interfaces tab. Click Setup Host Networks . Hover your cursor over the unsynchronized network and click the pencil icon to open the Edit Network window. Select the Sync network check box. Click OK to save the network change. Click OK to close the Setup Host Networks window. Synchronizing a Host's Networks on the Host level Click the Sync All Networks button in the host's Network Interfaces tab to synchronize all of the host's unsynchronized network interfaces. Synchronizing a Host's Networks on the Cluster level Click the Sync All Networks button in the cluster's Logical Networks tab to synchronize all unsynchronized logical network definitions for the entire cluster. Note You can also synchronize a host's networks via the REST API. See syncallnetworks in the REST API Guide . 9.4.4. Editing a Host's VLAN Settings To change the VLAN settings of a host, the host must be removed from the Manager, reconfigured, and re-added to the Manager. To keep networking synchronized, do the following: Put the host in maintenance mode. Manually remove the management network from the host. This will make the host reachable over the new VLAN. Add the host to the cluster. Virtual machines that are not connected directly to the management network can be migrated between hosts safely. 
The following warning message appears when the VLAN ID of the management network is changed: Proceeding causes all of the hosts in the data center to lose connectivity to the Manager and causes the migration of hosts to the new management network to fail. The management network will be reported as "out-of-sync". Important If you change the management network's VLAN ID, you must reinstall the host to apply the new VLAN ID. 9.4.5. Adding Multiple VLANs to a Single Network Interface Using Logical Networks Multiple VLANs can be added to a single network interface to separate traffic on the one host. Important You must have created more than one logical network, all with the Enable VLAN tagging check box selected in the New Logical Network or Edit Logical Network windows. Adding Multiple VLANs to a Network Interface using Logical Networks Click Compute Hosts . Click the host's name to open the details view. Click the Network Interfaces tab. Click Setup Host Networks . Drag your VLAN-tagged logical networks into the Assigned Logical Networks area to the physical network interface. The physical network interface can have multiple logical networks assigned due to the VLAN tagging. Edit the logical networks: Hover your cursor over an assigned logical network and click the pencil icon. If your logical network definition is not synchronized with the network configuration on the host, select the Sync network check box. Select a Boot Protocol : None DHCP Static Provide the IP and Subnet Mask . Click OK . Select the Verify connectivity between Host and Engine check box to run a network check; this will only work if the host is in maintenance mode. Click OK . Add the logical network to each host in the cluster by editing a NIC on each host in the cluster. After this is done, the network will become operational. This process can be repeated multiple times, selecting and editing the same network interface each time on each host to add logical networks with different VLAN tags to a single network interface. 9.4.6. Assigning Additional IPv4 Addresses to a Host Network A host network, such as the ovirtmgmt management network, is created with only one IP address when initially set up. This means that if a NIC's configuration file (for example, /etc/sysconfig/network-scripts/ifcfg-eth01 ) is configured with multiple IP addresses, only the first listed IP address will be assigned to the host network. Additional IP addresses may be required if connecting to storage, or to a server on a separate private subnet using the same NIC. The vdsm-hook-extra-ipv4-addrs hook allows you to configure additional IPv4 addresses for host networks. For more information about hooks, see Appendix A, VDSM and Hooks . In the following procedure, the host-specific tasks must be performed on each host for which you want to configure additional IP addresses. Assigning Additional IPv4 Addresses to a Host Network On the host that you want to configure additional IPv4 addresses for, install the VDSM hook package. The package is available by default on Red Hat Virtualization Hosts but needs to be installed on Red Hat Enterprise Linux hosts. On the Manager, run the following command to add the key: Restart the ovirt-engine service: In the Administration Portal, click Compute Hosts . Click the host's name to open the details view. Click the Network Interfaces tab and click Setup Host Networks . Edit the host network interface by hovering the cursor over the assigned logical network and clicking the pencil icon. 
Select ipv4_addr from the Custom Properties drop-down list and add the additional IP address and prefix (for example 5.5.5.5/24). Multiple IP addresses must be comma-separated. Click OK to close the Edit Network window. Click OK to close the Setup Host Networks window. The additional IP addresses will not be displayed in the Manager, but you can run the command ip addr show on the host to confirm that they have been added. 9.4.7. Adding Network Labels to Host Network Interfaces Using network labels allows you to greatly simplify the administrative workload associated with assigning logical networks to host network interfaces. Setting a label on a role network (for instance, a migration network or a display network) causes a mass deployment of that network on all hosts. Such mass additions of networks are achieved through the use of DHCP. This method of mass deployment was chosen over a method of typing in static addresses, because of the unscalable nature of the task of typing in many static IP addresses. There are two methods of adding labels to a host network interface: Manually, in the Administration Portal Automatically, with the LLDP Labeler service Adding Network Labels in the Administration Portal Click Compute Hosts . Click the host's name to open the details view. Click the Network Interfaces tab. Click Setup Host Networks . Click Labels and right-click [New Label] . Select a physical network interface to label. Enter a name for the network label in the Label text field. Click OK . Adding Network Labels with the LLDP Labeler Service You can automate the process of assigning labels to host network interfaces in the configured list of clusters with the LLDP Labeler service. By default, LLDP Labeler runs as an hourly service. This option is useful if you make hardware changes (for example, NICs, switches, or cables) or change switch configurations. Prerequisites The interfaces must be connected to a Juniper switch. The Juniper switch must be configured to provide the Port VLAN using LLDP. Procedure Configure the username and password in /etc/ovirt-lldp-labeler/conf.d/ovirt-lldp-credentials.conf : username - the username of the Manager administrator. The default is admin@internal . password - the password of the Manager administrator. The default is 123456 . Configure the LLDP Labeler service by updating the following values in etc/ovirt-lldp-labeler/conf.d/ovirt-lldp-credentials.conf : clusters - a comma-separated list of clusters on which the service should run. Wildcards are supported. For example, Cluster* defines LLDP Labeler to run on all clusters starting with word Cluster . To run the service on all clusters in the data center, type * . The default is Def* . api_url - the full URL of the Manager's API. The default is https:// Manager_FQDN /ovirt-engine/api ca_file - the path to the custom CA certificate file. Leave this value empty if you do not use custom certificates. The default is empty. auto_bonding - enables LLDP Labeler's bonding capabilities. The default is true . auto_labeling - enables LLDP Labeler's labeling capabilities. The default is true . Optionally, you can configure the service to run at a different time interval by changing the value of OnUnitActiveSec in etc/ovirt-lldp-labeler/conf.d/ovirt-lldp-labeler.timer . The default is 1h . Configure the service to start now and at boot by entering the following command: To invoke the service manually, enter the following command: You have added a network label to a host network interface. 
Newly created logical networks with the same label are automatically assigned to all host network interfaces with that label. Removing a label from a logical network automatically removes that logical network from all host network interfaces with that label. 9.4.8. Changing the FQDN of a Host Use the following procedure to change the fully qualified domain name of hosts. Updating the FQDN of a Host Place the host into maintenance mode so the virtual machines are live migrated to another host. See Section 10.5.15, "Moving a Host to Maintenance Mode" for more information. Alternatively, manually shut down or migrate all the virtual machines to another host. See Manually Migrating Virtual Machines in the Virtual Machine Management Guide for more information. Click Remove , and click OK to remove the host from the Administration Portal. Use the hostnamectl tool to update the host name. For more options, see Configure Host Names in the Red Hat Enterprise Linux 7 Networking Guide . Reboot the host. Re-register the host with the Manager. See Section 10.5.1, "Adding Standard Hosts to the Red Hat Virtualization Manager" for more information. 9.4.9. IPv6 Networking Support Red Hat Virtualization supports static IPv6 networking in most contexts. Note Red Hat Virtualization requires IPv6 to remain enabled on the computer or virtual machine where you are running the Manager (also called "the Manager machine"). Do not disable IPv6 on the Manager machine, even if your systems do not use it. Limitations for IPv6 Only static IPv6 addressing is supported. Dynamic IPv6 addressing with DHCP or Stateless Address Autoconfiguration are not supported. Dual-stack addressing, IPv4 and IPv6, is not supported. OVN networking can be used with only IPv4 or IPv6. Switching clusters from IPv4 to IPv6 is not supported. Only a single gateway per host can be set for IPv6. If both networks share a single gateway (are on the same subnet), you can move the default route role from the management network (ovirtmgmt) to another logical network. The host and Manager should have the same IPv6 gateway. If the host and Manager are not on the same subnet, the Manager might lose connectivity with the host because the IPv6 gateway was removed. Using a glusterfs storage domain with an IPv6-addressed gluster server is not supported. 9.4.10. Setting Up and Configuring SR-IOV This topic summarizes the steps for setting up and configuring SR-IOV, with links out to topics that cover each step in detail. 9.4.10.1. Prerequisites Set up your hardware in accordance with the Hardware Considerations for Implementing SR-IOV 9.4.10.2. Set Up and Configure SR-IOV To set up and configure SR-IOV, complete the following tasks. Configuring the host for PCI passthrough Editing the virtual function configuration on a NIC . Enabling passthrough on a vNIC Profile . Configuring Virtual Machines with SR-IOV-Enabled vNICs to Reduce Network Outage during Migration . Notes The number of the 'passthrough' vNICs depends on the number of available virtual functions (VFs) on the host. For example, to run a virtual machine (VM) with three SR-IOV cards (vNICs), the host must have three or more VFs enabled. Hotplug and unplug are supported. Live migration is supported from RHV version 4.1 onward. To migrate a VM, the destination host must also have enough available VFs to receive the VM. During the migration, the VM releases a number of VFs on the source host and occupies the same number of VFs on the destination host. 
On the host, you will see a device, link, or iface like any other interface. That device disappears when it is attached to a VM, and reappears when it is released. Avoid attaching a host device directly to a VM for the SR-IOV feature. To use a VF as a trunk port with several VLANs and configure the VLANs within the Guest, please see Cannot configure VLAN on SR-IOV VF interfaces inside the Virtual Machine . Here is an example of what the libvirt XML for the interface would look like: ---- <interface type='hostdev'> <mac address='00:1a:yy:xx:vv:xx'/> <driver name='vfio'/> <source> <address type='pci' domain='0x0000' bus='0x05' slot='0x10' function='0x0'/> </source> <alias name='ua-18400536-5688-4477-8471-be720e9efc68'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/> </interface> ---- Troubleshooting The following example shows you how to get diagnostic information about the VFs attached to an interface. 9.4.10.3. Additional Resources How to configure SR-IOV passthrough for RHV VM? How to configure bonding with SR-IOV VF (Virtual Function) in RHV How to enable host device passthrough and SR-IOV to allow assigning dedicated virtual NICs to virtual machines in RHV | [
"forward_delay=1500 gc_timer=3765 group_addr=1:80:c2:0:0:0 group_fwd_mask=0x0 hash_elasticity=4 hash_max=512 hello_time=200 hello_timer=70 max_age=2000 multicast_last_member_count=2 multicast_last_member_interval=100 multicast_membership_interval=26000 multicast_querier=0 multicast_querier_interval=25500 multicast_query_interval=13000 multicast_query_response_interval=1000 multicast_query_use_ifaddr=0 multicast_router=1 multicast_snooping=1 multicast_startup_query_count=2 multicast_startup_query_interval=3125",
"--coalesce em1 rx-usecs 14 sample-interval 3 --offload em2 rx on lro on tso off --change em1 speed 1000 duplex half",
"--coalesce * rx-usecs 14 sample-interval 3",
"Changing certain properties (e.g. VLAN, MTU) of the management network could lead to loss of connectivity to hosts in the data center, if its underlying network infrastructure isn't configured to accommodate the changes. Are you sure you want to proceed?",
"yum install vdsm-hook-extra-ipv4-addrs",
"engine-config -s 'UserDefinedNetworkCustomProperties=ipv4_addrs=.*'",
"systemctl restart ovirt-engine.service",
"systemctl enable --now ovirt-lldp-labeler",
"/usr/bin/python /usr/share/ovirt-lldp-labeler/ovirt_lldp_labeler_cli.py",
"hostnamectl set-hostname NEW_FQDN",
"---- <interface type='hostdev'> <mac address='00:1a:yy:xx:vv:xx'/> <driver name='vfio'/> <source> <address type='pci' domain='0x0000' bus='0x05' slot='0x10' function='0x0'/> </source> <alias name='ua-18400536-5688-4477-8471-be720e9efc68'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/> </interface> ----",
"ip -s link show dev enp5s0f0 1: enp5s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP mode DEFAULT qlen 1000 link/ether 86:e2:ba:c2:50:f0 brd ff:ff:ff:ff:ff:ff RX: bytes packets errors dropped overrun mcast 30931671 218401 0 0 0 19165434 TX: bytes packets errors dropped carrier collsns 997136 13661 0 0 0 0 vf 0 MAC 02:00:00:00:00:01, spoof checking on, link-state auto, trust off, query_rss off vf 1 MAC 00:1a:4b:16:01:5e, spoof checking on, link-state auto, trust off, query_rss off vf 2 MAC 02:00:00:00:00:01, spoof checking on, link-state auto, trust off, query_rss off"
]
| https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/sect-hosts_and_networking |
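To illustrate the custom properties that sections 9.4.2 and 9.4.6 above describe only in prose, hedged example values (all addresses and settings are illustrative, not taken from this guide): the fcoe property takes whitespace-separated key=value pairs, for example

enable=yes dcb=no auto_vlan=yes

and the ipv4_addrs property takes comma-separated address/prefix entries, for example

5.5.5.5/24,10.20.30.40/28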
Chapter 5. Enabling observability for Red Hat Developer Hub on OpenShift Container Platform | Chapter 5. Enabling observability for Red Hat Developer Hub on OpenShift Container Platform In OpenShift Container Platform, metrics are exposed through an HTTP service endpoint under the /metrics canonical name. You can create a ServiceMonitor custom resource (CR) to scrape metrics from a service endpoint in a user-defined project. 5.1. Enabling metrics monitoring in a Helm chart installation on an OpenShift Container Platform cluster You can enable and view metrics for a Red Hat Developer Hub Helm deployment from the Developer perspective of the OpenShift Container Platform web console. Prerequisites Your OpenShift Container Platform cluster has monitoring for user-defined projects enabled. You have installed Red Hat Developer Hub on OpenShift Container Platform using the Helm chart. Procedure From the Developer perspective in the OpenShift Container Platform web console, select the Topology view. Click the overflow menu of the Red Hat Developer Hub Helm chart, and select Upgrade . On the Upgrade Helm Release page, select the YAML view option in Configure via , then configure the metrics section in the YAML, as shown in the following example: upstream: # ... metrics: serviceMonitor: enabled: true path: /metrics # ... Click Upgrade . Verification From the Developer perspective in the OpenShift Container Platform web console, select the Observe view. Click the Metrics tab to view metrics for Red Hat Developer Hub pods. 5.2. Enabling metrics monitoring in a Red Hat Developer Hub Operator installation on an OpenShift Container Platform cluster You can enable and view metrics for an Operator-installed Red Hat Developer Hub instance from the Developer perspective of the OpenShift Container Platform web console. Prerequisites Your OpenShift Container Platform cluster has monitoring for user-defined projects enabled. You have installed Red Hat Developer Hub on OpenShift Container Platform using the Red Hat Developer Hub Operator. You have installed the OpenShift CLI ( oc ). Procedure Currently, the Red Hat Developer Hub Operator does not support creating a ServiceMonitor custom resource (CR) by default. You must complete the following steps to create a ServiceMonitor CR to scrape metrics from the endpoint. Create the ServiceMonitor CR as a YAML file: apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: <custom_resource_name> 1 namespace: <project_name> 2 labels: app.kubernetes.io/instance: <custom_resource_name> app.kubernetes.io/name: backstage spec: namespaceSelector: matchNames: - <project_name> selector: matchLabels: rhdh.redhat.com/app: backstage-<custom_resource_name> endpoints: - port: backend path: '/metrics' 1 Replace <custom_resource_name> with the name of your Red Hat Developer Hub CR. 2 Replace <project_name> with the name of the OpenShift Container Platform project where your Red Hat Developer Hub instance is running. Apply the ServiceMonitor CR by running the following command: oc apply -f <filename> Verification From the Developer perspective in the OpenShift Container Platform web console, select the Observe view. Click the Metrics tab to view metrics for Red Hat Developer Hub pods. 5.3. Additional resources OpenShift Container Platform - Managing metrics | [
"upstream: metrics: serviceMonitor: enabled: true path: /metrics",
"apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: <custom_resource_name> 1 namespace: <project_name> 2 labels: app.kubernetes.io/instance: <custom_resource_name> app.kubernetes.io/name: backstage spec: namespaceSelector: matchNames: - <project_name> selector: matchLabels: rhdh.redhat.com/app: backstage-<custom_resource_name> endpoints: - port: backend path: '/metrics'",
"apply -f <filename>"
]
| https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.2/html/administration_guide_for_red_hat_developer_hub/assembly-rhdh-observability |
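To sanity-check the monitoring setup described above from the command line, the following sketch can help. It is illustrative only: it assumes a project named rhdh, a Developer Hub backend service named backstage-developer-hub, and the default backend port 7007, none of which come from the chapter itself; substitute the names from your own deployment.

# Confirm the ServiceMonitor exists and that the backend service it targets is present
oc get servicemonitor -n rhdh
oc get svc -n rhdh -l app.kubernetes.io/name=backstage

# Port-forward the backend service locally (assumed service name and port)
oc port-forward -n rhdh svc/backstage-developer-hub 7007:7007 &
sleep 2

# Fetch the raw Prometheus metrics that the ServiceMonitor scrapes
curl -s http://localhost:7007/metrics | head -n 20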
Chapter 1. Installing and preparing the Operators | Chapter 1. Installing and preparing the Operators You install the Red Hat OpenStack Services on OpenShift (RHOSO) OpenStack Operator ( openstack-operator ) and create the RHOSO control plane on an operational Red Hat OpenShift Container Platform (RHOCP) cluster. You install the OpenStack Operator by using the RHOCP web console. You perform the control plane installation tasks and all data plane creation tasks on a workstation that has access to the RHOCP cluster. 1.1. Prerequisites An operational RHOCP cluster, version 4.16. For the RHOCP system requirements, see Red Hat OpenShift Container Platform cluster requirements in Planning your deployment . The oc command line tool is installed on your workstation. You are logged in to the RHOCP cluster as a user with cluster-admin privileges. 1.2. Installing the OpenStack Operator You use OperatorHub on the Red Hat OpenShift Container Platform (RHOCP) web console to install the OpenStack Operator ( openstack-operator ) on your RHOCP cluster. Procedure Log in to the RHOCP web console as a user with cluster-admin permissions. Select Operators OperatorHub . In the Filter by keyword field, type OpenStack . Click the OpenStack Operator tile with the Red Hat source label. Read the information about the Operator and click Install . On the Install Operator page, select "Operator recommended Namespace: openstack-operators" from the Installed Namespace list. Click Install to make the Operator available to the openstack-operators namespace. The Operators are deployed and ready when the Status of the OpenStack Operator is Succeeded . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/deploying_red_hat_openstack_services_on_openshift/assembly_installing-and-preparing-the-Operators |
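If you prefer to confirm the installation from a terminal instead of the web console, a hedged sketch with the oc CLI follows. These are standard Operator Lifecycle Manager checks rather than a documented RHOSO procedure, and the exact ClusterServiceVersion name varies by release, so the grep pattern is only illustrative.

# The namespace is created by the "Operator recommended Namespace" option
oc get namespace openstack-operators

# Review the Subscription and InstallPlan created for the Operator
oc get subscriptions.operators.coreos.com -n openstack-operators
oc get installplans -n openstack-operators

# The Operator is ready when its ClusterServiceVersion phase is Succeeded
oc get csv -n openstack-operators | grep -i openstack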
Chapter 7. PriorityLevelConfiguration [flowcontrol.apiserver.k8s.io/v1] | Chapter 7. PriorityLevelConfiguration [flowcontrol.apiserver.k8s.io/v1] Description PriorityLevelConfiguration represents the configuration of a priority level. Type object 7.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object PriorityLevelConfigurationSpec specifies the configuration of a priority level. status object PriorityLevelConfigurationStatus represents the current state of a "request-priority". 7.1.1. .spec Description PriorityLevelConfigurationSpec specifies the configuration of a priority level. Type object Required type Property Type Description exempt object ExemptPriorityLevelConfiguration describes the configurable aspects of the handling of exempt requests. In the mandatory exempt configuration object the values in the fields here can be modified by authorized users, unlike the rest of the spec . limited object LimitedPriorityLevelConfiguration specifies how to handle requests that are subject to limits. It addresses two issues: - How are requests for this priority level limited? - What should be done with requests that exceed the limit? type string type indicates whether this priority level is subject to limitation on request execution. A value of "Exempt" means that requests of this priority level are not subject to a limit (and thus are never queued) and do not detract from the capacity made available to other priority levels. A value of "Limited" means that (a) requests of this priority level are subject to limits and (b) some of the server's limited capacity is made available exclusively to this priority level. Required. 7.1.2. .spec.exempt Description ExemptPriorityLevelConfiguration describes the configurable aspects of the handling of exempt requests. In the mandatory exempt configuration object the values in the fields here can be modified by authorized users, unlike the rest of the spec . Type object Property Type Description lendablePercent integer lendablePercent prescribes the fraction of the level's NominalCL that can be borrowed by other priority levels. This value of this field must be between 0 and 100, inclusive, and it defaults to 0. The number of seats that other levels can borrow from this level, known as this level's LendableConcurrencyLimit (LendableCL), is defined as follows. LendableCL(i) = round( NominalCL(i) * lendablePercent(i)/100.0 ) nominalConcurrencyShares integer nominalConcurrencyShares (NCS) contributes to the computation of the NominalConcurrencyLimit (NominalCL) of this level. This is the number of execution seats nominally reserved for this priority level. This DOES NOT limit the dispatching from this priority level but affects the other priority levels through the borrowing mechanism. 
The server's concurrency limit (ServerCL) is divided among all the priority levels in proportion to their NCS values: NominalCL(i) = ceil( ServerCL * NCS(i) / sum_ncs ) sum_ncs = sum[priority level k] NCS(k) Bigger numbers mean a larger nominal concurrency limit, at the expense of every other priority level. This field has a default value of zero. 7.1.3. .spec.limited Description LimitedPriorityLevelConfiguration specifies how to handle requests that are subject to limits. It addresses two issues: - How are requests for this priority level limited? - What should be done with requests that exceed the limit? Type object Property Type Description borrowingLimitPercent integer borrowingLimitPercent , if present, configures a limit on how many seats this priority level can borrow from other priority levels. The limit is known as this level's BorrowingConcurrencyLimit (BorrowingCL) and is a limit on the total number of seats that this level may borrow at any one time. This field holds the ratio of that limit to the level's nominal concurrency limit. When this field is non-nil, it must hold a non-negative integer and the limit is calculated as follows. BorrowingCL(i) = round( NominalCL(i) * borrowingLimitPercent(i)/100.0 ) The value of this field can be more than 100, implying that this priority level can borrow a number of seats that is greater than its own nominal concurrency limit (NominalCL). When this field is left nil , the limit is effectively infinite. lendablePercent integer lendablePercent prescribes the fraction of the level's NominalCL that can be borrowed by other priority levels. The value of this field must be between 0 and 100, inclusive, and it defaults to 0. The number of seats that other levels can borrow from this level, known as this level's LendableConcurrencyLimit (LendableCL), is defined as follows. LendableCL(i) = round( NominalCL(i) * lendablePercent(i)/100.0 ) limitResponse object LimitResponse defines how to handle requests that can not be executed right now. nominalConcurrencyShares integer nominalConcurrencyShares (NCS) contributes to the computation of the NominalConcurrencyLimit (NominalCL) of this level. This is the number of execution seats available at this priority level. This is used both for requests dispatched from this priority level as well as requests dispatched from other priority levels borrowing seats from this level. The server's concurrency limit (ServerCL) is divided among the Limited priority levels in proportion to their NCS values: NominalCL(i) = ceil( ServerCL * NCS(i) / sum_ncs ) sum_ncs = sum[priority level k] NCS(k) Bigger numbers mean a larger nominal concurrency limit, at the expense of every other priority level. If not specified, this field defaults to a value of 30. Setting this field to zero supports the construction of a "jail" for this priority level that is used to hold some request(s) 7.1.4. .spec.limited.limitResponse Description LimitResponse defines how to handle requests that can not be executed right now. Type object Required type Property Type Description queuing object QueuingConfiguration holds the configuration parameters for queuing type string type is "Queue" or "Reject". "Queue" means that requests that can not be executed upon arrival are held in a queue until they can be executed or a queuing limit is reached. "Reject" means that requests that can not be executed upon arrival are rejected. Required. 7.1.5. 
.spec.limited.limitResponse.queuing Description QueuingConfiguration holds the configuration parameters for queuing Type object Property Type Description handSize integer handSize is a small positive number that configures the shuffle sharding of requests into queues. When enqueuing a request at this priority level the request's flow identifier (a string pair) is hashed and the hash value is used to shuffle the list of queues and deal a hand of the size specified here. The request is put into one of the shortest queues in that hand. handSize must be no larger than queues , and should be significantly smaller (so that a few heavy flows do not saturate most of the queues). See the user-facing documentation for more extensive guidance on setting this field. This field has a default value of 8. queueLengthLimit integer queueLengthLimit is the maximum number of requests allowed to be waiting in a given queue of this priority level at a time; excess requests are rejected. This value must be positive. If not specified, it will be defaulted to 50. queues integer queues is the number of queues for this priority level. The queues exist independently at each apiserver. The value must be positive. Setting it to 1 effectively precludes shufflesharding and thus makes the distinguisher method of associated flow schemas irrelevant. This field has a default value of 64. 7.1.6. .status Description PriorityLevelConfigurationStatus represents the current state of a "request-priority". Type object Property Type Description conditions array conditions is the current state of "request-priority". conditions[] object PriorityLevelConfigurationCondition defines the condition of priority level. 7.1.7. .status.conditions Description conditions is the current state of "request-priority". Type array 7.1.8. .status.conditions[] Description PriorityLevelConfigurationCondition defines the condition of priority level. Type object Property Type Description lastTransitionTime Time lastTransitionTime is the last time the condition transitioned from one status to another. message string message is a human-readable message indicating details about last transition. reason string reason is a unique, one-word, CamelCase reason for the condition's last transition. status string status is the status of the condition. Can be True, False, Unknown. Required. type string type is the type of the condition. Required. 7.2. API endpoints The following API endpoints are available: /apis/flowcontrol.apiserver.k8s.io/v1/prioritylevelconfigurations DELETE : delete collection of PriorityLevelConfiguration GET : list or watch objects of kind PriorityLevelConfiguration POST : create a PriorityLevelConfiguration /apis/flowcontrol.apiserver.k8s.io/v1/watch/prioritylevelconfigurations GET : watch individual changes to a list of PriorityLevelConfiguration. deprecated: use the 'watch' parameter with a list operation instead. /apis/flowcontrol.apiserver.k8s.io/v1/prioritylevelconfigurations/{name} DELETE : delete a PriorityLevelConfiguration GET : read the specified PriorityLevelConfiguration PATCH : partially update the specified PriorityLevelConfiguration PUT : replace the specified PriorityLevelConfiguration /apis/flowcontrol.apiserver.k8s.io/v1/watch/prioritylevelconfigurations/{name} GET : watch changes to an object of kind PriorityLevelConfiguration. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 
/apis/flowcontrol.apiserver.k8s.io/v1/prioritylevelconfigurations/{name}/status GET : read status of the specified PriorityLevelConfiguration PATCH : partially update status of the specified PriorityLevelConfiguration PUT : replace status of the specified PriorityLevelConfiguration 7.2.1. /apis/flowcontrol.apiserver.k8s.io/v1/prioritylevelconfigurations HTTP method DELETE Description delete collection of PriorityLevelConfiguration Table 7.1. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 7.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind PriorityLevelConfiguration Table 7.3. HTTP responses HTTP code Reponse body 200 - OK PriorityLevelConfigurationList schema 401 - Unauthorized Empty HTTP method POST Description create a PriorityLevelConfiguration Table 7.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.5. Body parameters Parameter Type Description body PriorityLevelConfiguration schema Table 7.6. HTTP responses HTTP code Reponse body 200 - OK PriorityLevelConfiguration schema 201 - Created PriorityLevelConfiguration schema 202 - Accepted PriorityLevelConfiguration schema 401 - Unauthorized Empty 7.2.2. /apis/flowcontrol.apiserver.k8s.io/v1/watch/prioritylevelconfigurations HTTP method GET Description watch individual changes to a list of PriorityLevelConfiguration. deprecated: use the 'watch' parameter with a list operation instead. Table 7.7. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 7.2.3. /apis/flowcontrol.apiserver.k8s.io/v1/prioritylevelconfigurations/{name} Table 7.8. Global path parameters Parameter Type Description name string name of the PriorityLevelConfiguration HTTP method DELETE Description delete a PriorityLevelConfiguration Table 7.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 7.10. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified PriorityLevelConfiguration Table 7.11. HTTP responses HTTP code Reponse body 200 - OK PriorityLevelConfiguration schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified PriorityLevelConfiguration Table 7.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.13. HTTP responses HTTP code Reponse body 200 - OK PriorityLevelConfiguration schema 201 - Created PriorityLevelConfiguration schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified PriorityLevelConfiguration Table 7.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.15. Body parameters Parameter Type Description body PriorityLevelConfiguration schema Table 7.16. 
HTTP responses HTTP code Reponse body 200 - OK PriorityLevelConfiguration schema 201 - Created PriorityLevelConfiguration schema 401 - Unauthorized Empty 7.2.4. /apis/flowcontrol.apiserver.k8s.io/v1/watch/prioritylevelconfigurations/{name} Table 7.17. Global path parameters Parameter Type Description name string name of the PriorityLevelConfiguration HTTP method GET Description watch changes to an object of kind PriorityLevelConfiguration. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 7.18. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 7.2.5. /apis/flowcontrol.apiserver.k8s.io/v1/prioritylevelconfigurations/{name}/status Table 7.19. Global path parameters Parameter Type Description name string name of the PriorityLevelConfiguration HTTP method GET Description read status of the specified PriorityLevelConfiguration Table 7.20. HTTP responses HTTP code Reponse body 200 - OK PriorityLevelConfiguration schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified PriorityLevelConfiguration Table 7.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.22. HTTP responses HTTP code Reponse body 200 - OK PriorityLevelConfiguration schema 201 - Created PriorityLevelConfiguration schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified PriorityLevelConfiguration Table 7.23. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.24. Body parameters Parameter Type Description body PriorityLevelConfiguration schema Table 7.25. HTTP responses HTTP code Reponse body 200 - OK PriorityLevelConfiguration schema 201 - Created PriorityLevelConfiguration schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/schedule_and_quota_apis/prioritylevelconfiguration-flowcontrol-apiserver-k8s-io-v1 |
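A concrete manifest can make the field reference above easier to follow. The sketch below creates a hypothetical Limited priority level; the name example-batch and every numeric value are invented for illustration and are not recommended defaults.

# Apply an illustrative PriorityLevelConfiguration (hypothetical name and values)
cat <<'EOF' | oc apply -f -
apiVersion: flowcontrol.apiserver.k8s.io/v1
kind: PriorityLevelConfiguration
metadata:
  name: example-batch
spec:
  type: Limited
  limited:
    nominalConcurrencyShares: 20    # share of the server concurrency limit for this level
    lendablePercent: 50             # half of the nominal limit may be lent to other levels
    borrowingLimitPercent: 100      # may borrow up to its own nominal limit from others
    limitResponse:
      type: Queue
      queuing:
        queues: 32
        handSize: 4                 # must be no larger than queues
        queueLengthLimit: 50
EOF

# Read the object back, including its status conditions
oc get prioritylevelconfiguration example-batch -o yaml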
Chapter 40. Removing Red Hat Enterprise Linux from IBM System z | Chapter 40. Removing Red Hat Enterprise Linux from IBM System z If you want to delete the existing operating system data, and any of the Linux disks contain sensitive data, first ensure that you destroy the data according to your security policy. To proceed, consider one of these options: Overwrite the disks with a new installation. Start a new installation and use the partitioning dialog (refer to Section 23.13, "Disk Partitioning Setup") to format the partitions where Linux was installed. After the Write changes to disk dialog described in Section 23.16, "Write Changes to Disk", exit the installer. Make the DASD or SCSI disk where Linux was installed visible from another system, then delete the data. However, this might require special privileges. Ask your system administrator for advice. You can use Linux commands such as dasdfmt (DASD only), parted, mke2fs, or dd. For more details about the commands, refer to the respective man pages. 40.1. Running a Different Operating System on your z/VM Guest or LPAR If you want to boot from a DASD or SCSI disk different from the one where the currently installed system resides, under a z/VM guest virtual machine or an LPAR, shut down the installed Red Hat Enterprise Linux system and use the desired disk, where another Linux instance is installed, to boot from. This leaves the contents of the installed system unchanged. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/ch-uninstall
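For the option of deleting the data from another system, the commands named above might be combined roughly as follows. This is a destructive, illustrative sketch only: /dev/dasdb and /dev/sdX are placeholder device nodes, and you should verify the correct device (and your authority to format it) before running anything like this.

# Low-level format a DASD, destroying everything on it (DASD only; placeholder device)
dasdfmt -b 4096 -d cdl -p /dev/dasdb

# For a SCSI/FCP disk, overwrite the contents with zeros (placeholder device)
dd if=/dev/zero of=/dev/sdX bs=1M

# Alternatively, create a fresh file system over the old data on a partition
mke2fs -t ext4 /dev/sdX1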
8.4.2.2. cpio | 8.4.2.2. cpio The cpio utility is another traditional UNIX program. It is an excellent general-purpose program for moving data from one place to another and, as such, can serve well as a backup program. The behavior of cpio is a bit different from tar . Unlike tar , cpio reads the names of the files it is to process via standard input. A common method of generating a list of files for cpio is to use programs such as find whose output is then piped to cpio : This command creates a cpio archive file (containing everything in /home/ ) called home-backup.cpio and residing in the /mnt/backup/ directory. Note Because find has a rich set of file selection tests, sophisticated backups can easily be created. For example, the following command performs a backup of only those files that have not been accessed within the past year: There are many other options to cpio (and find ); to learn more about them, read the cpio(1) and find(1) man pages. | [
"find /home/ | cpio -o > /mnt/backup/home-backup.cpio",
"find /home/ -atime +365 | cpio -o > /mnt/backup/home-backup.cpio"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s3-disaster-backups-tech-cpio |
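Restoring is the mirror image of the backup commands above: cpio runs in copy-in mode and reads the archive from standard input. The lines below are an illustrative sketch; because the archive was built from absolute paths under /home/, check how your cpio version handles such names (GNU cpio offers --no-absolute-filenames) before restoring on a live system.

# List the archive contents without extracting anything
cpio -itv < /mnt/backup/home-backup.cpio

# Restore into a scratch directory, creating directories and preserving mtimes
mkdir -p /tmp/restore
cd /tmp/restore && cpio -idmv < /mnt/backup/home-backup.cpio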
Chapter 5. Known issues | Chapter 5. Known issues There are no known issues for this release. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_core_services/2.4.57/html/red_hat_jboss_core_services_apache_http_server_2.4.57_service_pack_1_release_notes/known_issues |
B.65. pango | B.65. pango B.65.1. RHSA-2011:0180 - Moderate: pango security update Updated pango and evolution28-pango packages that fix one security issue are now available for Red Hat Enterprise Linux 4, 5, and 6. The Red Hat Security Response Team has rated this update as having moderate security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. Pango is a library used for the layout and rendering of internationalized text. CVE-2011-0020 An input sanitization flaw, leading to a heap-based buffer overflow, was found in the way Pango displayed font files when using the FreeType font engine back end. If a user loaded a malformed font file with an application that uses Pango, it could cause the application to crash or, possibly, execute arbitrary code with the privileges of the user running the application. Users of pango and evolution28-pango are advised to upgrade to these updated packages, which contain a backported patch to resolve this issue. After installing the updated packages, you must restart your system or restart your X session for the update to take effect. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/pango |
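On a subscribed system the practical follow-up is simply to refresh the affected packages and restart the X session. The package manager differs by release (up2date on Red Hat Enterprise Linux 4, yum on 5 and 6), so treat the yum lines below as an illustrative sketch rather than the advisory's own instructions.

# Update the affected package (Red Hat Enterprise Linux 5 and 6)
yum update pango

# Update the compatibility package as well, where it is installed
yum update evolution28-pango

# Confirm the installed version afterwards, then restart the X session
rpm -q pango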
Chapter 49. Storage | Chapter 49. Storage Multi-queue I/O scheduling for SCSI Red Hat Enterprise Linux 7 includes a new multiple-queue I/O scheduling mechanism for block devices known as blk-mq. The scsi-mq package allows the Small Computer System Interface (SCSI) subsystem to make use of this new queuing mechanism. This functionality is provided as a Technology Preview and is not enabled by default. To enable it, add scsi_mod.use_blk_mq=Y to the kernel command line. Although blk-mq is intended to offer improved performance, particularly for low-latency devices, it is not guaranteed to always provide better performance. In particular, in some cases, enabling scsi-mq can result in significantly worse performance, especially on systems with many CPUs. (BZ#1109348) Targetd plug-in from the libStorageMgmt API Since Red Hat Enterprise Linux 7.1, storage array management with libStorageMgmt, a storage array independent API, has been fully supported. The provided API is stable, consistent, and allows developers to programmatically manage different storage arrays and utilize the hardware-accelerated features provided. System administrators can also use libStorageMgmt to manually configure storage and to automate storage management tasks with the included command-line interface. The Targetd plug-in is not fully supported and remains a Technology Preview. (BZ#1119909) Support for Data Integrity Field/Data Integrity Extension (DIF/DIX) DIF/DIX is a new addition to the SCSI Standard. It is fully supported in Red Hat Enterprise Linux 7 for the HBAs and storage arrays specified in the Features chapter, but it remains in Technology Preview for all other HBAs and storage arrays. DIF/DIX increases the size of the commonly used 512 byte disk block from 512 to 520 bytes, adding the Data Integrity Field (DIF). The DIF stores a checksum value for the data block that is calculated by the Host Bus Adapter (HBA) when a write occurs. The storage device then confirms the checksum on receipt, and stores both the data and the checksum. Conversely, when a read occurs, the checksum can be verified by the storage device, and by the receiving HBA. (BZ#1072107) | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.5_release_notes/technology_previews_storage |
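One way to add scsi_mod.use_blk_mq=Y to the kernel command line on Red Hat Enterprise Linux 7 is with grubby, sketched below. This is an illustration rather than part of the Technology Preview documentation; the setting takes effect only after a reboot and can be removed again with --remove-args if throughput regresses.

# Append the parameter to every installed kernel's boot entry
grubby --update-kernel=ALL --args="scsi_mod.use_blk_mq=Y"

# Check the default entry, then reboot for the change to take effect
grubby --info=DEFAULT
reboot

# After the reboot, confirm the parameter reached the running kernel
cat /proc/cmdline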
15.2.6. Querying | 15.2.6. Querying Use the rpm -q command to query the database of installed packages. The rpm -q foo command displays the package name, version, and release number of the installed package foo : Note To query a package, replace foo with the actual package name. Instead of specifying the package name, use the following options with -q to specify the package(s) you want to query. These are called Package Selection Options . -a queries all currently installed packages. -f <file> queries the package which owns <file> . When specifying a file, you must specify the full path of the file (for example, /bin/ls ). -p <packagefile> queries the package <packagefile> . There are a number of ways to specify what information to display about queried packages. The following options are used to select the type of information for which you are searching. These are called Information Query Options . -i displays package information including name, description, release, size, build date, install date, vendor, and other miscellaneous information. -l displays the list of files that the package contains. -s displays the state of all the files in the package. -d displays a list of files marked as documentation (man pages, info pages, READMEs, etc.). -c displays a list of files marked as configuration files. These are the files you change after installation to adapt the package to your system (for example, sendmail.cf , passwd , inittab , etc.). For the options that display lists of files, add -v to the command to display the lists in a familiar ls -l format. | [
"foo-2.0-1"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Using_RPM-Querying |
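The selection and information options above combine freely, which is where rpm -q earns its keep. The queries below are a short illustrative sketch; the package names are only examples, and any installed package (or package file) can be substituted.

# Which installed package owns a file? (the full path is required)
rpm -qf /bin/ls

# Show summary information and the configuration files of installed packages
rpm -qi coreutils
rpm -qc openssh-server

# List a package's documentation files in the familiar ls -l format
rpm -qdv bash

# Query a package file that has not been installed yet
rpm -qlp /tmp/foo-2.0-1.i386.rpm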
4.3. Volume Group Administration | 4.3. Volume Group Administration This section describes the commands that perform the various aspects of volume group administration. 4.3.1. Creating Volume Groups To create a volume group from one or more physical volumes, use the vgcreate command. The vgcreate command creates a new volume group by name and adds at least one physical volume to it. The following command creates a volume group named vg1 that contains physical volumes /dev/sdd1 and /dev/sde1 . When physical volumes are used to create a volume group, its disk space is divided into 4MB extents, by default. This extent is the minimum amount by which the logical volume may be increased or decreased in size. Large numbers of extents will have no impact on I/O performance of the logical volume. You can specify the extent size with the -s option to the vgcreate command if the default extent size is not suitable. You can put limits on the number of physical or logical volumes the volume group can have by using the -p and -l arguments of the vgcreate command. By default, a volume group allocates physical extents according to common-sense rules such as not placing parallel stripes on the same physical volume. This is the normal allocation policy. You can use the --alloc argument of the vgcreate command to specify an allocation policy of contiguous , anywhere , or cling . In general, allocation policies other than normal are required only in special cases where you need to specify unusual or nonstandard extent allocation. For further information on how LVM allocates physical extents, see Section 4.3.2, "LVM Allocation" . LVM volume groups and underlying logical volumes are included in the device special file directory tree in the /dev directory with the following layout: For example, if you create two volume groups myvg1 and myvg2 , each with three logical volumes named lv01 , lv02 , and lv03 , this creates six device special files: The device special files are not present if the corresponding logical volume is not currently active. The maximum device size with LVM is 8 Exabytes on 64-bit CPUs. 4.3.2. LVM Allocation When an LVM operation needs to allocate physical extents for one or more logical volumes, the allocation proceeds as follows: The complete set of unallocated physical extents in the volume group is generated for consideration. If you supply any ranges of physical extents at the end of the command line, only unallocated physical extents within those ranges on the specified physical volumes are considered. Each allocation policy is tried in turn, starting with the strictest policy ( contiguous ) and ending with the allocation policy specified using the --alloc option or set as the default for the particular logical volume or volume group. For each policy, working from the lowest-numbered logical extent of the empty logical volume space that needs to be filled, as much space as possible is allocated, according to the restrictions imposed by the allocation policy. If more space is needed, LVM moves on to the policy. The allocation policy restrictions are as follows: An allocation policy of contiguous requires that the physical location of any logical extent that is not the first logical extent of a logical volume is adjacent to the physical location of the logical extent immediately preceding it. When a logical volume is striped or mirrored, the contiguous allocation restriction is applied independently to each stripe or mirror image (leg) that needs space. 
An allocation policy of cling requires that the physical volume used for any logical extent be added to an existing logical volume that is already in use by at least one logical extent earlier in that logical volume. If the configuration parameter allocation/cling_tag_list is defined, then two physical volumes are considered to match if any of the listed tags is present on both physical volumes. This allows groups of physical volumes with similar properties (such as their physical location) to be tagged and treated as equivalent for allocation purposes. For more information on using the cling policy in conjunction with LVM tags to specify which additional physical volumes to use when extending an LVM volume, see Section 4.4.19, "Extending a Logical Volume with the cling Allocation Policy" . When a Logical Volume is striped or mirrored, the cling allocation restriction is applied independently to each stripe or mirror image (leg) that needs space. An allocation policy of normal will not choose a physical extent that shares the same physical volume as a logical extent already allocated to a parallel logical volume (that is, a different stripe or mirror image/leg) at the same offset within that parallel logical volume. When allocating a mirror log at the same time as logical volumes to hold the mirror data, an allocation policy of normal will first try to select different physical volumes for the log and the data. If that is not possible and the allocation/mirror_logs_require_separate_pvs configuration parameter is set to 0, it will then allow the log to share physical volume(s) with part of the data. Similarly, when allocating thin pool metadata, an allocation policy of normal will follow the same considerations as for allocation of a mirror log, based on the value of the allocation/thin_pool_metadata_require_separate_pvs configuration parameter. If there are sufficient free extents to satisfy an allocation request but a normal allocation policy would not use them, the anywhere allocation policy will, even if that reduces performance by placing two stripes on the same physical volume. The allocation policies can be changed using the vgchange command. Note If you rely upon any layout behavior beyond that documented in this section according to the defined allocation policies, you should note that this might change in future versions of the code. For example, if you supply on the command line two empty physical volumes that have an identical number of free physical extents available for allocation, LVM currently considers using each of them in the order they are listed; there is no guarantee that future releases will maintain that property. If it is important to obtain a specific layout for a particular Logical Volume, then you should build it up through a sequence of lvcreate and lvconvert steps such that the allocation policies applied to each step leave LVM no discretion over the layout. To view the way the allocation process currently works in any specific case, you can read the debug logging output, for example by adding the -vvvv option to a command. 4.3.3. Creating Volume Groups in a Cluster You create CLVM volume groups in a cluster environment with the vgcreate command, just as you create them on a single node. Note In Red Hat Enterprise Linux 7, clusters are managed through Pacemaker. Clustered LVM logical volumes are supported only in conjunction with Pacemaker clusters, and must be configured as cluster resources. 
For general information on configuring LVM volumes in a cluster, see Section 1.4, "LVM Logical Volumes in a Red Hat High Availability Cluster" . Volume groups that are shared by members of the cluster should be created with the clustered attribute set with the vgcreate -cy or vgchange -cy command. The clustered attribute is set automatically if if CLVMD is running. This clustered attribute signals that this volume group should be managed and protected by CLVMD. When creating any volume group that is not shared by the cluster and should only be visible to a single host, this clustered attribute should be disabled with the vgcreate -cn or vgchange -cn command. By default, volume groups created with with the clustered attribute on shared storage are visible to all computers that have access to the shared storage. It is possible, however, to create volume groups that are local, visible only to one node in the cluster, by using the -cn option of the vgcreate command. The following command, when executed in a cluster environment, creates a volume group that is local to the node from which the command was executed. The command creates a local volume named vg1 that contains physical volumes /dev/sdd1 and /dev/sde1 . You can change whether an existing volume group is local or clustered with the -c option of the vgchange command, which is described in Section 4.3.9, "Changing the Parameters of a Volume Group" . You can check whether an existing volume group is a clustered volume group with the vgs command, which displays the c attribute if the volume is clustered. The following command displays the attributes of the volume groups VolGroup00 and testvg1 . In this example, VolGroup00 is not clustered, while testvg1 is clustered, as indicated by the c attribute under the Attr heading. For more information on the vgs command, see Section 4.3.5, "Displaying Volume Groups" Section 4.8, "Customized Reporting for LVM" , and the vgs man page. 4.3.4. Adding Physical Volumes to a Volume Group To add additional physical volumes to an existing volume group, use the vgextend command. The vgextend command increases a volume group's capacity by adding one or more free physical volumes. The following command adds the physical volume /dev/sdf1 to the volume group vg1 . 4.3.5. Displaying Volume Groups There are two commands you can use to display properties of LVM volume groups: vgs and vgdisplay . The vgscan command, which scans all the disks for volume groups and rebuilds the LVM cache file, also displays the volume groups. For information on the vgscan command, see Section 4.3.6, "Scanning Disks for Volume Groups to Build the Cache File" . The vgs command provides volume group information in a configurable form, displaying one line per volume group. The vgs command provides a great deal of format control, and is useful for scripting. For information on using the vgs command to customize your output, see Section 4.8, "Customized Reporting for LVM" . The vgdisplay command displays volume group properties (such as size, extents, number of physical volumes, and so on) in a fixed form. The following example shows the output of the vgdisplay command for the volume group new_vg . If you do not specify a volume group, all existing volume groups are displayed. 4.3.6. Scanning Disks for Volume Groups to Build the Cache File The vgscan command scans all supported disk devices in the system looking for LVM physical volumes and volume groups. 
This builds the LVM cache file in the /etc/lvm/cache/.cache file, which maintains a listing of current LVM devices. LVM runs the vgscan command automatically at system startup and at other times during LVM operation, such as when you execute the vgcreate command or when LVM detects an inconsistency. Note You may need to run the vgscan command manually when you change your hardware configuration and add or delete a device from a node, causing new devices to be visible to the system that were not present at system bootup. This may be necessary, for example, when you add new disks to the system on a SAN or hotplug a new disk that has been labeled as a physical volume. You can define a filter in the /etc/lvm/lvm.conf file to restrict the scan to avoid specific devices. For information on using filters to control which devices are scanned, see Section 4.5, "Controlling LVM Device Scans with Filters" . The following example shows the output of the vgscan command. 4.3.7. Removing Physical Volumes from a Volume Group To remove unused physical volumes from a volume group, use the vgreduce command. The vgreduce command shrinks a volume group's capacity by removing one or more empty physical volumes. This frees those physical volumes to be used in different volume groups or to be removed from the system. Before removing a physical volume from a volume group, you can make sure that the physical volume is not used by any logical volumes by using the pvdisplay command. If the physical volume is still being used you will have to migrate the data to another physical volume using the pvmove command. Then use the vgreduce command to remove the physical volume. The following command removes the physical volume /dev/hda1 from the volume group my_volume_group . If a logical volume contains a physical volume that fails, you cannot use that logical volume. To remove missing physical volumes from a volume group, you can use the --removemissing parameter of the vgreduce command, if there are no logical volumes that are allocated on the missing physical volumes. If the physical volume that fails contains a mirror image of a logical volume of a mirror segment type, you can remove that image from the mirror with the vgreduce --removemissing --mirrorsonly --force command. This removes only the logical volumes that are mirror images from the physical volume. For information on recovering from LVM mirror failure, see Section 6.2, "Recovering from LVM Mirror Failure" . For information on removing lost physical volumes from a volume group, see Section 6.5, "Removing Lost Physical Volumes from a Volume Group" 4.3.8. Activating and Deactivating Volume Groups When you create a volume group it is, by default, activated. This means that the logical volumes in that group are accessible and subject to change. There are various circumstances for which you need to make a volume group inactive and thus unknown to the kernel. To deactivate or activate a volume group, use the -a ( --available ) argument of the vgchange command. The following example deactivates the volume group my_volume_group . If clustered locking is enabled, add 'e' to activate or deactivate a volume group exclusively on one node or 'l' to activate or/deactivate a volume group only on the local node. Logical volumes with single-host snapshots are always activated exclusively because they can only be used on one node at once. 
You can deactivate individual logical volumes with the lvchange command, as described in Section 4.4.11, "Changing the Parameters of a Logical Volume Group" , For information on activating logical volumes on individual nodes in a cluster, see Section 4.7, "Activating Logical Volumes on Individual Nodes in a Cluster" . 4.3.9. Changing the Parameters of a Volume Group The vgchange command is used to deactivate and activate volume groups, as described in Section 4.3.8, "Activating and Deactivating Volume Groups" . You can also use this command to change several volume group parameters for an existing volume group. The following command changes the maximum number of logical volumes of volume group vg00 to 128. For a description of the volume group parameters you can change with the vgchange command, see the vgchange (8) man page. 4.3.10. Removing Volume Groups To remove a volume group that contains no logical volumes, use the vgremove command. 4.3.11. Splitting a Volume Group To split the physical volumes of a volume group and create a new volume group, use the vgsplit command. Logical volumes cannot be split between volume groups. Each existing logical volume must be entirely on the physical volumes forming either the old or the new volume group. If necessary, however, you can use the pvmove command to force the split. The following example splits the new volume group smallvg from the original volume group bigvg . 4.3.12. Combining Volume Groups To combine two volume groups into a single volume group, use the vgmerge command. You can merge an inactive "source" volume with an active or an inactive "destination" volume if the physical extent sizes of the volume are equal and the physical and logical volume summaries of both volume groups fit into the destination volume groups limits. The following command merges the inactive volume group my_vg into the active or inactive volume group databases giving verbose runtime information. 4.3.13. Backing Up Volume Group Metadata Metadata backups and archives are automatically created on every configuration change to a volume group or logical volume unless disabled in the lvm.conf file. By default, the metadata backup is stored in the /etc/lvm/backup file and the metadata archives are stored in the /etc/lvm/archive file. You can manually back up the metadata to the /etc/lvm/backup file with the vgcfgbackup command. The vgcfgrestore command restores the metadata of a volume group from the archive to all the physical volumes in the volume groups. For an example of using the vgcfgrestore command to recover physical volume metadata, see Section 6.3, "Recovering Physical Volume Metadata" . 4.3.14. Renaming a Volume Group Use the vgrename command to rename an existing volume group. Either of the following commands renames the existing volume group vg02 to my_volume_group 4.3.15. Moving a Volume Group to Another System You can move an entire LVM volume group to another system. It is recommended that you use the vgexport and vgimport commands when you do this. Note You can use the --force argument of the vgimport command. This allows you to import volume groups that are missing physical volumes and subsequently run the vgreduce --removemissing command. The vgexport command makes an inactive volume group inaccessible to the system, which allows you to detach its physical volumes. The vgimport command makes a volume group accessible to a machine again after the vgexport command has made it inactive. 
To move a volume group from one system to another, perform the following steps: Make sure that no users are accessing files on the active volumes in the volume group, then unmount the logical volumes. Use the -a n argument of the vgchange command to mark the volume group as inactive, which prevents any further activity on the volume group. Use the vgexport command to export the volume group. This prevents it from being accessed by the system from which you are removing it. After you export the volume group, the physical volume will show up as being in an exported volume group when you execute the pvscan command, as in the following example. When the system is shut down, you can unplug the disks that constitute the volume group and connect them to the new system. When the disks are plugged into the new system, use the vgimport command to import the volume group, making it accessible to the new system. Activate the volume group with the -a y argument of the vgchange command. Mount the file system to make it available for use. 4.3.16. Recreating a Volume Group Directory To recreate a volume group directory and logical volume special files, use the vgmknodes command. This command checks the LVM2 special files in the /dev directory that are needed for active logical volumes. It creates any special files that are missing and removes unused ones. You can incorporate the vgmknodes command into the vgscan command by specifying the mknodes argument to the vgscan command. | [
"vgcreate vg1 /dev/sdd1 /dev/sde1",
"/dev/ vg / lv /",
"/dev/myvg1/lv01 /dev/myvg1/lv02 /dev/myvg1/lv03 /dev/myvg2/lv01 /dev/myvg2/lv02 /dev/myvg2/lv03",
"vgcreate -c n vg1 /dev/sdd1 /dev/sde1",
"vgs VG #PV #LV #SN Attr VSize VFree VolGroup00 1 2 0 wz--n- 19.88G 0 testvg1 1 1 0 wz--nc 46.00G 8.00M",
"vgextend vg1 /dev/sdf1",
"vgdisplay new_vg --- Volume group --- VG Name new_vg System ID Format lvm2 Metadata Areas 3 Metadata Sequence No 11 VG Access read/write VG Status resizable MAX LV 0 Cur LV 1 Open LV 0 Max PV 0 Cur PV 3 Act PV 3 VG Size 51.42 GB PE Size 4.00 MB Total PE 13164 Alloc PE / Size 13 / 52.00 MB Free PE / Size 13151 / 51.37 GB VG UUID jxQJ0a-ZKk0-OpMO-0118-nlwO-wwqd-fD5D32",
"vgscan Reading all physical volumes. This may take a while Found volume group \"new_vg\" using metadata type lvm2 Found volume group \"officevg\" using metadata type lvm2",
"pvdisplay /dev/hda1 -- Physical volume --- PV Name /dev/hda1 VG Name myvg PV Size 1.95 GB / NOT usable 4 MB [LVM: 122 KB] PV# 1 PV Status available Allocatable yes (but full) Cur LV 1 PE Size (KByte) 4096 Total PE 499 Free PE 0 Allocated PE 499 PV UUID Sd44tK-9IRw-SrMC-MOkn-76iP-iftz-OVSen7",
"vgreduce my_volume_group /dev/hda1",
"vgchange -a n my_volume_group",
"vgchange -l 128 /dev/vg00",
"vgremove officevg Volume group \"officevg\" successfully removed",
"vgsplit bigvg smallvg /dev/ram15 Volume group \"smallvg\" successfully split from \"bigvg\"",
"vgmerge -v databases my_vg",
"vgrename /dev/vg02 /dev/my_volume_group",
"vgrename vg02 my_volume_group",
"pvscan PV /dev/sda1 is in exported VG myvg [17.15 GB / 7.15 GB free] PV /dev/sdc1 is in exported VG myvg [17.15 GB / 15.15 GB free] PV /dev/sdd1 is in exported VG myvg [17.15 GB / 15.15 GB free]"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/logical_volume_manager_administration/VG_admin |
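Taken together, the commands in this section form a short lifecycle, sketched end to end below. The device names, sizes, and volume names are placeholders for illustration; run something like this only against disks that hold no data you care about.

# Label two partitions as physical volumes and build a volume group from them
pvcreate /dev/sdd1 /dev/sde1
vgcreate vg1 /dev/sdd1 /dev/sde1

# Grow the group with a third physical volume and review it
pvcreate /dev/sdf1
vgextend vg1 /dev/sdf1
vgs vg1
vgdisplay vg1

# Carve out a logical volume inside the group
lvcreate -L 10G -n lv01 vg1

# Deactivate the group, rename it, and reactivate it under the new name
vgchange -a n vg1
vgrename vg1 my_volume_group
vgchange -a y my_volume_group

# Back up the metadata, then remove the logical volume and the group
vgcfgbackup my_volume_group
lvremove -f my_volume_group/lv01
vgremove my_volume_group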
Chapter 12. Resource Management | Chapter 12. Resource Management Control Groups Red Hat Enterprise Linux 7 features control groups, cgroups, which is a concept for organizing processes in a tree of named groups for the purpose of resource management. They provide a way to hierarchically group and label processes and a way to apply resource limits to these groups. In Red Hat Enterprise Linux 7, control groups are exclusively managed through systemd . Control groups are configured in systemd unit files and are managed with systemd's command line interface (CLI) tools. Control groups and other resource management features are discussed in detail in the Resource Management and Linux Containers Guide . | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.0_release_notes/chap-red_hat_enterprise_linux-7.0_release_notes-resource_management |
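In practice this means resource limits are attached to systemd units rather than to raw cgroup directories. The lines below are a brief hedged sketch: httpd.service is an arbitrary example unit, the values are made up, and the property names shown (CPUShares, MemoryLimit, BlockIOWeight) are the ones used in the Red Hat Enterprise Linux 7 era of systemd.

# Persistently cap a unit's CPU weight and memory use
systemctl set-property httpd.service CPUShares=512 MemoryLimit=1G

# Apply a non-persistent (runtime-only) block I/O weight
systemctl set-property --runtime httpd.service BlockIOWeight=200

# Inspect the settings and browse the control group hierarchy
systemctl show httpd.service -p CPUShares -p MemoryLimit
systemd-cgls
systemd-cgtop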
A.5. tuned-adm | A.5. tuned-adm tuned-adm is a command line tool that enables you to switch between Tuned profiles to improve performance in a number of specific use cases. It also provides the tuned-adm recommend sub-command that assesses your system and outputs a recommended tuning profile. As of Red Hat Enterprise Linux 7, Tuned includes the ability to run any shell command as part of enabling or disabling a tuning profile. This allows you to extend Tuned profiles with functionality that has not been integrated into Tuned yet. Red Hat Enterprise Linux 7 also provides the include parameter in profile definition files, allowing you to base your own Tuned profiles on existing profiles. The following tuning profiles are provided with Tuned and are supported in Red Hat Enterprise Linux 7. throughput-performance A server profile focused on improving throughput. This is the default profile, and is recommended for most systems. This profile favors performance over power savings by setting intel_pstate and min_perf_pct=100 . It enables transparent huge pages and uses cpupower to set the performance cpufreq governor. It also sets kernel.sched_min_granularity_ns to 10 ms, kernel.sched_wakeup_granularity_ns to 15 ms, and vm.dirty_ratio to 40 %. latency-performance A server profile focused on lowering latency. This profile is recommended for latency-sensitive workloads that benefit from c-state tuning and the increased TLB efficiency of transparent huge pages. This profile favors performance over power savings by setting intel_pstate and max_perf_pct=100 . It enables transparent huge pages, uses cpupower to set the performance cpufreq governor, and requests a cpu_dma_latency value of 1 . network-latency A server profile focused on lowering network latency. This profile favors performance over power savings by setting intel_pstate and min_perf_pct=100 . It disables transparent huge pages, and automatic NUMA balancing. It also uses cpupower to set the performance cpufreq governor, and requests a cpu_dma_latency value of 1 . It also sets busy_read and busy_poll times to 50 ms, and tcp_fastopen to 3 . network-throughput A server profile focused on improving network throughput. This profile favors performance over power savings by setting intel_pstate and max_perf_pct=100 and increasing kernel network buffer sizes. It enables transparent huge pages, and uses cpupower to set the performance cpufreq governor. It also sets kernel.sched_min_granularity_ns to 10 ms, kernel.sched_wakeup_granularity_ns to 15 ms, and vm.dirty_ratio to 40 %. virtual-guest A profile focused on optimizing performance in Red Hat Enterprise Linux 7 virtual machines as well as VMware guests. This profile favors performance over power savings by setting intel_pstate and max_perf_pct=100 . It also decreases the swappiness of virtual memory. It enables transparent huge pages, and uses cpupower to set the performance cpufreq governor. It also sets kernel.sched_min_granularity_ns to 10 ms, kernel.sched_wakeup_granularity_ns to 15 ms, and vm.dirty_ratio to 40 %. virtual-host A profile focused on optimizing performance in Red Hat Enterprise Linux 7 virtualization hosts. This profile favors performance over power savings by setting intel_pstate and max_perf_pct=100 . It also decreases the swappiness of virtual memory. This profile enables transparent huge pages and writes dirty pages back to disk more frequently. It uses cpupower to set the performance cpufreq governor. 
It also sets kernel.sched_min_granularity_ns to 10 ms, kernel.sched_wakeup_granularity_ns to 15 ms, kernel.sched_migration_cost to 5 ms, and vm.dirty_ratio to 40 %. cpu-partitioning The cpu-partitioning profile partitions the system CPUs into isolated and housekeeping CPUs. To reduce jitter and interruptions on an isolated CPU, the profile clears the isolated CPU from user-space processes, movable kernel threads, interrupt handlers, and kernel timers. A housekeeping CPU can run all services, shell processes, and kernel threads. You can configure the cpu-partitioning profile in the /etc/tuned/cpu-partitioning-variables.conf file. The configuration options are: isolated_cores= cpu-list Lists the CPUs to isolate. Separate the entries with commas, or specify a range with a dash, such as 3-5 . This option is mandatory. Any CPU missing from this list is automatically considered a housekeeping CPU. no_balance_cores= cpu-list Lists CPUs that the kernel does not consider during system-wide process load balancing. This option is optional. This is usually the same list as isolated_cores . For more information on cpu-partitioning , see the tuned-profiles-cpu-partitioning (7) man page. For detailed information about the power saving profiles provided with tuned-adm, see the Red Hat Enterprise Linux 7 Power Management Guide . For detailed information about using tuned-adm , see the man page: | [
"man tuned-adm"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/performance_tuning_guide/sect-red_hat_enterprise_linux-performance_tuning_guide-tool_reference-tuned_adm |
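The profiles and the cpu-partitioning variables file described in this entry are typically driven as in the sketch below; the profile choice and CPU numbers are illustrative assumptions, not recommendations.

# List the available profiles and show the active one
tuned-adm list
tuned-adm active

# Let Tuned suggest a profile for this system, then switch to one explicitly
tuned-adm recommend
tuned-adm profile throughput-performance

# Illustrative /etc/tuned/cpu-partitioning-variables.conf (CPU numbers are examples)
cat > /etc/tuned/cpu-partitioning-variables.conf <<'EOF'
isolated_cores=2-5
no_balance_cores=2-5
EOF
tuned-adm profile cpu-partitioning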
Chapter 2. ClusterRoleBinding [rbac.authorization.k8s.io/v1] | Chapter 2. ClusterRoleBinding [rbac.authorization.k8s.io/v1] Description ClusterRoleBinding references a ClusterRole, but not contain it. It can reference a ClusterRole in the global namespace, and adds who information via Subject. Type object Required roleRef 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. roleRef object RoleRef contains information that points to the role being used subjects array Subjects holds references to the objects the role applies to. subjects[] object Subject contains a reference to the object or user identities a role binding applies to. This can either hold a direct API object reference, or a value for non-objects such as user and group names. 2.1.1. .roleRef Description RoleRef contains information that points to the role being used Type object Required apiGroup kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 2.1.2. .subjects Description Subjects holds references to the objects the role applies to. Type array 2.1.3. .subjects[] Description Subject contains a reference to the object or user identities a role binding applies to. This can either hold a direct API object reference, or a value for non-objects such as user and group names. Type object Required kind name Property Type Description apiGroup string APIGroup holds the API group of the referenced subject. Defaults to "" for ServiceAccount subjects. Defaults to "rbac.authorization.k8s.io" for User and Group subjects. kind string Kind of object being referenced. Values defined by this API group are "User", "Group", and "ServiceAccount". If the Authorizer does not recognized the kind value, the Authorizer should report an error. name string Name of the object being referenced. namespace string Namespace of the referenced object. If the object kind is non-namespace, such as "User" or "Group", and this value is not empty the Authorizer should report an error. 2.2. API endpoints The following API endpoints are available: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings DELETE : delete collection of ClusterRoleBinding GET : list or watch objects of kind ClusterRoleBinding POST : create a ClusterRoleBinding /apis/rbac.authorization.k8s.io/v1/watch/clusterrolebindings GET : watch individual changes to a list of ClusterRoleBinding. deprecated: use the 'watch' parameter with a list operation instead. 
/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/{name} DELETE : delete a ClusterRoleBinding GET : read the specified ClusterRoleBinding PATCH : partially update the specified ClusterRoleBinding PUT : replace the specified ClusterRoleBinding /apis/rbac.authorization.k8s.io/v1/watch/clusterrolebindings/{name} GET : watch changes to an object of kind ClusterRoleBinding. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 2.2.1. /apis/rbac.authorization.k8s.io/v1/clusterrolebindings HTTP method DELETE Description delete collection of ClusterRoleBinding Table 2.1. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 2.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind ClusterRoleBinding Table 2.3. HTTP responses HTTP code Reponse body 200 - OK ClusterRoleBindingList schema 401 - Unauthorized Empty HTTP method POST Description create a ClusterRoleBinding Table 2.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.5. Body parameters Parameter Type Description body ClusterRoleBinding schema Table 2.6. HTTP responses HTTP code Reponse body 200 - OK ClusterRoleBinding schema 201 - Created ClusterRoleBinding schema 202 - Accepted ClusterRoleBinding schema 401 - Unauthorized Empty 2.2.2. /apis/rbac.authorization.k8s.io/v1/watch/clusterrolebindings HTTP method GET Description watch individual changes to a list of ClusterRoleBinding. deprecated: use the 'watch' parameter with a list operation instead. Table 2.7. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 2.2.3. /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/{name} Table 2.8. Global path parameters Parameter Type Description name string name of the ClusterRoleBinding HTTP method DELETE Description delete a ClusterRoleBinding Table 2.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 2.10. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ClusterRoleBinding Table 2.11. HTTP responses HTTP code Reponse body 200 - OK ClusterRoleBinding schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ClusterRoleBinding Table 2.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.13. HTTP responses HTTP code Reponse body 200 - OK ClusterRoleBinding schema 201 - Created ClusterRoleBinding schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ClusterRoleBinding Table 2.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.15. Body parameters Parameter Type Description body ClusterRoleBinding schema Table 2.16. 
HTTP responses HTTP code Reponse body 200 - OK ClusterRoleBinding schema 201 - Created ClusterRoleBinding schema 401 - Unauthorized Empty 2.2.4. /apis/rbac.authorization.k8s.io/v1/watch/clusterrolebindings/{name} Table 2.17. Global path parameters Parameter Type Description name string name of the ClusterRoleBinding HTTP method GET Description watch changes to an object of kind ClusterRoleBinding. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 2.18. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/rbac_apis/clusterrolebinding-rbac-authorization-k8s-io-v1 |
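As a concrete instance of the schema described in this chapter, the sketch below creates, reads, and deletes a ClusterRoleBinding; the ClusterRole, service account, and namespace names are hypothetical examples.

# Create a ClusterRoleBinding that grants a hypothetical ClusterRole to a service account
oc apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: example-reader-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: example-reader
subjects:
- kind: ServiceAccount
  name: example-sa
  namespace: example-namespace
EOF

# Read the object back, then delete it (these map to the GET and DELETE endpoints listed above)
oc get clusterrolebinding example-reader-binding -o yaml
oc delete clusterrolebinding example-reader-binding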
Security and Hardening Guide | Security and Hardening Guide Red Hat OpenStack Platform 16.0 Good Practices, Compliance, and Security Hardening OpenStack Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/security_and_hardening_guide/index |
Chapter 19. Encrypting block devices using LUKS | Chapter 19. Encrypting block devices using LUKS By using the disk encryption, you can protect the data on a block device by encrypting it. To access the device's decrypted contents, enter a passphrase or key as authentication. This is important for mobile computers and removable media because it helps to protect the device's contents even if it has been physically removed from the system. The LUKS format is a default implementation of block device encryption in Red Hat Enterprise Linux. 19.1. LUKS disk encryption Linux Unified Key Setup-on-disk-format (LUKS) provides a set of tools that simplifies managing the encrypted devices. With LUKS, you can encrypt block devices and enable multiple user keys to decrypt a master key. For bulk encryption of the partition, use this master key. Red Hat Enterprise Linux uses LUKS to perform block device encryption. By default, the option to encrypt the block device is unchecked during the installation. If you select the option to encrypt your disk, the system prompts you for a passphrase every time you boot the computer. This passphrase unlocks the bulk encryption key that decrypts your partition. If you want to modify the default partition table, you can select the partitions that you want to encrypt. This is set in the partition table settings. Ciphers The default cipher used for LUKS is aes-xts-plain64 . The default key size for LUKS is 512 bits. The default key size for LUKS with Anaconda XTS mode is 512 bits. The following are the available ciphers: Advanced Encryption Standard (AES) Twofish Serpent Operations performed by LUKS LUKS encrypts entire block devices and is therefore well-suited for protecting contents of mobile devices such as removable storage media or laptop disk drives. The underlying contents of the encrypted block device are arbitrary, which makes it useful for encrypting swap devices. This can also be useful with certain databases that use specially formatted block devices for data storage. LUKS uses the existing device mapper kernel subsystem. LUKS provides passphrase strengthening, which protects against dictionary attacks. LUKS devices contain multiple key slots, which means you can add backup keys or passphrases. Important LUKS is not recommended for the following scenarios: Disk-encryption solutions such as LUKS protect the data only when your system is off. After the system is on and LUKS has decrypted the disk, the files on that disk are available to anyone who have access to them. Scenarios that require multiple users to have distinct access keys to the same device. The LUKS1 format provides eight key slots and LUKS2 provides up to 32 key slots. Applications that require file-level encryption. Additional resources LUKS Project Home Page LUKS On-Disk Format Specification FIPS 197: Advanced Encryption Standard (AES) 19.2. LUKS versions in RHEL In Red Hat Enterprise Linux, the default format for LUKS encryption is LUKS2. The old LUKS1 format remains fully supported and it is provided as a format compatible with earlier Red Hat Enterprise Linux releases. LUKS2 re-encryption is considered more robust and safe to use as compared to LUKS1 re-encryption. The LUKS2 format enables future updates of various parts without a need to modify binary structures. Internally it uses JSON text format for metadata, provides redundancy of metadata, detects metadata corruption, and automatically repairs from a metadata copy. Important Do not use LUKS2 in systems that support only LUKS1. 
Since Red Hat Enterprise Linux 9.2, you can use the cryptsetup reencrypt command for both the LUKS versions to encrypt the disk. Online re-encryption The LUKS2 format supports re-encrypting encrypted devices while the devices are in use. For example, you do not have to unmount the file system on the device to perform the following tasks: Changing the volume key Changing the encryption algorithm When encrypting a non-encrypted device, you must still unmount the file system. You can remount the file system after a short initialization of the encryption. The LUKS1 format does not support online re-encryption. Conversion In certain situations, you can convert LUKS1 to LUKS2. The conversion is not possible specifically in the following scenarios: A LUKS1 device is marked as being used by a Policy-Based Decryption (PBD) Clevis solution. The cryptsetup tool does not convert the device when some luksmeta metadata are detected. A device is active. The device must be in an inactive state before any conversion is possible. 19.3. Options for data protection during LUKS2 re-encryption LUKS2 provides several options that prioritize performance or data protection during the re-encryption process. It provides the following modes for the resilience option, and you can select any of these modes by using the cryptsetup reencrypt --resilience resilience-mode /dev/sdx command: checksum The default mode. It balances data protection and performance. This mode stores individual checksums of the sectors in the re-encryption area, which the recovery process can detect for the sectors that were re-encrypted by LUKS2. The mode requires that the block device sector write is atomic. journal The safest mode but also the slowest. Since this mode journals the re-encryption area in the binary area, the LUKS2 writes the data twice. none The none mode prioritizes performance and provides no data protection. It protects the data only against safe process termination, such as the SIGTERM signal or the user pressing Ctrl + C key. Any unexpected system failure or application failure might result in data corruption. If a LUKS2 re-encryption process terminates unexpectedly by force, LUKS2 can perform the recovery in one of the following ways: Automatically By performing any one of the following actions triggers the automatic recovery action during the LUKS2 device open action: Executing the cryptsetup open command. Attaching the device with the systemd-cryptsetup command. Manually By using the cryptsetup repair /dev/sdx command on the LUKS2 device. Additional resources cryptsetup-reencrypt(8) and cryptsetup-repair(8) man pages on your system 19.4. Encrypting existing data on a block device using LUKS2 You can encrypt the existing data on a not yet encrypted device by using the LUKS2 format. A new LUKS header is stored in the head of the device. Prerequisites The block device has a file system. You have backed up your data. Warning You might lose your data during the encryption process due to a hardware, kernel, or human failure. Ensure that you have a reliable backup before you start encrypting the data. Procedure Unmount all file systems on the device that you plan to encrypt, for example: Make free space for storing a LUKS header. Use one of the following options that suits your scenario: In the case of encrypting a logical volume, you can extend the logical volume without resizing the file system. For example: Extend the partition by using partition management tools, such as parted . Shrink the file system on the device. 
You can use the resize2fs utility for the ext2, ext3, or ext4 file systems. Note that you cannot shrink the XFS file system. Initialize the encryption: Mount the device: Add an entry for a persistent mapping to the /etc/crypttab file: Find the luksUUID : Open /etc/crypttab in a text editor of your choice and add a device in this file: Replace a52e2cc9-a5be-47b8-a95d-6bdf4f2d9325 with your device's luksUUID . Refresh initramfs with dracut : Add an entry for a persistent mounting to the /etc/fstab file: Find the file system's UUID of the active LUKS block device: Open /etc/fstab in a text editor of your choice and add a device in this file, for example: Replace 37bc2492-d8fa-4969-9d9b-bb64d3685aa9 with your file system's UUID. Resume the online encryption: Verification Verify if the existing data was encrypted: View the status of the encrypted blank block device: Additional resources cryptsetup(8) , cryptsetup-reencrypt(8) , lvextend(8) , resize2fs(8) , and parted(8) man pages on your system 19.5. Encrypting existing data on a block device using LUKS2 with a detached header You can encrypt existing data on a block device without creating free space for storing a LUKS header. The header is stored in a detached location, which also serves as an additional layer of security. The procedure uses the LUKS2 encryption format. Prerequisites The block device has a file system. You have backed up your data. Warning You might lose your data during the encryption process due to a hardware, kernel, or human failure. Ensure that you have a reliable backup before you start encrypting the data. Procedure Unmount all file systems on the device, for example: Initialize the encryption: Replace /home/header with a path to the file with a detached LUKS header. The detached LUKS header has to be accessible to unlock the encrypted device later. Mount the device: Resume the online encryption: Verification Verify if the existing data on a block device using LUKS2 with a detached header is encrypted: View the status of the encrypted blank block device: Additional resources cryptsetup(8) and cryptsetup-reencrypt(8) man pages on your system 19.6. Encrypting a blank block device using LUKS2 You can encrypt a blank block device, which you can use for an encrypted storage by using the LUKS2 format. Prerequisites A blank block device. You can use commands such as lsblk to find if there is no real data on that device, for example, a file system. Procedure Setup a partition as an encrypted LUKS partition: Open an encrypted LUKS partition: This unlocks the partition and maps it to a new device by using the device mapper. To not overwrite the encrypted data, this command alerts the kernel that the device is an encrypted device and addressed through LUKS by using the /dev/mapper/ device_mapped_name path. Create a file system to write encrypted data to the partition, which must be accessed through the device mapped name: Mount the device: Verification Verify if the blank block device is encrypted: View the status of the encrypted blank block device: Additional resources cryptsetup(8) , cryptsetup-open (8) , and cryptsetup-lusFormat(8) man pages on your system 19.7. Configuring the LUKS passphrase in the web console If you want to add encryption to an existing logical volume on your system, you can only do so through formatting the volume. Prerequisites You have installed the RHEL 9 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. 
For instructions, see Installing and enabling the web console . The cockpit-storaged package is installed on your system. Available existing logical volume without encryption. Procedure Log in to the RHEL 9 web console. For details, see Logging in to the web console . In the panel, click Storage . In the Storage table, click the menu button ... for the storage device you want to encrypt and click Format . In the Encryption field , select the encryption specification, LUKS1 or LUKS2 . Set and confirm your new passphrase. Optional: Modify further encryption options. Finalize formatting settings. Click Format . 19.8. Changing the LUKS passphrase in the web console Change a LUKS passphrase on an encrypted disk or partition in the web console. Prerequisites You have installed the RHEL 9 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The cockpit-storaged package is installed on your system. Procedure Log in to the RHEL 9 web console. For details, see Logging in to the web console . In the panel, click Storage . In the Storage table, select the disk with encrypted data. On the disk page, scroll to the Keys section and click the edit button. In the Change passphrase dialog window: Enter your current passphrase. Enter your new passphrase. Confirm your new passphrase. Click Save . 19.9. Creating a LUKS2 encrypted volume by using the storage RHEL system role You can use the storage role to create and configure a volume encrypted with LUKS by running an Ansible playbook. Prerequisites You have prepared the control node and the managed nodes You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Store your sensitive variables in an encrypted file: Create the vault: After the ansible-vault create command opens an editor, enter the sensitive data in the <key> : <value> format: luks_password: <password> Save the changes, and close the editor. Ansible encrypts the data in the vault. Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Manage local storage hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: Create and configure a volume encrypted with LUKS ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_volumes: - name: barefs type: disk disks: - sdb fs_type: xfs fs_label: <label> mount_point: /mnt/data encryption: true encryption_password: "{{ luks_password }}" For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.storage/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Find the luksUUID value of the LUKS encrypted volume: View the encryption status of the volume: Verify the created LUKS encrypted volume: Additional resources /usr/share/ansible/roles/rhel-system-roles.storage/README.md file /usr/share/doc/rhel-system-roles/storage/ directory Encrypting block devices by using LUKS Ansible vault | [
"umount /dev/mapper/vg00-lv00",
"lvextend -L+ 32M /dev/mapper/vg00-lv00",
"cryptsetup reencrypt --encrypt --init-only --reduce-device-size 32M /dev/mapper/ vg00-lv00 lv00_encrypted /dev/mapper/ lv00_encrypted is now active and ready for online encryption.",
"mount /dev/mapper/ lv00_encrypted /mnt/lv00_encrypted",
"cryptsetup luksUUID /dev/mapper/ vg00-lv00 a52e2cc9-a5be-47b8-a95d-6bdf4f2d9325",
"vi /etc/crypttab lv00_encrypted UUID= a52e2cc9-a5be-47b8-a95d-6bdf4f2d9325 none",
"dracut -f --regenerate-all",
"blkid -p /dev/mapper/ lv00_encrypted /dev/mapper/ lv00-encrypted : UUID=\" 37bc2492-d8fa-4969-9d9b-bb64d3685aa9 \" BLOCK_SIZE=\"4096\" TYPE=\"xfs\" USAGE=\"filesystem\"",
"vi /etc/fstab UUID= 37bc2492-d8fa-4969-9d9b-bb64d3685aa9 /home auto rw,user,auto 0",
"cryptsetup reencrypt --resume-only /dev/mapper/ vg00-lv00 Enter passphrase for /dev/mapper/ vg00-lv00 : Auto-detected active dm device ' lv00_encrypted ' for data device /dev/mapper/ vg00-lv00 . Finished, time 00:31.130, 10272 MiB written, speed 330.0 MiB/s",
"cryptsetup luksDump /dev/mapper/ vg00-lv00 LUKS header information Version: 2 Epoch: 4 Metadata area: 16384 [bytes] Keyslots area: 16744448 [bytes] UUID: a52e2cc9-a5be-47b8-a95d-6bdf4f2d9325 Label: (no label) Subsystem: (no subsystem) Flags: (no flags) Data segments: 0: crypt offset: 33554432 [bytes] length: (whole device) cipher: aes-xts-plain64 [...]",
"cryptsetup status lv00_encrypted /dev/mapper/ lv00_encrypted is active and is in use. type: LUKS2 cipher: aes-xts-plain64 keysize: 512 bits key location: keyring device: /dev/mapper/ vg00-lv00",
"umount /dev/ nvme0n1p1",
"cryptsetup reencrypt --encrypt --init-only --header /home/header /dev/ nvme0n1p1 nvme_encrypted WARNING! ======== Header file does not exist, do you want to create it? Are you sure? (Type 'yes' in capital letters): YES Enter passphrase for /home/header : Verify passphrase: /dev/mapper/ nvme_encrypted is now active and ready for online encryption.",
"mount /dev/mapper/ nvme_encrypted /mnt/nvme_encrypted",
"cryptsetup reencrypt --resume-only --header /home/header /dev/ nvme0n1p1 Enter passphrase for /dev/ nvme0n1p1 : Auto-detected active dm device 'nvme_encrypted' for data device /dev/ nvme0n1p1 . Finished, time 00m51s, 10 GiB written, speed 198.2 MiB/s",
"cryptsetup luksDump /home/header LUKS header information Version: 2 Epoch: 88 Metadata area: 16384 [bytes] Keyslots area: 16744448 [bytes] UUID: c4f5d274-f4c0-41e3-ac36-22a917ab0386 Label: (no label) Subsystem: (no subsystem) Flags: (no flags) Data segments: 0: crypt offset: 0 [bytes] length: (whole device) cipher: aes-xts-plain64 sector: 512 [bytes] [...]",
"cryptsetup status nvme_encrypted /dev/mapper/ nvme_encrypted is active and is in use. type: LUKS2 cipher: aes-xts-plain64 keysize: 512 bits key location: keyring device: /dev/ nvme0n1p1",
"cryptsetup luksFormat /dev/ nvme0n1p1 WARNING! ======== This will overwrite data on /dev/nvme0n1p1 irrevocably. Are you sure? (Type 'yes' in capital letters): YES Enter passphrase for /dev/ nvme0n1p1 : Verify passphrase:",
"cryptsetup open /dev/ nvme0n1p1 nvme0n1p1_encrypted Enter passphrase for /dev/ nvme0n1p1 :",
"mkfs -t ext4 /dev/mapper/ nvme0n1p1_encrypted",
"mount /dev/mapper/ nvme0n1p1_encrypted mount-point",
"cryptsetup luksDump /dev/ nvme0n1p1 LUKS header information Version: 2 Epoch: 3 Metadata area: 16384 [bytes] Keyslots area: 16744448 [bytes] UUID: 34ce4870-ffdf-467c-9a9e-345a53ed8a25 Label: (no label) Subsystem: (no subsystem) Flags: (no flags) Data segments: 0: crypt offset: 16777216 [bytes] length: (whole device) cipher: aes-xts-plain64 sector: 512 [bytes] [...]",
"cryptsetup status nvme0n1p1_encrypted /dev/mapper/ nvme0n1p1_encrypted is active and is in use. type: LUKS2 cipher: aes-xts-plain64 keysize: 512 bits key location: keyring device: /dev/ nvme0n1p1 sector size: 512 offset: 32768 sectors size: 20938752 sectors mode: read/write",
"ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>",
"luks_password: <password>",
"--- - name: Manage local storage hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: Create and configure a volume encrypted with LUKS ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_volumes: - name: barefs type: disk disks: - sdb fs_type: xfs fs_label: <label> mount_point: /mnt/data encryption: true encryption_password: \"{{ luks_password }}\"",
"ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml",
"ansible-playbook --ask-vault-pass ~/playbook.yml",
"ansible managed-node-01.example.com -m command -a 'cryptsetup luksUUID /dev/sdb' 4e4e7970-1822-470e-b55a-e91efe5d0f5c",
"ansible managed-node-01.example.com -m command -a 'cryptsetup status luks-4e4e7970-1822-470e-b55a-e91efe5d0f5c' /dev/mapper/luks-4e4e7970-1822-470e-b55a-e91efe5d0f5c is active and is in use. type: LUKS2 cipher: aes-xts-plain64 keysize: 512 bits key location: keyring device: /dev/sdb",
"ansible managed-node-01.example.com -m command -a 'cryptsetup luksDump /dev/sdb' LUKS header information Version: 2 Epoch: 3 Metadata area: 16384 [bytes] Keyslots area: 16744448 [bytes] UUID: 4e4e7970-1822-470e-b55a-e91efe5d0f5c Label: (no label) Subsystem: (no subsystem) Flags: (no flags) Data segments: 0: crypt offset: 16777216 [bytes] length: (whole device) cipher: aes-xts-plain64 sector: 512 [bytes]"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_storage_devices/encrypting-block-devices-using-luks_managing-storage-devices |
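The chapter above notes that LUKS headers hold multiple key slots for backup passphrases; a short sketch of managing those slots follows, with /dev/sdb standing in for any LUKS-formatted device.

# Add a backup passphrase to a free key slot (prompts for an existing passphrase first)
cryptsetup luksAddKey /dev/sdb

# Show the header, including which key slots are populated
cryptsetup luksDump /dev/sdb

# Remove a passphrase, or wipe a specific slot by number
cryptsetup luksRemoveKey /dev/sdb
cryptsetup luksKillSlot /dev/sdb 1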
2.2.4.2.2. Review the NFS Client | 2.2.4.2.2. Review the NFS Client Use the nosuid option to disallow the use of a setuid program. The nosuid option disables the set-user-identifier or set-group-identifier bits. This prevents remote users from gaining higher privileges by running a setuid program. Use this option on the client and the server side. The noexec option disables all executable files on the client. Use this to prevent users from inadvertently executing files placed in the file system being shared. The nosuid and noexec options are standard options for most, if not all, file systems. Use the nodev option to prevent " device-files " from being processed as a hardware device by the client. The resvport option is a client-side mount option and secure is the corresponding server-side export option (see explanation above). It restricts communication to a "reserved port". The reserved or "well known" ports are reserved for privileged users and processes such as the root user. Setting this option causes the client to use a reserved source port to communicate with the server. All versions of NFS now support mounting with Kerberos authentication. The mount option to enable this is: sec=krb5 . NFSv4 supports mounting with Kerberos using krb5i for integrity and krb5p for privacy protection. These are used when mounting with sec=krb5 , but need to be configured on the NFS server. Refer to the man page on exports ( man 5 exports ) for more information. The NFS man page ( man 5 nfs ) has a " SECURITY CONSIDERATIONS " section which explains the security enhancements in NFSv4 and contains all the NFS specific mount options. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-security_guide-securing_nfs-mount_options-review_the_nfs_client |
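A hedged example of how the client-side options discussed above are usually combined follows; the server name, export path, and mount point are placeholders, and sec=krb5 assumes Kerberos is already configured on both the client and the server.

# Mount an NFS export with the hardening options described above
mount -o nosuid,noexec,nodev,resvport,sec=krb5 \
    nfsserver.example.com:/export/data /mnt/data

# Equivalent /etc/fstab entry:
# nfsserver.example.com:/export/data  /mnt/data  nfs  nosuid,noexec,nodev,resvport,sec=krb5  0 0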
Chapter 2. Host Security | Chapter 2. Host Security When deploying virtualization technologies on a Red Hat Enterprise Linux system, the host is responsible for managing and controlling access to the physical devices, storage, and network, but also to all virtualized guests. If the host system is compromised, the guests and their data become vulnerable as well. Therefore, securing the Red Hat Enterprise Linux host system is the first step towards ensuring a secure virtualization platform. 2.1. Securing the Host Physical Machine The following tasks and tips can assist you with securing and ensuring reliability, as well increasing the performance, of your Red Hat Enterprise Linux host. Ensure that SELinux is configured properly for your installation and is operating in enforcing mode: In addition to being a good security practice, the advanced virtualization security functionality provided by sVirt relies on SELinux. See Chapter 4, sVirt for more information on SELinux and sVirt. Remove or disable any unnecessary services such as AutoFS , NFS , FTP , HTTP , NIS , telnetd , or sendmail . Only add the minimum number of user accounts needed for platform management on the server and remove unnecessary user accounts. Limit direct access to the system to only those users who have a need to manage the system. Consider disallowing shared root access and instead use tools such as sudo to grant privileged access to administrators based on their administrative roles. Avoid running any unessential applications on your host. Running applications on the host may impact virtual machine performance and can affect server stability. Any application that may crash the server will also cause all virtual machines on the server to fail. In addition, vulnerable applications can become vectors for an attack on the host. Use a central location for virtual machine installations and images. Virtual machine images should be stored under /var/lib/libvirt/images/ . If you are using a different directory for your virtual machine images make sure you add the directory to your SELinux policy and relabel it before starting the installation. Use of shareable, network storage in a central location is highly recommended. Run only the services necessary to support the use and management of your guest systems. If you need to provide additional services, such as file or print services, consider running those services on a Red Hat Enterprise Linux guest. Ensure that auditing is enabled on the host system and that libvirt is configured to generate audit records. When auditing is enabled, libvirt generates audit records for changes to guest configuration and start/stop events, which can help you track the guest's state. In addition, the libvirt audit events can also be viewed using the specialized auvirt utility. For more information, use the man auvirt command. Ensure that any remote management of the system takes place only over secured network channels. Utilities such as SSH and network protocols such as TLS or SSL provide both authentication and data encryption to help ensure that only approved administrators can manage the system remotely. Ensure that the firewall is configured properly for your installation and is activated at boot. Only network ports needed for the use and management of the system should be allowed. Do not grant guests with direct access to entire disks or block devices (for example, /dev/sdb ); instead, use partitions (for example, /dev/sdb1 ) or LVM volumes for guest storage. 
Attaching a USB device, a Physical Function, or, when SR-IOV is not available, a physical device to a virtual machine could provide access sufficient to overwrite that device's firmware. This presents a potential security issue: an attacker could overwrite the device's firmware with malicious code and cause problems when the device is moved between virtual machines or at host boot time. It is advised to use SR-IOV Virtual Function device assignment where applicable. Note For more security tips and instructions for your host system, see the Red Hat Enterprise Linux Security Guide . | [
"setenforce 1"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_security_guide/chap-virtualization_security_guide-host_security |
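Several of the checks recommended above can be verified from the shell; the sketch below is a minimal illustration, and telnet.socket is only an example of a service you might disable.

# Confirm SELinux is enforcing (see the setenforce 1 command above)
getenforce
sestatus

# Review enabled services and disable the ones the host does not need
systemctl list-unit-files --state=enabled
systemctl stop telnet.socket
systemctl disable telnet.socket

# Verify the firewall is running and review the allowed services and ports
firewall-cmd --state
firewall-cmd --list-all

# Review libvirt-related audit events with the auvirt utility
auvirt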
Chapter 3. Creating applications | Chapter 3. Creating applications 3.1. Using templates The following sections provide an overview of templates, as well as how to use and create them. 3.1.1. Understanding templates A template describes a set of objects that can be parameterized and processed to produce a list of objects for creation by OpenShift Container Platform. A template can be processed to create anything you have permission to create within a project, for example services, build configurations, and deployment configurations. A template can also define a set of labels to apply to every object defined in the template. You can create a list of objects from a template using the CLI or, if a template has been uploaded to your project or the global template library, using the web console. 3.1.2. Uploading a template If you have a JSON or YAML file that defines a template, you can upload the template to projects using the CLI. This saves the template to the project for repeated use by any user with appropriate access to that project. Instructions about writing your own templates are provided later in this topic. Procedure Upload a template using one of the following methods: Upload a template to your current project's template library, pass the JSON or YAML file with the following command: USD oc create -f <filename> Upload a template to a different project using the -n option with the name of the project: USD oc create -f <filename> -n <project> The template is now available for selection using the web console or the CLI. 3.1.3. Creating an application by using the web console You can use the web console to create an application from a template. Procedure Select Developer from the context selector at the top of the web console navigation menu. While in the desired project, click +Add Click All services in the Developer Catalog tile. Click Builder Images under Type to see the available builder images. Note Only image stream tags that have the builder tag listed in their annotations appear in this list, as demonstrated here: kind: "ImageStream" apiVersion: "image.openshift.io/v1" metadata: name: "ruby" creationTimestamp: null spec: # ... tags: - name: "2.6" annotations: description: "Build and run Ruby 2.6 applications" iconClass: "icon-ruby" tags: "builder,ruby" 1 supports: "ruby:2.6,ruby" version: "2.6" # ... 1 Including builder here ensures this image stream tag appears in the web console as a builder. Modify the settings in the new application screen to configure the objects to support your application. 3.1.4. Creating objects from templates by using the CLI You can use the CLI to process templates and use the configuration that is generated to create objects. 3.1.4.1. Adding labels Labels are used to manage and organize generated objects, such as pods. The labels specified in the template are applied to every object that is generated from the template. Procedure Add labels in the template from the command line: USD oc process -f <filename> -l name=otherLabel 3.1.4.2. Listing parameters The list of parameters that you can override are listed in the parameters section of the template. 
Procedure You can list parameters with the CLI by using the following command and specifying the file to be used: USD oc process --parameters -f <filename> Alternatively, if the template is already uploaded: USD oc process --parameters -n <project> <template_name> For example, the following shows the output when listing the parameters for one of the quick start templates in the default openshift project: USD oc process --parameters -n openshift rails-postgresql-example Example output NAME DESCRIPTION GENERATOR VALUE SOURCE_REPOSITORY_URL The URL of the repository with your application source code https://github.com/sclorg/rails-ex.git SOURCE_REPOSITORY_REF Set this to a branch name, tag or other ref of your repository if you are not using the default branch CONTEXT_DIR Set this to the relative path to your project if it is not in the root of your repository APPLICATION_DOMAIN The exposed hostname that will route to the Rails service rails-postgresql-example.openshiftapps.com GITHUB_WEBHOOK_SECRET A secret string used to configure the GitHub webhook expression [a-zA-Z0-9]{40} SECRET_KEY_BASE Your secret key for verifying the integrity of signed cookies expression [a-z0-9]{127} APPLICATION_USER The application user that is used within the sample application to authorize access on pages openshift APPLICATION_PASSWORD The application password that is used within the sample application to authorize access on pages secret DATABASE_SERVICE_NAME Database service name postgresql POSTGRESQL_USER database username expression user[A-Z0-9]{3} POSTGRESQL_PASSWORD database password expression [a-zA-Z0-9]{8} POSTGRESQL_DATABASE database name root POSTGRESQL_MAX_CONNECTIONS database max connections 10 POSTGRESQL_SHARED_BUFFERS database shared buffers 12MB The output identifies several parameters that are generated with a regular expression-like generator when the template is processed. 3.1.4.3. Generating a list of objects Using the CLI, you can process a file defining a template to return the list of objects to standard output. Procedure Process a file defining a template to return the list of objects to standard output: USD oc process -f <filename> Alternatively, if the template has already been uploaded to the current project: USD oc process <template_name> Create objects from a template by processing the template and piping the output to oc create : USD oc process -f <filename> | oc create -f - Alternatively, if the template has already been uploaded to the current project: USD oc process <template> | oc create -f - You can override any parameter values defined in the file by adding the -p option for each <name>=<value> pair you want to override. A parameter reference appears in any text field inside the template items. 
For example, in the following the POSTGRESQL_USER and POSTGRESQL_DATABASE parameters of a template are overridden to output a configuration with customized environment variables: Creating a List of objects from a template USD oc process -f my-rails-postgresql \ -p POSTGRESQL_USER=bob \ -p POSTGRESQL_DATABASE=mydatabase The JSON file can either be redirected to a file or applied directly without uploading the template by piping the processed output to the oc create command: USD oc process -f my-rails-postgresql \ -p POSTGRESQL_USER=bob \ -p POSTGRESQL_DATABASE=mydatabase \ | oc create -f - If you have large number of parameters, you can store them in a file and then pass this file to oc process : USD cat postgres.env POSTGRESQL_USER=bob POSTGRESQL_DATABASE=mydatabase USD oc process -f my-rails-postgresql --param-file=postgres.env You can also read the environment from standard input by using "-" as the argument to --param-file : USD sed s/bob/alice/ postgres.env | oc process -f my-rails-postgresql --param-file=- 3.1.5. Modifying uploaded templates You can edit a template that has already been uploaded to your project. Procedure Modify a template that has already been uploaded: USD oc edit template <template> 3.1.6. Using instant app and quick start templates OpenShift Container Platform provides a number of default instant app and quick start templates to make it easy to quickly get started creating a new application for different languages. Templates are provided for Rails (Ruby), Django (Python), Node.js, CakePHP (PHP), and Dancer (Perl). Your cluster administrator must create these templates in the default, global openshift project so you have access to them. By default, the templates build using a public source repository on GitHub that contains the necessary application code. Procedure You can list the available default instant app and quick start templates with: USD oc get templates -n openshift To modify the source and build your own version of the application: Fork the repository referenced by the template's default SOURCE_REPOSITORY_URL parameter. Override the value of the SOURCE_REPOSITORY_URL parameter when creating from the template, specifying your fork instead of the default value. By doing this, the build configuration created by the template now points to your fork of the application code, and you can modify the code and rebuild the application at will. Note Some of the instant app and quick start templates define a database deployment configuration. The configuration they define uses ephemeral storage for the database content. These templates should be used for demonstration purposes only as all database data is lost if the database pod restarts for any reason. 3.1.6.1. Quick start templates A quick start template is a basic example of an application running on OpenShift Container Platform. Quick starts come in a variety of languages and frameworks, and are defined in a template, which is constructed from a set of services, build configurations, and deployment configurations. This template references the necessary images and source repositories to build and deploy the application. To explore a quick start, create an application from a template. Your administrator must have already installed these templates in your OpenShift Container Platform cluster, in which case you can simply select it from the web console. Quick starts refer to a source repository that contains the application source code. 
To customize the quick start, fork the repository and, when creating an application from the template, substitute the default source repository name with your forked repository. This results in builds that are performed using your source code instead of the provided example source. You can then update the code in your source repository and launch a new build to see the changes reflected in the deployed application. 3.1.6.1.1. Web framework quick start templates These quick start templates provide a basic application of the indicated framework and language: CakePHP: a PHP web framework that includes a MySQL database Dancer: a Perl web framework that includes a MySQL database Django: a Python web framework that includes a PostgreSQL database NodeJS: a NodeJS web application that includes a MongoDB database Rails: a Ruby web framework that includes a PostgreSQL database 3.1.7. Writing templates You can define new templates to make it easy to recreate all the objects of your application. The template defines the objects it creates along with some metadata to guide the creation of those objects. The following is an example of a simple template object definition (YAML): apiVersion: template.openshift.io/v1 kind: Template metadata: name: redis-template annotations: description: "Description" iconClass: "icon-redis" tags: "database,nosql" objects: - apiVersion: v1 kind: Pod metadata: name: redis-master spec: containers: - env: - name: REDIS_PASSWORD value: USD{REDIS_PASSWORD} image: dockerfile/redis name: master ports: - containerPort: 6379 protocol: TCP parameters: - description: Password used for Redis authentication from: '[A-Z0-9]{8}' generate: expression name: REDIS_PASSWORD labels: redis: master 3.1.7.1. Writing the template description The template description informs you what the template does and helps you find it when searching in the web console. Additional metadata beyond the template name is optional, but useful to have. In addition to general descriptive information, the metadata also includes a set of tags. Useful tags include the name of the language the template is related to for example, Java, PHP, Ruby, and so on. The following is an example of template description metadata: kind: Template apiVersion: template.openshift.io/v1 metadata: name: cakephp-mysql-example 1 annotations: openshift.io/display-name: "CakePHP MySQL Example (Ephemeral)" 2 description: >- An example CakePHP application with a MySQL database. For more information about using this template, including OpenShift considerations, see https://github.com/sclorg/cakephp-ex/blob/master/README.md. WARNING: Any data stored will be lost upon pod destruction. Only use this template for testing." 3 openshift.io/long-description: >- This template defines resources needed to develop a CakePHP application, including a build configuration, application DeploymentConfig, and database DeploymentConfig. The database is stored in non-persistent storage, so this configuration should be used for experimental purposes only. 4 tags: "quickstart,php,cakephp" 5 iconClass: icon-php 6 openshift.io/provider-display-name: "Red Hat, Inc." 7 openshift.io/documentation-url: "https://github.com/sclorg/cakephp-ex" 8 openshift.io/support-url: "https://access.redhat.com" 9 message: "Your admin credentials are USD{ADMIN_USERNAME}:USD{ADMIN_PASSWORD}" 10 1 The unique name of the template. 2 A brief, user-friendly name, which can be employed by user interfaces. 3 A description of the template. 
Include enough detail that users understand what is being deployed and any caveats they must know before deploying. It should also provide links to additional information, such as a README file. Newlines can be included to create paragraphs. 4 Additional template description. This may be displayed by the service catalog, for example. 5 Tags to be associated with the template for searching and grouping. Add tags that include it into one of the provided catalog categories. Refer to the id and categoryAliases in CATALOG_CATEGORIES in the console constants file. The categories can also be customized for the whole cluster. 6 An icon to be displayed with your template in the web console. Example 3.1. Available icons icon-3scale icon-aerogear icon-amq icon-angularjs icon-ansible icon-apache icon-beaker icon-camel icon-capedwarf icon-cassandra icon-catalog-icon icon-clojure icon-codeigniter icon-cordova icon-datagrid icon-datavirt icon-debian icon-decisionserver icon-django icon-dotnet icon-drupal icon-eap icon-elastic icon-erlang icon-fedora icon-freebsd icon-git icon-github icon-gitlab icon-glassfish icon-go-gopher icon-golang icon-grails icon-hadoop icon-haproxy icon-helm icon-infinispan icon-jboss icon-jenkins icon-jetty icon-joomla icon-jruby icon-js icon-knative icon-kubevirt icon-laravel icon-load-balancer icon-mariadb icon-mediawiki icon-memcached icon-mongodb icon-mssql icon-mysql-database icon-nginx icon-nodejs icon-openjdk icon-openliberty icon-openshift icon-openstack icon-other-linux icon-other-unknown icon-perl icon-phalcon icon-php icon-play iconpostgresql icon-processserver icon-python icon-quarkus icon-rabbitmq icon-rails icon-redhat icon-redis icon-rh-integration icon-rh-spring-boot icon-rh-tomcat icon-ruby icon-scala icon-serverlessfx icon-shadowman icon-spring-boot icon-spring icon-sso icon-stackoverflow icon-suse icon-symfony icon-tomcat icon-ubuntu icon-vertx icon-wildfly icon-windows icon-wordpress icon-xamarin icon-zend 7 The name of the person or organization providing the template. 8 A URL referencing further documentation for the template. 9 A URL where support can be obtained for the template. 10 An instructional message that is displayed when this template is instantiated. This field should inform the user how to use the newly created resources. Parameter substitution is performed on the message before being displayed so that generated credentials and other parameters can be included in the output. Include links to any -steps documentation that users should follow. 3.1.7.2. Writing template labels Templates can include a set of labels. These labels are added to each object created when the template is instantiated. Defining a label in this way makes it easy for users to find and manage all the objects created from a particular template. The following is an example of template object labels: kind: "Template" apiVersion: "v1" ... labels: template: "cakephp-mysql-example" 1 app: "USD{NAME}" 2 1 A label that is applied to all objects created from this template. 2 A parameterized label that is also applied to all objects created from this template. Parameter expansion is carried out on both label keys and values. 3.1.7.3. Writing template parameters Parameters allow a value to be supplied by you or generated when the template is instantiated. Then, that value is substituted wherever the parameter is referenced. References can be defined in any field in the objects list field. 
This is useful for generating random passwords or allowing you to supply a hostname or other user-specific value that is required to customize the template. Parameters can be referenced in two ways: As a string value by placing values in the form USD{PARAMETER_NAME} in any string field in the template. As a JSON or YAML value by placing values in the form USD{{PARAMETER_NAME}} in place of any field in the template. When using the USD{PARAMETER_NAME} syntax, multiple parameter references can be combined in a single field and the reference can be embedded within fixed data, such as "http://USD{PARAMETER_1}USD{PARAMETER_2}" . Both parameter values are substituted and the resulting value is a quoted string. When using the USD{{PARAMETER_NAME}} syntax only a single parameter reference is allowed and leading and trailing characters are not permitted. The resulting value is unquoted unless, after substitution is performed, the result is not a valid JSON object. If the result is not a valid JSON value, the resulting value is quoted and treated as a standard string. A single parameter can be referenced multiple times within a template and it can be referenced using both substitution syntaxes within a single template. A default value can be provided, which is used if you do not supply a different value: The following is an example of setting an explicit value as the default value: parameters: - name: USERNAME description: "The user name for Joe" value: joe Parameter values can also be generated based on rules specified in the parameter definition, for example generating a parameter value: parameters: - name: PASSWORD description: "The random user password" generate: expression from: "[a-zA-Z0-9]{12}" In the example, processing generates a random password 12 characters long consisting of all upper and lowercase alphabet letters and numbers. The syntax available is not a full regular expression syntax. However, you can use \w , \d , \a , and \A modifiers: [\w]{10} produces 10 alphabet characters, numbers, and underscores. This follows the PCRE standard and is equal to [a-zA-Z0-9_]{10} . [\d]{10} produces 10 numbers. This is equal to [0-9]{10} . [\a]{10} produces 10 alphabetical characters. This is equal to [a-zA-Z]{10} . [\A]{10} produces 10 punctuation or symbol characters. This is equal to [~!@#USD%\^&*()\-_+={}\[\]\\|<,>.?/"';:`]{10} . Note Depending on if the template is written in YAML or JSON, and the type of string that the modifier is embedded within, you might need to escape the backslash with a second backslash. 
The following examples are equivalent: Example YAML template with a modifier parameters: - name: singlequoted_example generate: expression from: '[\A]{10}' - name: doublequoted_example generate: expression from: "[\\A]{10}" Example JSON template with a modifier { "parameters": [ { "name": "json_example", "generate": "expression", "from": "[\\A]{10}" } ] } Here is an example of a full template with parameter definitions and references: kind: Template apiVersion: template.openshift.io/v1 metadata: name: my-template objects: - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: cakephp-mysql-example annotations: description: Defines how to build the application spec: source: type: Git git: uri: "USD{SOURCE_REPOSITORY_URL}" 1 ref: "USD{SOURCE_REPOSITORY_REF}" contextDir: "USD{CONTEXT_DIR}" - kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: frontend spec: replicas: "USD{{REPLICA_COUNT}}" 2 parameters: - name: SOURCE_REPOSITORY_URL 3 displayName: Source Repository URL 4 description: The URL of the repository with your application source code 5 value: https://github.com/sclorg/cakephp-ex.git 6 required: true 7 - name: GITHUB_WEBHOOK_SECRET description: A secret string used to configure the GitHub webhook generate: expression 8 from: "[a-zA-Z0-9]{40}" 9 - name: REPLICA_COUNT description: Number of replicas to run value: "2" required: true message: "... The GitHub webhook secret is USD{GITHUB_WEBHOOK_SECRET} ..." 10 1 This value is replaced with the value of the SOURCE_REPOSITORY_URL parameter when the template is instantiated. 2 This value is replaced with the unquoted value of the REPLICA_COUNT parameter when the template is instantiated. 3 The name of the parameter. This value is used to reference the parameter within the template. 4 The user-friendly name for the parameter. This is displayed to users. 5 A description of the parameter. Provide more detailed information for the purpose of the parameter, including any constraints on the expected value. Descriptions should use complete sentences to follow the console's text standards. Do not make this a duplicate of the display name. 6 A default value for the parameter which is used if you do not override the value when instantiating the template. Avoid using default values for things like passwords, instead use generated parameters in combination with secrets. 7 Indicates this parameter is required, meaning you cannot override it with an empty value. If the parameter does not provide a default or generated value, you must supply a value. 8 A parameter which has its value generated. 9 The input to the generator. In this case, the generator produces a 40 character alphanumeric value including upper and lowercase characters. 10 Parameters can be included in the template message. This informs you about generated values. 3.1.7.4. Writing the template object list The main portion of the template is the list of objects which is created when the template is instantiated. This can be any valid API object, such as a build configuration, deployment configuration, or service. The object is created exactly as defined here, with any parameter values substituted in prior to creation. The definition of these objects can reference parameters defined earlier. 
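As a brief illustration of how objects in the list can reference parameters, the following fragment shows a Secret that consumes a generated parameter. This sketch is not part of the example above, and the object and parameter names are hypothetical:

objects:
- kind: Secret
  apiVersion: v1
  metadata:
    name: database-secret
  stringData:
    database-password: "USD{DATABASE_PASSWORD}"
parameters:
- name: DATABASE_PASSWORD
  description: Generated database password
  generate: expression
  from: "[a-zA-Z0-9]{16}"

When the template is instantiated, the generated value replaces USD{DATABASE_PASSWORD} in the Secret before the object is created.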
The following is an example of an object list: kind: "Template" apiVersion: "v1" metadata: name: my-template objects: - kind: "Service" 1 apiVersion: "v1" metadata: name: "cakephp-mysql-example" annotations: description: "Exposes and load balances the application pods" spec: ports: - name: "web" port: 8080 targetPort: 8080 selector: name: "cakephp-mysql-example" 1 The definition of a service, which is created by this template. Note If an object definition metadata includes a fixed namespace field value, the field is stripped out of the definition during template instantiation. If the namespace field contains a parameter reference, normal parameter substitution is performed and the object is created in whatever namespace the parameter substitution resolved the value to, assuming the user has permission to create objects in that namespace. 3.1.7.5. Marking a template as bindable The Template Service Broker advertises one service in its catalog for each template object of which it is aware. By default, each of these services is advertised as being bindable, meaning an end user is permitted to bind against the provisioned service. Procedure Template authors can prevent end users from binding against services provisioned from a given template. Prevent end user from binding against services provisioned from a given template by adding the annotation template.openshift.io/bindable: "false" to the template. 3.1.7.6. Exposing template object fields Template authors can indicate that fields of particular objects in a template should be exposed. The Template Service Broker recognizes exposed fields on ConfigMap , Secret , Service , and Route objects, and returns the values of the exposed fields when a user binds a service backed by the broker. To expose one or more fields of an object, add annotations prefixed by template.openshift.io/expose- or template.openshift.io/base64-expose- to the object in the template. Each annotation key, with its prefix removed, is passed through to become a key in a bind response. Each annotation value is a Kubernetes JSONPath expression, which is resolved at bind time to indicate the object field whose value should be returned in the bind response. Note Bind response key-value pairs can be used in other parts of the system as environment variables. Therefore, it is recommended that every annotation key with its prefix removed should be a valid environment variable name - beginning with a character A-Z , a-z , or _ , and being followed by zero or more characters A-Z , a-z , 0-9 , or _ . Note Unless escaped with a backslash, Kubernetes' JSONPath implementation interprets characters such as . , @ , and others as metacharacters, regardless of their position in the expression. Therefore, for example, to refer to a ConfigMap datum named my.key , the required JSONPath expression would be {.data['my\.key']} . Depending on how the JSONPath expression is then written in YAML, an additional backslash might be required, for example "{.data['my\\.key']}" . 
The following is an example of different objects' fields being exposed: kind: Template apiVersion: template.openshift.io/v1 metadata: name: my-template objects: - kind: ConfigMap apiVersion: v1 metadata: name: my-template-config annotations: template.openshift.io/expose-username: "{.data['my\\.username']}" data: my.username: foo - kind: Secret apiVersion: v1 metadata: name: my-template-config-secret annotations: template.openshift.io/base64-expose-password: "{.data['password']}" stringData: password: <password> - kind: Service apiVersion: v1 metadata: name: my-template-service annotations: template.openshift.io/expose-service_ip_port: "{.spec.clusterIP}:{.spec.ports[?(.name==\"web\")].port}" spec: ports: - name: "web" port: 8080 - kind: Route apiVersion: route.openshift.io/v1 metadata: name: my-template-route annotations: template.openshift.io/expose-uri: "http://{.spec.host}{.spec.path}" spec: path: mypath An example response to a bind operation given the above partial template follows: { "credentials": { "username": "foo", "password": "YmFy", "service_ip_port": "172.30.12.34:8080", "uri": "http://route-test.router.default.svc.cluster.local/mypath" } } Procedure Use the template.openshift.io/expose- annotation to return the field value as a string. This is convenient, although it does not handle arbitrary binary data. If you want to return binary data, use the template.openshift.io/base64-expose- annotation instead to base64 encode the data before it is returned. 3.1.7.7. Waiting for template readiness Template authors can indicate that certain objects within a template should be waited for before a template instantiation by the service catalog, Template Service Broker, or TemplateInstance API is considered complete. To use this feature, mark one or more objects of kind Build , BuildConfig , Deployment , DeploymentConfig , Job , or StatefulSet in a template with the following annotation: "template.alpha.openshift.io/wait-for-ready": "true" Template instantiation is not complete until all objects marked with the annotation report ready. Similarly, if any of the annotated objects report failed, or if the template fails to become ready within a fixed timeout of one hour, the template instantiation fails. For the purposes of instantiation, readiness and failure of each object kind are defined as follows: Kind Readiness Failure Build Object reports phase complete. Object reports phase canceled, error, or failed. BuildConfig Latest associated build object reports phase complete. Latest associated build object reports phase canceled, error, or failed. Deployment Object reports new replica set and deployment available. This honors readiness probes defined on the object. Object reports progressing condition as false. DeploymentConfig Object reports new replication controller and deployment available. This honors readiness probes defined on the object. Object reports progressing condition as false. Job Object reports completion. Object reports that one or more failures have occurred. StatefulSet Object reports all replicas ready. This honors readiness probes defined on the object. Not applicable. The following is an example template extract, which uses the wait-for-ready annotation. Further examples can be found in the OpenShift Container Platform quick start templates. kind: Template apiVersion: template.openshift.io/v1 metadata: name: my-template objects: - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: ... 
annotations: # wait-for-ready used on BuildConfig ensures that template instantiation # will fail immediately if build fails template.alpha.openshift.io/wait-for-ready: "true" spec: ... - kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: ... annotations: template.alpha.openshift.io/wait-for-ready: "true" spec: ... - kind: Service apiVersion: v1 metadata: name: ... spec: ... Additional recommendations Set memory, CPU, and storage default sizes to make sure your application is given enough resources to run smoothly. Avoid referencing the latest tag from images if that tag is used across major versions. This can cause running applications to break when new images are pushed to that tag. A good template builds and deploys cleanly without requiring modifications after the template is deployed. 3.1.7.8. Creating a template from existing objects Rather than writing an entire template from scratch, you can export existing objects from your project in YAML form, and then modify the YAML from there by adding parameters and other customizations as template form. Procedure Export objects in a project in YAML form: USD oc get -o yaml all > <yaml_filename> You can also substitute a particular resource type or multiple resources instead of all . Run oc get -h for more examples. The object types included in oc get -o yaml all are: BuildConfig Build DeploymentConfig ImageStream Pod ReplicationController Route Service Note Using the all alias is not recommended because the contents might vary across different clusters and versions. Instead, specify all required resources. 3.2. Creating applications by using the Developer perspective The Developer perspective in the web console provides you the following options from the +Add view to create applications and associated services and deploy them on OpenShift Container Platform: Getting started resources : Use these resources to help you get started with Developer Console. You can choose to hide the header using the Options menu . Creating applications using samples : Use existing code samples to get started with creating applications on the OpenShift Container Platform. Build with guided documentation : Follow the guided documentation to build applications and familiarize yourself with key concepts and terminologies. Explore new developer features : Explore the new features and resources within the Developer perspective. Developer catalog : Explore the Developer Catalog to select the required applications, services, or source to image builders, and then add it to your project. All Services : Browse the catalog to discover services across OpenShift Container Platform. Database : Select the required database service and add it to your application. Operator Backed : Select and deploy the required Operator-managed service. Helm chart : Select the required Helm chart to simplify deployment of applications and services. Devfile : Select a devfile from the Devfile registry to declaratively define a development environment. Event Source : Select an event source to register interest in a class of events from a particular system. Note The Managed services option is also available if the RHOAS Operator is installed. Git repository : Import an existing codebase, Devfile, or Dockerfile from your Git repository using the From Git , From Devfile , or From Dockerfile options respectively, to build and deploy an application on OpenShift Container Platform. 
Container images : Use existing images from an image stream or registry to deploy it on to the OpenShift Container Platform. Pipelines : Use Tekton pipeline to create CI/CD pipelines for your software delivery process on the OpenShift Container Platform. Serverless : Explore the Serverless options to create, build, and deploy stateless and serverless applications on the OpenShift Container Platform. Channel : Create a Knative channel to create an event forwarding and persistence layer with in-memory and reliable implementations. Samples : Explore the available sample applications to create, build, and deploy an application quickly. Quick Starts : Explore the quick start options to create, import, and run applications with step-by-step instructions and tasks. From Local Machine : Explore the From Local Machine tile to import or upload files on your local machine for building and deploying applications easily. Import YAML : Upload a YAML file to create and define resources for building and deploying applications. Upload JAR file : Upload a JAR file to build and deploy Java applications. Share my Project : Use this option to add or remove users to a project and provide accessibility options to them. Helm Chart repositories : Use this option to add Helm Chart repositories in a namespace. Re-ordering of resources : Use these resources to re-order pinned resources added to your navigation pane. The drag-and-drop icon is displayed on the left side of the pinned resource when you hover over it in the navigation pane. The dragged resource can be dropped only in the section where it resides. Note that certain options, such as Pipelines , Event Source , and Import Virtual Machines , are displayed only when the OpenShift Pipelines Operator , OpenShift Serverless Operator , and OpenShift Virtualization Operator are installed, respectively. 3.2.1. Prerequisites To create applications using the Developer perspective ensure that: You have logged in to the web console . You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. To create serverless applications, in addition to the preceding prerequisites, ensure that: You have installed the OpenShift Serverless Operator . You have created a KnativeServing resource in the knative-serving namespace . 3.2.2. Creating sample applications You can use the sample applications in the +Add flow of the Developer perspective to create, build, and deploy applications quickly. Prerequisites You have logged in to the OpenShift Container Platform web console and are in the Developer perspective. Procedure In the +Add view, click the Samples tile to see the Samples page. On the Samples page, select one of the available sample applications to see the Create Sample Application form. In the Create Sample Application Form : In the Name field, the deployment name is displayed by default. You can modify this name as required. In the Builder Image Version , a builder image is selected by default. You can modify this image version by using the Builder Image Version drop-down list. A sample Git repository URL is added by default. Click Create to create the sample application. The build status of the sample application is displayed on the Topology view. After the sample application is created, you can see the deployment added to the application. 3.2.3. 
Creating applications by using Quick Starts The Quick Starts page shows you how to create, import, and run applications on OpenShift Container Platform, with step-by-step instructions and tasks. Prerequisites You have logged in to the OpenShift Container Platform web console and are in the Developer perspective. Procedure In the +Add view, click the Getting Started resources Build with guided documentation View all quick starts link to view the Quick Starts page. In the Quick Starts page, click the tile for the quick start that you want to use. Click Start to begin the quick start. Perform the steps that are displayed. 3.2.4. Importing a codebase from Git to create an application You can use the Developer perspective to create, build, and deploy an application on OpenShift Container Platform using an existing codebase in GitHub. The following procedure walks you through the From Git option in the Developer perspective to create an application. Procedure In the +Add view, click From Git in the Git Repository tile to see the Import from git form. In the Git section, enter the Git repository URL for the codebase you want to use to create an application. For example, enter the URL of this sample Node.js application https://github.com/sclorg/nodejs-ex . The URL is then validated. Optional: You can click Show Advanced Git Options to add details such as: Git Reference to point to code in a specific branch, tag, or commit to be used to build the application. Context Dir to specify the subdirectory for the application source code you want to use to build the application. Source Secret to create a Secret Name with credentials for pulling your source code from a private repository. Optional: You can import a Devfile , a Dockerfile , Builder Image , or a Serverless Function through your Git repository to further customize your deployment. If your Git repository contains a Devfile , a Dockerfile , a Builder Image , or a func.yaml , it is automatically detected and populated on the respective path fields. If a Devfile , a Dockerfile , or a Builder Image are detected in the same repository, the Devfile is selected by default. If func.yaml is detected in the Git repository, the Import Strategy changes to Serverless Function . Alternatively, you can create a serverless function by clicking Create Serverless function in the +Add view using the Git repository URL. To edit the file import type and select a different strategy, click Edit import strategy option. If multiple Devfiles , a Dockerfiles , or a Builder Images are detected, to import a specific instance, specify the respective paths relative to the context directory. After the Git URL is validated, the recommended builder image is selected and marked with a star. If the builder image is not auto-detected, select a builder image. For the https://github.com/sclorg/nodejs-ex Git URL, by default the Node.js builder image is selected. Optional: Use the Builder Image Version drop-down to specify a version. Optional: Use the Edit import strategy to select a different strategy. Optional: For the Node.js builder image, use the Run command field to override the command to run the application. In the General section: In the Application field, enter a unique name for the application grouping, for example, myapp . Ensure that the application name is unique in a namespace. The Name field to identify the resources created for this application is automatically populated based on the Git repository URL if there are no existing applications. 
If there are existing applications, you can choose to deploy the component within an existing application, create a new application, or keep the component unassigned. Note The resource name must be unique in a namespace. Modify the resource name if you get an error. In the Resources section, select: Deployment , to create an application in plain Kubernetes style. Deployment Config , to create an OpenShift Container Platform style application. Serverless Deployment , to create a Knative service. Note To set the default resource preference for importing an application, go to User Preferences Applications Resource type field. The Serverless Deployment option is displayed in the Import from Git form only if the OpenShift Serverless Operator is installed in your cluster. The Resources section is not available while creating a serverless function. For further details, refer to the OpenShift Serverless documentation. In the Pipelines section, select Add Pipeline , and then click Show Pipeline Visualization to see the pipeline for the application. A default pipeline is selected, but you can choose the pipeline you want from the list of available pipelines for the application. Note The Add pipeline checkbox is checked and Configure PAC is selected by default if the following criterias are fulfilled: Pipeline operator is installed pipelines-as-code is enabled .tekton directory is detected in the Git repository Add a webhook to your repository. If Configure PAC is checked and the GitHub App is set up, you can see the Use GitHub App and Setup a webhook options. If GitHub App is not set up, you can only see the Setup a webhook option: Go to Settings Webhooks and click Add webhook . Set the Payload URL to the Pipelines as Code controller public URL. Select the content type as application/json . Add a webhook secret and note it in an alternate location. With openssl installed on your local machine, generate a random secret. Click Let me select individual events and select these events: Commit comments , Issue comments , Pull request , and Pushes . Click Add webhook . Optional: In the Advanced Options section, the Target port and the Create a route to the application is selected by default so that you can access your application using a publicly available URL. If your application does not expose its data on the default public port, 80, clear the check box, and set the target port number you want to expose. Optional: You can use the following advanced options to further customize your application: Routing By clicking the Routing link, you can perform the following actions: Customize the hostname for the route. Specify the path the router watches. Select the target port for the traffic from the drop-down list. Secure your route by selecting the Secure Route check box. Select the required TLS termination type and set a policy for insecure traffic from the respective drop-down lists. Note For serverless applications, the Knative service manages all the routing options above. However, you can customize the target port for traffic, if required. If the target port is not specified, the default port of 8080 is used. Domain mapping If you are creating a Serverless Deployment , you can add a custom domain mapping to the Knative service during creation. In the Advanced options section, click Show advanced Routing options . If the domain mapping CR that you want to map to the service already exists, you can select it from the Domain mapping drop-down menu. 
If you want to create a new domain mapping CR, type the domain name into the box, and select the Create option. For example, if you type in example.com , the Create option is Create "example.com" . Health Checks Click the Health Checks link to add Readiness, Liveness, and Startup probes to your application. All the probes have prepopulated default data; you can add the probes with the default data or customize it as required. To customize the health probes: Click Add Readiness Probe , if required, modify the parameters to check if the container is ready to handle requests, and select the check mark to add the probe. Click Add Liveness Probe , if required, modify the parameters to check if a container is still running, and select the check mark to add the probe. Click Add Startup Probe , if required, modify the parameters to check if the application within the container has started, and select the check mark to add the probe. For each of the probes, you can specify the request type - HTTP GET , Container Command , or TCP Socket , from the drop-down list. The form changes as per the selected request type. You can then modify the default values for the other parameters, such as the success and failure thresholds for the probe, number of seconds before performing the first probe after the container starts, frequency of the probe, and the timeout value. Build Configuration and Deployment Click the Build Configuration and Deployment links to see the respective configuration options. Some options are selected by default; you can customize them further by adding the necessary triggers and environment variables. For serverless applications, the Deployment option is not displayed as the Knative configuration resource maintains the desired state for your deployment instead of a DeploymentConfig resource. Scaling Click the Scaling link to define the number of pods or instances of the application you want to deploy initially. If you are creating a serverless deployment, you can also configure the following settings: Min Pods determines the lower limit for the number of pods that must be running at any given time for a Knative service. This is also known as the minScale setting. Max Pods determines the upper limit for the number of pods that can be running at any given time for a Knative service. This is also known as the maxScale setting. Concurrency target determines the number of concurrent requests desired for each instance of the application at a given time. Concurrency limit determines the limit for the number of concurrent requests allowed for each instance of the application at a given time. Concurrency utilization determines the percentage of the concurrent requests limit that must be met before Knative scales up additional pods to handle additional traffic. Autoscale window defines the time window over which metrics are averaged to provide input for scaling decisions when the autoscaler is not in panic mode. A service is scaled-to-zero if no requests are received during this window. The default duration for the autoscale window is 60s . This is also known as the stable window. Resource Limit Click the Resource Limit link to set the amount of CPU and Memory resources a container is guaranteed or allowed to use when running. Labels Click the Labels link to add custom labels to your application. Click Create to create the application and a success notification is displayed. You can see the build status of the application in the Topology view. 3.2.5. 
Creating applications by deploying container image You can use an external image registry or an image stream tag from an internal registry to deploy an application on your cluster. Prerequisites You have logged in to the OpenShift Container Platform web console and are in the Developer perspective. Procedure In the +Add view, click Container images to view the Deploy Images page. In the Image section: Select Image name from external registry to deploy an image from a public or a private registry, or select Image stream tag from internal registry to deploy an image from an internal registry. Select an icon for your image in the Runtime icon tab. In the General section: In the Application name field, enter a unique name for the application grouping. In the Name field, enter a unique name to identify the resources created for this component. In the Resource type section, select the resource type to generate: Select Deployment to enable declarative updates for Pod and ReplicaSet objects. Select DeploymentConfig to define the template for a Pod object, and manage deploying new images and configuration sources. Select Serverless Deployment to enable scaling to zero when idle. Click Create . You can view the build status of the application in the Topology view. 3.2.6. Deploying a Java application by uploading a JAR file You can use the web console Developer perspective to upload a JAR file by using the following options: Navigate to the +Add view of the Developer perspective, and click Upload JAR file in the From Local Machine tile. Browse and select your JAR file, or drag a JAR file to deploy your application. Navigate to the Topology view and use the Upload JAR file option, or drag a JAR file to deploy your application. Use the in-context menu in the Topology view, and then use the Upload JAR file option to upload your JAR file to deploy your application. Prerequisites The Cluster Samples Operator must be installed by a cluster administrator. You have access to the OpenShift Container Platform web console and are in the Developer perspective. Procedure In the Topology view, right-click anywhere to view the Add to Project menu. Hover over the Add to Project menu to see the menu options, and then select the Upload JAR file option to see the Upload JAR file form. Alternatively, you can drag the JAR file into the Topology view. In the JAR file field, browse for the required JAR file on your local machine and upload it. Alternatively, you can drag the JAR file on to the field. A toast alert is displayed at the top right if an incompatible file type is dragged into the Topology view. A field error is displayed if an incompatible file type is dropped on the field in the upload form. The runtime icon and builder image are selected by default. If a builder image is not auto-detected, select a builder image. If required, you can change the version using the Builder Image Version drop-down list. Optional: In the Application Name field, enter a unique name for your application to use for resource labelling. In the Name field, enter a unique component name for the associated resources. Optional: Use the Resource type drop-down list to change the resource type. In the Advanced options menu, click Create a Route to the Application to configure a public URL for your deployed application. Click Create to deploy the application. A toast notification is shown to notify you that the JAR file is being uploaded. The toast notification also includes a link to view the build logs. 
Note If you attempt to close the browser tab while the build is running, a web alert is displayed. After the JAR file is uploaded and the application is deployed, you can view the application in the Topology view. 3.2.7. Using the Devfile registry to access devfiles You can use the devfiles in the +Add flow of the Developer perspective to create an application. The +Add flow provides a complete integration with the devfile community registry . A devfile is a portable YAML file that describes your development environment without needing to configure it from scratch. Using the Devfile registry , you can use a preconfigured devfile to create an application. Procedure Navigate to Developer Perspective +Add Developer Catalog All Services . A list of all the available services in the Developer Catalog is displayed. Under Type , click Devfiles to browse for devfiles that support a particular language or framework. Alternatively, you can use the keyword filter to search for a particular devfile using their name, tag, or description. Click the devfile you want to use to create an application. The devfile tile displays the details of the devfile, including the name, description, provider, and the documentation of the devfile. Click Create to create an application and view the application in the Topology view. 3.2.8. Using the Developer Catalog to add services or components to your application You use the Developer Catalog to deploy applications and services based on Operator backed services such as Databases, Builder Images, and Helm Charts. The Developer Catalog contains a collection of application components, services, event sources, or source-to-image builders that you can add to your project. Cluster administrators can customize the content made available in the catalog. Procedure In the Developer perspective, navigate to the +Add view and from the Developer Catalog tile, click All Services to view all the available services in the Developer Catalog . Under All Services , select the kind of service or the component you need to add to your project. For this example, select Databases to list all the database services and then click MariaDB to see the details for the service. Click Instantiate Template to see an automatically populated template with details for the MariaDB service, and then click Create to create and view the MariaDB service in the Topology view. Figure 3.1. MariaDB in Topology 3.2.9. Additional resources For more information about Knative routing settings for OpenShift Serverless, see Routing . For more information about domain mapping settings for OpenShift Serverless, see Configuring a custom domain for a Knative service . For more information about Knative autoscaling settings for OpenShift Serverless, see Autoscaling . For more information about adding a new user to a project, see Working with projects . For more information about creating a Helm Chart repository, see Creating Helm Chart repositories . 3.3. Creating applications from installed Operators Operators are a method of packaging, deploying, and managing a Kubernetes application. You can create applications on OpenShift Container Platform using Operators that have been installed by a cluster administrator. This guide walks developers through an example of creating applications from an installed Operator using the OpenShift Container Platform web console. Additional resources See the Operators guide for more on how Operators work and how the Operator Lifecycle Manager is integrated in OpenShift Container Platform. 
3.3.1. Creating an etcd cluster using an Operator This procedure walks through creating a new etcd cluster using the etcd Operator, managed by Operator Lifecycle Manager (OLM). Prerequisites Access to an OpenShift Container Platform 4.18 cluster. The etcd Operator already installed cluster-wide by an administrator. Procedure Create a new project in the OpenShift Container Platform web console for this procedure. This example uses a project called my-etcd . Navigate to the Operators Installed Operators page. The Operators that have been installed to the cluster by the cluster administrator and are available for use are shown here as a list of cluster service versions (CSVs). CSVs are used to launch and manage the software provided by the Operator. Tip You can get this list from the CLI using: USD oc get csv On the Installed Operators page, click the etcd Operator to view more details and available actions. As shown under Provided APIs , this Operator makes available three new resource types, including one for an etcd Cluster (the EtcdCluster resource). These objects work similar to the built-in native Kubernetes ones, such as Deployment or ReplicaSet , but contain logic specific to managing etcd. Create a new etcd cluster: In the etcd Cluster API box, click Create instance . The page allows you to make any modifications to the minimal starting template of an EtcdCluster object, such as the size of the cluster. For now, click Create to finalize. This triggers the Operator to start up the pods, services, and other components of the new etcd cluster. Click the example etcd cluster, then click the Resources tab to see that your project now contains a number of resources created and configured automatically by the Operator. Verify that a Kubernetes service has been created that allows you to access the database from other pods in your project. All users with the edit role in a given project can create, manage, and delete application instances (an etcd cluster, in this example) managed by Operators that have already been created in the project, in a self-service manner, just like a cloud service. If you want to enable additional users with this ability, project administrators can add the role using the following command: USD oc policy add-role-to-user edit <user> -n <target_project> You now have an etcd cluster that will react to failures and rebalance data as pods become unhealthy or are migrated between nodes in the cluster. Most importantly, cluster administrators or developers with proper access can now easily use the database with their applications. 3.4. Creating applications by using the CLI You can create an OpenShift Container Platform application from components that include source or binary code, images, and templates by using the OpenShift Container Platform CLI. The set of objects created by new-app depends on the artifacts passed as input: source repositories, images, or templates. 3.4.1. Creating an application from source code With the new-app command you can create applications from source code in a local or remote Git repository. The new-app command creates a build configuration, which itself creates a new application image from your source code. The new-app command typically also creates a Deployment object to deploy the new image, and a service to provide load-balanced access to the deployment running your image. 
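For example, to see the set of objects generated from one of the sample repositories used elsewhere in this guide, you can run new-app and then query by the app label that new-app typically applies to everything it creates. The exact object kinds can vary with your cluster defaults (for example, Deployment versus DeploymentConfig), so treat this as an illustrative sketch rather than guaranteed output:

oc new-app https://github.com/sclorg/cakephp-ex
oc get buildconfig,imagestream,deployment,service -l app=cakephp-ex

Reviewing this output is a quick way to confirm what was created before customizing the behavior as described in the following sections.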
OpenShift Container Platform automatically detects whether the pipeline, source, or docker build strategy should be used, and in the case of source build, detects an appropriate language builder image. 3.4.1.1. Local To create an application from a Git repository in a local directory: USD oc new-app /<path to source code> Note If you use a local Git repository, the repository must have a remote named origin that points to a URL that is accessible by the OpenShift Container Platform cluster. If there is no recognized remote, running the new-app command will create a binary build. 3.4.1.2. Remote To create an application from a remote Git repository: USD oc new-app https://github.com/sclorg/cakephp-ex To create an application from a private remote Git repository: USD oc new-app https://github.com/youruser/yourprivaterepo --source-secret=yoursecret Note If you use a private remote Git repository, you can use the --source-secret flag to specify an existing source clone secret that will get injected into your build config to access the repository. You can use a subdirectory of your source code repository by specifying a --context-dir flag. To create an application from a remote Git repository and a context subdirectory: USD oc new-app https://github.com/sclorg/s2i-ruby-container.git \ --context-dir=2.0/test/puma-test-app Also, when specifying a remote URL, you can specify a Git branch to use by appending #<branch_name> to the end of the URL: USD oc new-app https://github.com/openshift/ruby-hello-world.git#beta4 3.4.1.3. Build strategy detection OpenShift Container Platform automatically determines which build strategy to use by detecting certain files: If a Jenkins file exists in the root or specified context directory of the source repository when creating a new application, OpenShift Container Platform generates a pipeline build strategy. Note The pipeline build strategy is deprecated; consider using Red Hat OpenShift Pipelines instead. If a Dockerfile exists in the root or specified context directory of the source repository when creating a new application, OpenShift Container Platform generates a docker build strategy. If neither a Jenkins file nor a Dockerfile is detected, OpenShift Container Platform generates a source build strategy. Override the automatically detected build strategy by setting the --strategy flag to docker , pipeline , or source . USD oc new-app /home/user/code/myapp --strategy=docker Note The oc command requires that files containing build sources are available in a remote Git repository. For all source builds, you must use git remote -v . 3.4.1.4. Language detection If you use the source build strategy, new-app attempts to determine the language builder to use by the presence of certain files in the root or specified context directory of the repository: Table 3.1. Languages detected by new-app Language Files dotnet project.json , *.csproj jee pom.xml nodejs app.json , package.json perl cpanfile , index.pl php composer.json , index.php python requirements.txt , setup.py ruby Gemfile , Rakefile , config.ru scala build.sbt golang Godeps , main.go After a language is detected, new-app searches the OpenShift Container Platform server for image stream tags that have a supports annotation matching the detected language, or an image stream that matches the name of the detected language. If a match is not found, new-app searches the Docker Hub registry for an image that matches the detected language based on name. 
You can override the image the builder uses for a particular source repository by specifying the image, either an image stream or container specification, and the repository with a ~ as a separator. Note that if this is done, build strategy detection and language detection are not carried out. For example, to use the myproject/my-ruby imagestream with the source in a remote repository: USD oc new-app myproject/my-ruby~https://github.com/openshift/ruby-hello-world.git To use the openshift/ruby-20-centos7:latest container image stream with the source in a local repository: USD oc new-app openshift/ruby-20-centos7:latest~/home/user/code/my-ruby-app Note Language detection requires the Git client to be locally installed so that your repository can be cloned and inspected. If Git is not available, you can avoid the language detection step by specifying the builder image to use with your repository with the <image>~<repository> syntax. The -i <image> <repository> invocation requires that new-app attempt to clone repository to determine what type of artifact it is, so this will fail if Git is not available. The -i <image> --code <repository> invocation requires new-app clone repository to determine whether image should be used as a builder for the source code, or deployed separately, as in the case of a database image. 3.4.2. Creating an application from an image You can deploy an application from an existing image. Images can come from image streams in the OpenShift Container Platform server, images in a specific registry, or images in the local Docker server. The new-app command attempts to determine the type of image specified in the arguments passed to it. However, you can explicitly tell new-app whether the image is a container image using the --docker-image argument or an image stream using the -i|--image-stream argument. Note If you specify an image from your local Docker repository, you must ensure that the same image is available to the OpenShift Container Platform cluster nodes. 3.4.2.1. Docker Hub MySQL image Create an application from the Docker Hub MySQL image, for example: USD oc new-app mysql 3.4.2.2. Image in a private registry Create an application using an image in a private registry, specify the full container image specification: USD oc new-app myregistry:5000/example/myimage 3.4.2.3. Existing image stream and optional image stream tag Create an application from an existing image stream and optional image stream tag: USD oc new-app my-stream:v1 3.4.3. Creating an application from a template You can create an application from a previously stored template or from a template file, by specifying the name of the template as an argument. For example, you can store a sample application template and use it to create an application. Upload an application template to your current project's template library. The following example uploads an application template from a file called examples/sample-app/application-template-stibuild.json : USD oc create -f examples/sample-app/application-template-stibuild.json Then create a new application by referencing the application template. In this example, the template name is ruby-helloworld-sample : USD oc new-app ruby-helloworld-sample To create a new application by referencing a template file in your local file system, without first storing it in OpenShift Container Platform, use the -f|--file argument. For example: USD oc new-app -f examples/sample-app/application-template-stibuild.json 3.4.3.1. 
Template parameters When creating an application based on a template, use the -p|--param argument to set parameter values that are defined by the template: USD oc new-app ruby-helloworld-sample \ -p ADMIN_USERNAME=admin -p ADMIN_PASSWORD=mypassword You can store your parameters in a file, then use that file with --param-file when instantiating a template. If you want to read the parameters from standard input, use --param-file=- . The following is an example file called helloworld.params : ADMIN_USERNAME=admin ADMIN_PASSWORD=mypassword Reference the parameters in the file when instantiating a template: USD oc new-app ruby-helloworld-sample --param-file=helloworld.params 3.4.4. Modifying application creation The new-app command generates OpenShift Container Platform objects that build, deploy, and run the application that is created. Normally, these objects are created in the current project and assigned names that are derived from the input source repositories or the input images. However, with new-app you can modify this behavior. Table 3.2. new-app output objects Object Description BuildConfig A BuildConfig object is created for each source repository that is specified in the command line. The BuildConfig object specifies the strategy to use, the source location, and the build output location. ImageStreams For the BuildConfig object, two image streams are usually created. One represents the input image. With source builds, this is the builder image. With Docker builds, this is the FROM image. The second one represents the output image. If a container image was specified as input to new-app , then an image stream is created for that image as well. DeploymentConfig A DeploymentConfig object is created either to deploy the output of a build, or a specified image. The new-app command creates emptyDir volumes for all Docker volumes that are specified in containers included in the resulting DeploymentConfig object . Service The new-app command attempts to detect exposed ports in input images. It uses the lowest numeric exposed port to generate a service that exposes that port. To expose a different port, after new-app has completed, simply use the oc expose command to generate additional services. Other Other objects can be generated when instantiating templates, according to the template. 3.4.4.1. Specifying environment variables When generating applications from a template, source, or an image, you can use the -e|--env argument to pass environment variables to the application container at run time: USD oc new-app openshift/postgresql-92-centos7 \ -e POSTGRESQL_USER=user \ -e POSTGRESQL_DATABASE=db \ -e POSTGRESQL_PASSWORD=password The variables can also be read from file using the --env-file argument. The following is an example file called postgresql.env : POSTGRESQL_USER=user POSTGRESQL_DATABASE=db POSTGRESQL_PASSWORD=password Read the variables from the file: USD oc new-app openshift/postgresql-92-centos7 --env-file=postgresql.env Additionally, environment variables can be given on standard input by using --env-file=- : USD cat postgresql.env | oc new-app openshift/postgresql-92-centos7 --env-file=- Note Any BuildConfig objects created as part of new-app processing are not updated with environment variables passed with the -e|--env or --env-file argument. 3.4.4.2. 
Specifying build environment variables When generating applications from a template, source, or an image, you can use the --build-env argument to pass environment variables to the build container at run time: USD oc new-app openshift/ruby-23-centos7 \ --build-env HTTP_PROXY=http://myproxy.net:1337/ \ --build-env GEM_HOME=~/.gem The variables can also be read from a file using the --build-env-file argument. The following is an example file called ruby.env : HTTP_PROXY=http://myproxy.net:1337/ GEM_HOME=~/.gem Read the variables from the file: USD oc new-app openshift/ruby-23-centos7 --build-env-file=ruby.env Additionally, environment variables can be given on standard input by using --build-env-file=- : USD cat ruby.env | oc new-app openshift/ruby-23-centos7 --build-env-file=- 3.4.4.3. Specifying labels When generating applications from source, images, or templates, you can use the -l|--label argument to add labels to the created objects. Labels make it easy to collectively select, configure, and delete objects associated with the application. USD oc new-app https://github.com/openshift/ruby-hello-world -l name=hello-world 3.4.4.4. Viewing the output without creation To see a dry-run of running the new-app command, you can use the -o|--output argument with a yaml or json value. You can then use the output to preview the objects that are created or redirect it to a file that you can edit. After you are satisfied, you can use oc create to create the OpenShift Container Platform objects. To output new-app artifacts to a file, run the following: USD oc new-app https://github.com/openshift/ruby-hello-world \ -o yaml > myapp.yaml Edit the file: USD vi myapp.yaml Create a new application by referencing the file: USD oc create -f myapp.yaml 3.4.4.5. Creating objects with different names Objects created by new-app are normally named after the source repository, or the image used to generate them. You can set the name of the objects produced by adding a --name flag to the command: USD oc new-app https://github.com/openshift/ruby-hello-world --name=myapp 3.4.4.6. Creating objects in a different project Normally, new-app creates objects in the current project. However, you can create objects in a different project by using the -n|--namespace argument: USD oc new-app https://github.com/openshift/ruby-hello-world -n myproject 3.4.4.7. Creating multiple objects The new-app command allows creating multiple applications specifying multiple parameters to new-app . Labels specified in the command line apply to all objects created by the single command. Environment variables apply to all components created from source or images. To create an application from a source repository and a Docker Hub image: USD oc new-app https://github.com/openshift/ruby-hello-world mysql Note If a source code repository and a builder image are specified as separate arguments, new-app uses the builder image as the builder for the source code repository. If this is not the intent, specify the required builder image for the source using the ~ separator. 3.4.4.8. Grouping images and source in a single pod The new-app command allows deploying multiple images together in a single pod. To specify which images to group together, use the + separator. The --group command line argument can also be used to specify the images that should be grouped together. 
To group the image built from a source repository with other images, specify its builder image in the group: USD oc new-app ruby+mysql To deploy an image built from source and an external image together: USD oc new-app \ ruby~https://github.com/openshift/ruby-hello-world \ mysql \ --group=ruby+mysql 3.4.4.9. Searching for images, templates, and other inputs To search for images, templates, and other inputs for the oc new-app command, add the --search and --list flags. For example, to find all of the images or templates that include PHP: USD oc new-app --search php 3.4.4.10. Setting the import mode To set the import mode when using oc new-app , add the --import-mode flag. This flag can be appended with Legacy or PreserveOriginal , which provides users the option to create image streams using a single sub-manifest, or all manifests, respectively. USD oc new-app --image=registry.redhat.io/ubi8/httpd-24:latest --import-mode=Legacy --name=test USD oc new-app --image=registry.redhat.io/ubi8/httpd-24:latest --import-mode=PreserveOriginal --name=test 3.5. Creating applications using Ruby on Rails Ruby on Rails is a web framework written in Ruby. This guide covers using Rails 4 on OpenShift Container Platform. Warning Go through the whole tutorial to have an overview of all the steps necessary to run your application on the OpenShift Container Platform. If you experience a problem try reading through the entire tutorial and then going back to your issue. It can also be useful to review your steps to ensure that all the steps were run correctly. 3.5.1. Prerequisites Basic Ruby and Rails knowledge. Locally installed version of Ruby 2.0.0+, Rubygems, Bundler. Basic Git knowledge. Running instance of OpenShift Container Platform 4. Make sure that an instance of OpenShift Container Platform is running and is available. Also make sure that your oc CLI client is installed and the command is accessible from your command shell, so you can use it to log in using your email address and password. 3.5.2. Setting up the database Rails applications are almost always used with a database. For local development use the PostgreSQL database. Procedure Install the database: USD sudo yum install -y postgresql postgresql-server postgresql-devel Initialize the database: USD sudo postgresql-setup initdb This command creates the /var/lib/pgsql/data directory, in which the data is stored. Start the database: USD sudo systemctl start postgresql.service When the database is running, create your rails user: USD sudo -u postgres createuser -s rails Note that the user created has no password. 3.5.3. Writing your application If you are starting your Rails application from scratch, you must install the Rails gem first. Then you can proceed with writing your application. Procedure Install the Rails gem: USD gem install rails Example output Successfully installed rails-4.3.0 1 gem installed After you install the Rails gem, create a new application with PostgreSQL as your database: USD rails new rails-app --database=postgresql Change into your new application directory: USD cd rails-app If you already have an application, make sure the pg (postgresql) gem is present in your Gemfile . If not, edit your Gemfile by adding the gem: gem 'pg' Generate a new Gemfile.lock with all your dependencies: USD bundle install In addition to using the postgresql database with the pg gem, you also must ensure that the config/database.yml is using the postgresql adapter. 
Make sure you update the default section in the config/database.yml file, so it looks like this: default: &default adapter: postgresql encoding: unicode pool: 5 host: localhost username: rails password: <password> Create your application's development and test databases: USD rake db:create This creates the development and test databases in your PostgreSQL server. 3.5.3.1. Creating a welcome page Since Rails 4 no longer serves a static public/index.html page in production, you must create a new root page. To have a custom welcome page, you must complete the following steps: Create a controller with an index action. Create a view page for the welcome controller index action. Create a route that serves the application's root page with the created controller and view. Rails offers a generator that completes all necessary steps for you. Procedure Run the Rails generator: USD rails generate controller welcome index All the necessary files are created. Edit line 2 in the config/routes.rb file as follows: root 'welcome#index' Run the rails server to verify the page is available: USD rails server You should see your page by visiting http://localhost:3000 in your browser. If you do not see the page, check the logs that are output to your server to debug. 3.5.3.2. Configuring application for OpenShift Container Platform To have your application communicate with the PostgreSQL database service running in OpenShift Container Platform, you must edit the default section in your config/database.yml to use environment variables, which you must define later, upon the database service creation. Procedure Edit the default section in your config/database.yml with pre-defined variables as follows: Sample config/database YAML file <% user = ENV.key?("POSTGRESQL_ADMIN_PASSWORD") ? "root" : ENV["POSTGRESQL_USER"] %> <% password = ENV.key?("POSTGRESQL_ADMIN_PASSWORD") ? ENV["POSTGRESQL_ADMIN_PASSWORD"] : ENV["POSTGRESQL_PASSWORD"] %> <% db_service = ENV.fetch("DATABASE_SERVICE_NAME","").upcase %> default: &default adapter: postgresql encoding: unicode # For details on connection pooling, see rails configuration guide # http://guides.rubyonrails.org/configuring.html#database-pooling pool: <%= ENV["POSTGRESQL_MAX_CONNECTIONS"] || 5 %> username: <%= user %> password: <%= password %> host: <%= ENV["#{db_service}_SERVICE_HOST"] %> port: <%= ENV["#{db_service}_SERVICE_PORT"] %> database: <%= ENV["POSTGRESQL_DATABASE"] %> 3.5.3.3. Storing your application in Git Building an application in OpenShift Container Platform usually requires that the source code be stored in a git repository, so you must install git if you do not already have it. Prerequisites Install git. Procedure Make sure you are in your Rails application directory by running the ls -1 command. The output of the command should look like: USD ls -1 Example output app bin config config.ru db Gemfile Gemfile.lock lib log public Rakefile README.rdoc test tmp vendor Run the following commands in your Rails app directory to initialize and commit your code to git: USD git init USD git add . USD git commit -m "initial commit" After your application is committed you must push it to a remote repository. For this, you need a GitHub account in which you create a new repository. Set the remote that points to your git repository: USD git remote add origin git@github.com:<namespace/repository-name>.git Push your application to your remote git repository: USD git push 3.5.4. Deploying your application to OpenShift Container Platform You can deploy your application to OpenShift Container Platform.
Deploying your application in OpenShift Container Platform involves three steps: Creating a database service from OpenShift Container Platform's PostgreSQL image. Creating a frontend service from OpenShift Container Platform's Ruby 2.0 builder image and your Ruby on Rails source code, which are wired with the database service. Creating a route for your application. Procedure To deploy your Ruby on Rails application, create a new project for the application: USD oc new-project rails-app --description="My Rails application" --display-name="Rails Application" After creating the rails-app project, you are automatically switched to the new project namespace. 3.5.4.1. Creating the database service Your Rails application expects a running database service. For this service, use the PostgreSQL database image. To create the database service, use the oc new-app command. You must pass this command several environment variables, which are used inside the database container. These environment variables are required to set the username, password, and name of the database. You can change the values of these environment variables to anything you would like. The variables are as follows: POSTGRESQL_DATABASE POSTGRESQL_USER POSTGRESQL_PASSWORD Setting these variables ensures: A database exists with the specified name. A user exists with the specified name. The user can access the specified database with the specified password. Procedure Create the database service: USD oc new-app postgresql -e POSTGRESQL_DATABASE=db_name -e POSTGRESQL_USER=username -e POSTGRESQL_PASSWORD=password To also set the password for the database administrator, append the following to the command: -e POSTGRESQL_ADMIN_PASSWORD=admin_pw Watch the progress: USD oc get pods --watch 3.5.4.2. Creating the frontend service To bring your application to OpenShift Container Platform, you must specify a repository in which your application lives. Procedure Create the frontend service and specify the database-related environment variables that were set up when creating the database service: USD oc new-app path/to/source/code --name=rails-app -e POSTGRESQL_USER=username -e POSTGRESQL_PASSWORD=password -e POSTGRESQL_DATABASE=db_name -e DATABASE_SERVICE_NAME=postgresql With this command, OpenShift Container Platform fetches the source code, sets up the builder, builds your application image, and deploys the newly created image together with the specified environment variables. The application is named rails-app . Verify that the environment variables have been added by viewing the JSON document of the rails-app deployment config: USD oc get dc rails-app -o json You should see the following section: Example output env": [ { "name": "POSTGRESQL_USER", "value": "username" }, { "name": "POSTGRESQL_PASSWORD", "value": "password" }, { "name": "POSTGRESQL_DATABASE", "value": "db_name" }, { "name": "DATABASE_SERVICE_NAME", "value": "postgresql" } ], Check the build process: USD oc logs -f build/rails-app-1 After the build is complete, look at the running pods in OpenShift Container Platform: USD oc get pods You should see a line starting with rails-app-<number>-<hash> , and that is your application running in OpenShift Container Platform.
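At this point you can double-check the deployment without reading the full JSON document. The following sketch assumes the deployment config and its pods carry the default rails-app name and the standard deploymentconfig label:
$ oc set env dc/rails-app --list               # lists the environment variables defined on the deployment config
$ oc get pods -l deploymentconfig=rails-app    # shows only the pods that belong to the rails-app deployment config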
Before your application is functional, you must initialize the database by running the database migration script. There are two ways you can do this: Manually from the running frontend container: Exec into the frontend container with the rsh command: USD oc rsh <frontend_pod_id> Run the migration from inside the container: USD RAILS_ENV=production bundle exec rake db:migrate If you are running your Rails application in a development or test environment, you do not have to specify the RAILS_ENV environment variable. By adding pre-deployment lifecycle hooks in your template. 3.5.4.3. Creating a route for your application You can expose a service to create a route for your application. Procedure To expose a service by giving it an externally reachable hostname, such as www.example.com, use an OpenShift Container Platform route. In this case, you must expose the frontend service by entering: USD oc expose service rails-app --hostname=www.example.com Warning Ensure that the hostname you specify resolves to the IP address of the router. | [
"oc create -f <filename>",
"oc create -f <filename> -n <project>",
"kind: \"ImageStream\" apiVersion: \"image.openshift.io/v1\" metadata: name: \"ruby\" creationTimestamp: null spec: tags: - name: \"2.6\" annotations: description: \"Build and run Ruby 2.6 applications\" iconClass: \"icon-ruby\" tags: \"builder,ruby\" 1 supports: \"ruby:2.6,ruby\" version: \"2.6\"",
"oc process -f <filename> -l name=otherLabel",
"oc process --parameters -f <filename>",
"oc process --parameters -n <project> <template_name>",
"oc process --parameters -n openshift rails-postgresql-example",
"NAME DESCRIPTION GENERATOR VALUE SOURCE_REPOSITORY_URL The URL of the repository with your application source code https://github.com/sclorg/rails-ex.git SOURCE_REPOSITORY_REF Set this to a branch name, tag or other ref of your repository if you are not using the default branch CONTEXT_DIR Set this to the relative path to your project if it is not in the root of your repository APPLICATION_DOMAIN The exposed hostname that will route to the Rails service rails-postgresql-example.openshiftapps.com GITHUB_WEBHOOK_SECRET A secret string used to configure the GitHub webhook expression [a-zA-Z0-9]{40} SECRET_KEY_BASE Your secret key for verifying the integrity of signed cookies expression [a-z0-9]{127} APPLICATION_USER The application user that is used within the sample application to authorize access on pages openshift APPLICATION_PASSWORD The application password that is used within the sample application to authorize access on pages secret DATABASE_SERVICE_NAME Database service name postgresql POSTGRESQL_USER database username expression user[A-Z0-9]{3} POSTGRESQL_PASSWORD database password expression [a-zA-Z0-9]{8} POSTGRESQL_DATABASE database name root POSTGRESQL_MAX_CONNECTIONS database max connections 10 POSTGRESQL_SHARED_BUFFERS database shared buffers 12MB",
"oc process -f <filename>",
"oc process <template_name>",
"oc process -f <filename> | oc create -f -",
"oc process <template> | oc create -f -",
"oc process -f my-rails-postgresql -p POSTGRESQL_USER=bob -p POSTGRESQL_DATABASE=mydatabase",
"oc process -f my-rails-postgresql -p POSTGRESQL_USER=bob -p POSTGRESQL_DATABASE=mydatabase | oc create -f -",
"cat postgres.env POSTGRESQL_USER=bob POSTGRESQL_DATABASE=mydatabase",
"oc process -f my-rails-postgresql --param-file=postgres.env",
"sed s/bob/alice/ postgres.env | oc process -f my-rails-postgresql --param-file=-",
"oc edit template <template>",
"oc get templates -n openshift",
"apiVersion: template.openshift.io/v1 kind: Template metadata: name: redis-template annotations: description: \"Description\" iconClass: \"icon-redis\" tags: \"database,nosql\" objects: - apiVersion: v1 kind: Pod metadata: name: redis-master spec: containers: - env: - name: REDIS_PASSWORD value: USD{REDIS_PASSWORD} image: dockerfile/redis name: master ports: - containerPort: 6379 protocol: TCP parameters: - description: Password used for Redis authentication from: '[A-Z0-9]{8}' generate: expression name: REDIS_PASSWORD labels: redis: master",
"kind: Template apiVersion: template.openshift.io/v1 metadata: name: cakephp-mysql-example 1 annotations: openshift.io/display-name: \"CakePHP MySQL Example (Ephemeral)\" 2 description: >- An example CakePHP application with a MySQL database. For more information about using this template, including OpenShift considerations, see https://github.com/sclorg/cakephp-ex/blob/master/README.md. WARNING: Any data stored will be lost upon pod destruction. Only use this template for testing.\" 3 openshift.io/long-description: >- This template defines resources needed to develop a CakePHP application, including a build configuration, application DeploymentConfig, and database DeploymentConfig. The database is stored in non-persistent storage, so this configuration should be used for experimental purposes only. 4 tags: \"quickstart,php,cakephp\" 5 iconClass: icon-php 6 openshift.io/provider-display-name: \"Red Hat, Inc.\" 7 openshift.io/documentation-url: \"https://github.com/sclorg/cakephp-ex\" 8 openshift.io/support-url: \"https://access.redhat.com\" 9 message: \"Your admin credentials are USD{ADMIN_USERNAME}:USD{ADMIN_PASSWORD}\" 10",
"kind: \"Template\" apiVersion: \"v1\" labels: template: \"cakephp-mysql-example\" 1 app: \"USD{NAME}\" 2",
"parameters: - name: USERNAME description: \"The user name for Joe\" value: joe",
"parameters: - name: PASSWORD description: \"The random user password\" generate: expression from: \"[a-zA-Z0-9]{12}\"",
"parameters: - name: singlequoted_example generate: expression from: '[\\A]{10}' - name: doublequoted_example generate: expression from: \"[\\\\A]{10}\"",
"{ \"parameters\": [ { \"name\": \"json_example\", \"generate\": \"expression\", \"from\": \"[\\\\A]{10}\" } ] }",
"kind: Template apiVersion: template.openshift.io/v1 metadata: name: my-template objects: - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: cakephp-mysql-example annotations: description: Defines how to build the application spec: source: type: Git git: uri: \"USD{SOURCE_REPOSITORY_URL}\" 1 ref: \"USD{SOURCE_REPOSITORY_REF}\" contextDir: \"USD{CONTEXT_DIR}\" - kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: frontend spec: replicas: \"USD{{REPLICA_COUNT}}\" 2 parameters: - name: SOURCE_REPOSITORY_URL 3 displayName: Source Repository URL 4 description: The URL of the repository with your application source code 5 value: https://github.com/sclorg/cakephp-ex.git 6 required: true 7 - name: GITHUB_WEBHOOK_SECRET description: A secret string used to configure the GitHub webhook generate: expression 8 from: \"[a-zA-Z0-9]{40}\" 9 - name: REPLICA_COUNT description: Number of replicas to run value: \"2\" required: true message: \"... The GitHub webhook secret is USD{GITHUB_WEBHOOK_SECRET} ...\" 10",
"kind: \"Template\" apiVersion: \"v1\" metadata: name: my-template objects: - kind: \"Service\" 1 apiVersion: \"v1\" metadata: name: \"cakephp-mysql-example\" annotations: description: \"Exposes and load balances the application pods\" spec: ports: - name: \"web\" port: 8080 targetPort: 8080 selector: name: \"cakephp-mysql-example\"",
"kind: Template apiVersion: template.openshift.io/v1 metadata: name: my-template objects: - kind: ConfigMap apiVersion: v1 metadata: name: my-template-config annotations: template.openshift.io/expose-username: \"{.data['my\\\\.username']}\" data: my.username: foo - kind: Secret apiVersion: v1 metadata: name: my-template-config-secret annotations: template.openshift.io/base64-expose-password: \"{.data['password']}\" stringData: password: <password> - kind: Service apiVersion: v1 metadata: name: my-template-service annotations: template.openshift.io/expose-service_ip_port: \"{.spec.clusterIP}:{.spec.ports[?(.name==\\\"web\\\")].port}\" spec: ports: - name: \"web\" port: 8080 - kind: Route apiVersion: route.openshift.io/v1 metadata: name: my-template-route annotations: template.openshift.io/expose-uri: \"http://{.spec.host}{.spec.path}\" spec: path: mypath",
"{ \"credentials\": { \"username\": \"foo\", \"password\": \"YmFy\", \"service_ip_port\": \"172.30.12.34:8080\", \"uri\": \"http://route-test.router.default.svc.cluster.local/mypath\" } }",
"\"template.alpha.openshift.io/wait-for-ready\": \"true\"",
"kind: Template apiVersion: template.openshift.io/v1 metadata: name: my-template objects: - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: annotations: # wait-for-ready used on BuildConfig ensures that template instantiation # will fail immediately if build fails template.alpha.openshift.io/wait-for-ready: \"true\" spec: - kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: annotations: template.alpha.openshift.io/wait-for-ready: \"true\" spec: - kind: Service apiVersion: v1 metadata: name: spec:",
"oc get -o yaml all > <yaml_filename>",
"oc get csv",
"oc policy add-role-to-user edit <user> -n <target_project>",
"oc new-app /<path to source code>",
"oc new-app https://github.com/sclorg/cakephp-ex",
"oc new-app https://github.com/youruser/yourprivaterepo --source-secret=yoursecret",
"oc new-app https://github.com/sclorg/s2i-ruby-container.git --context-dir=2.0/test/puma-test-app",
"oc new-app https://github.com/openshift/ruby-hello-world.git#beta4",
"oc new-app /home/user/code/myapp --strategy=docker",
"oc new-app myproject/my-ruby~https://github.com/openshift/ruby-hello-world.git",
"oc new-app openshift/ruby-20-centos7:latest~/home/user/code/my-ruby-app",
"oc new-app mysql",
"oc new-app myregistry:5000/example/myimage",
"oc new-app my-stream:v1",
"oc create -f examples/sample-app/application-template-stibuild.json",
"oc new-app ruby-helloworld-sample",
"oc new-app -f examples/sample-app/application-template-stibuild.json",
"oc new-app ruby-helloworld-sample -p ADMIN_USERNAME=admin -p ADMIN_PASSWORD=mypassword",
"ADMIN_USERNAME=admin ADMIN_PASSWORD=mypassword",
"oc new-app ruby-helloworld-sample --param-file=helloworld.params",
"oc new-app openshift/postgresql-92-centos7 -e POSTGRESQL_USER=user -e POSTGRESQL_DATABASE=db -e POSTGRESQL_PASSWORD=password",
"POSTGRESQL_USER=user POSTGRESQL_DATABASE=db POSTGRESQL_PASSWORD=password",
"oc new-app openshift/postgresql-92-centos7 --env-file=postgresql.env",
"cat postgresql.env | oc new-app openshift/postgresql-92-centos7 --env-file=-",
"oc new-app openshift/ruby-23-centos7 --build-env HTTP_PROXY=http://myproxy.net:1337/ --build-env GEM_HOME=~/.gem",
"HTTP_PROXY=http://myproxy.net:1337/ GEM_HOME=~/.gem",
"oc new-app openshift/ruby-23-centos7 --build-env-file=ruby.env",
"cat ruby.env | oc new-app openshift/ruby-23-centos7 --build-env-file=-",
"oc new-app https://github.com/openshift/ruby-hello-world -l name=hello-world",
"oc new-app https://github.com/openshift/ruby-hello-world -o yaml > myapp.yaml",
"vi myapp.yaml",
"oc create -f myapp.yaml",
"oc new-app https://github.com/openshift/ruby-hello-world --name=myapp",
"oc new-app https://github.com/openshift/ruby-hello-world -n myproject",
"oc new-app https://github.com/openshift/ruby-hello-world mysql",
"oc new-app ruby+mysql",
"oc new-app ruby~https://github.com/openshift/ruby-hello-world mysql --group=ruby+mysql",
"oc new-app --search php",
"oc new-app --image=registry.redhat.io/ubi8/httpd-24:latest --import-mode=Legacy --name=test",
"oc new-app --image=registry.redhat.io/ubi8/httpd-24:latest --import-mode=PreserveOriginal --name=test",
"sudo yum install -y postgresql postgresql-server postgresql-devel",
"sudo postgresql-setup initdb",
"sudo systemctl start postgresql.service",
"sudo -u postgres createuser -s rails",
"gem install rails",
"Successfully installed rails-4.3.0 1 gem installed",
"rails new rails-app --database=postgresql",
"cd rails-app",
"gem 'pg'",
"bundle install",
"default: &default adapter: postgresql encoding: unicode pool: 5 host: localhost username: rails password: <password>",
"rake db:create",
"rails generate controller welcome index",
"root 'welcome#index'",
"rails server",
"<% user = ENV.key?(\"POSTGRESQL_ADMIN_PASSWORD\") ? \"root\" : ENV[\"POSTGRESQL_USER\"] %> <% password = ENV.key?(\"POSTGRESQL_ADMIN_PASSWORD\") ? ENV[\"POSTGRESQL_ADMIN_PASSWORD\"] : ENV[\"POSTGRESQL_PASSWORD\"] %> <% db_service = ENV.fetch(\"DATABASE_SERVICE_NAME\",\"\").upcase %> default: &default adapter: postgresql encoding: unicode # For details on connection pooling, see rails configuration guide # http://guides.rubyonrails.org/configuring.html#database-pooling pool: <%= ENV[\"POSTGRESQL_MAX_CONNECTIONS\"] || 5 %> username: <%= user %> password: <%= password %> host: <%= ENV[\"#{db_service}_SERVICE_HOST\"] %> port: <%= ENV[\"#{db_service}_SERVICE_PORT\"] %> database: <%= ENV[\"POSTGRESQL_DATABASE\"] %>",
"ls -1",
"app bin config config.ru db Gemfile Gemfile.lock lib log public Rakefile README.rdoc test tmp vendor",
"git init",
"git add .",
"git commit -m \"initial commit\"",
"git remote add origin [email protected]:<namespace/repository-name>.git",
"git push",
"oc new-project rails-app --description=\"My Rails application\" --display-name=\"Rails Application\"",
"oc new-app postgresql -e POSTGRESQL_DATABASE=db_name -e POSTGRESQL_USER=username -e POSTGRESQL_PASSWORD=password",
"-e POSTGRESQL_ADMIN_PASSWORD=admin_pw",
"oc get pods --watch",
"oc new-app path/to/source/code --name=rails-app -e POSTGRESQL_USER=username -e POSTGRESQL_PASSWORD=password -e POSTGRESQL_DATABASE=db_name -e DATABASE_SERVICE_NAME=postgresql",
"oc get dc rails-app -o json",
"env\": [ { \"name\": \"POSTGRESQL_USER\", \"value\": \"username\" }, { \"name\": \"POSTGRESQL_PASSWORD\", \"value\": \"password\" }, { \"name\": \"POSTGRESQL_DATABASE\", \"value\": \"db_name\" }, { \"name\": \"DATABASE_SERVICE_NAME\", \"value\": \"postgresql\" } ],",
"oc logs -f build/rails-app-1",
"oc get pods",
"oc rsh <frontend_pod_id>",
"RAILS_ENV=production bundle exec rake db:migrate",
"oc expose service rails-app --hostname=www.example.com"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/building_applications/creating-applications |
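To confirm that the route created in the last step is serving traffic, query it and send a test request. This is a minimal sketch; it assumes the route is named rails-app and that DNS for www.example.com already points at the router:
$ oc get route rails-app             # shows the exposed hostname and the service it targets
$ curl -I http://www.example.com     # an HTTP 200 response means the Rails application is answering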
Chapter 4. Advisories related to this release | Chapter 4. Advisories related to this release The following advisories have been issued to document bug fixes and CVE fixes included in this release. RHSA-2020:4316 RHSA-2020:4305 RHSA-2020:4306 RHSA-2020:4307 Revised on 2024-05-09 16:47:57 UTC | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.9/rn-openjdk-advisory |
Part III. Installing and Managing Software | Part III. Installing and Managing Software All software on a Red Hat Enterprise Linux system is divided into RPM packages, which can be installed, upgraded, or removed. This part describes how to manage packages on Red Hat Enterprise Linux using Yum . | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system_administrators_guide/part-installing_and_managing_software |
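The package operations mentioned above map onto a small set of yum commands, run as root. The httpd package below is only an example:
yum install httpd    # install a package and its dependencies
yum update httpd     # upgrade one package; 'yum update' with no arguments upgrades the whole system
yum remove httpd     # remove an installed package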
Server Administration Guide | Server Administration Guide Red Hat Single Sign-On 7.6 For Use with Red Hat Single Sign-On 7.6 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.6/html/server_administration_guide/index |
Chapter 2. Projects | Chapter 2. Projects 2.1. Working with projects A project allows a community of users to organize and manage their content in isolation from other communities. Note Projects starting with openshift- and kube- are default projects . These projects host cluster components that run as pods and other infrastructure components. As such, OpenShift Container Platform does not allow you to create projects starting with openshift- or kube- using the oc new-project command. Cluster administrators can create these projects using the oc adm new-project command. Important Do not run workloads in or share access to default projects. Default projects are reserved for running core cluster components. The following default projects are considered highly privileged: default , kube-public , kube-system , openshift , openshift-infra , openshift-node , and other system-created projects that have the openshift.io/run-level label set to 0 or 1 . Functionality that relies on admission plugins, such as pod security admission, security context constraints, cluster resource quotas, and image reference resolution, does not work in highly privileged projects. 2.1.1. Creating a project You can use the OpenShift Container Platform web console or the OpenShift CLI ( oc ) to create a project in your cluster. 2.1.1.1. Creating a project by using the web console You can use the OpenShift Container Platform web console to create a project in your cluster. Note Projects starting with openshift- and kube- are considered critical by OpenShift Container Platform. As such, OpenShift Container Platform does not allow you to create projects starting with openshift- using the web console. Prerequisites Ensure that you have the appropriate roles and permissions to create projects, applications, and other workloads in OpenShift Container Platform. Procedure If you are using the Administrator perspective: Navigate to Home Projects . Click Create Project : In the Create Project dialog box, enter a unique name, such as myproject , in the Name field. Optional: Add the Display name and Description details for the project. Click Create . The dashboard for your project is displayed. Optional: Select the Details tab to view the project details. Optional: If you have adequate permissions for a project, you can use the Project Access tab to provide or revoke admin, edit, and view privileges for the project. If you are using the Developer perspective: Click the Project menu and select Create Project : Figure 2.1. Create project In the Create Project dialog box, enter a unique name, such as myproject , in the Name field. Optional: Add the Display name and Description details for the project. Click Create . Optional: Use the left navigation panel to navigate to the Project view and see the dashboard for your project. Optional: In the project dashboard, select the Details tab to view the project details. Optional: If you have adequate permissions for a project, you can use the Project Access tab of the project dashboard to provide or revoke admin, edit, and view privileges for the project. Additional resources Customizing the available cluster roles using the web console 2.1.1.2. Creating a project by using the CLI If allowed by your cluster administrator, you can create a new project. Note Projects starting with openshift- and kube- are considered critical by OpenShift Container Platform. 
As such, OpenShift Container Platform does not allow you to create Projects starting with openshift- or kube- using the oc new-project command. Cluster administrators can create these projects using the oc adm new-project command. Procedure Run: USD oc new-project <project_name> \ --description="<description>" --display-name="<display_name>" For example: USD oc new-project hello-openshift \ --description="This is an example project" \ --display-name="Hello OpenShift" Note The number of projects you are allowed to create might be limited by the system administrator. After your limit is reached, you might have to delete an existing project in order to create a new one. 2.1.2. Viewing a project You can use the OpenShift Container Platform web console or the OpenShift CLI ( oc ) to view a project in your cluster. 2.1.2.1. Viewing a project by using the web console You can view the projects that you have access to by using the OpenShift Container Platform web console. Procedure If you are using the Administrator perspective: Navigate to Home Projects in the navigation menu. Select a project to view. The Overview tab includes a dashboard for your project. Select the Details tab to view the project details. Select the YAML tab to view and update the YAML configuration for the project resource. Select the Workloads tab to see workloads in the project. Select the RoleBindings tab to view and create role bindings for your project. If you are using the Developer perspective: Navigate to the Project page in the navigation menu. Select All Projects from the Project drop-down menu at the top of the screen to list all of the projects in your cluster. Select a project to view. The Overview tab includes a dashboard for your project. Select the Details tab to view the project details. If you have adequate permissions for a project, select the Project access tab view and update the privileges for the project. 2.1.2.2. Viewing a project using the CLI When viewing projects, you are restricted to seeing only the projects you have access to view based on the authorization policy. Procedure To view a list of projects, run: USD oc get projects You can change from the current project to a different project for CLI operations. The specified project is then used in all subsequent operations that manipulate project-scoped content: USD oc project <project_name> 2.1.3. Providing access permissions to your project using the Developer perspective You can use the Project view in the Developer perspective to grant or revoke access permissions to your project. Prerequisites You have created a project. Procedure To add users to your project and provide Admin , Edit , or View access to them: In the Developer perspective, navigate to the Project page. Select your project from the Project menu. Select the Project Access tab. Click Add access to add a new row of permissions to the default ones. Figure 2.2. Project permissions Enter the user name, click the Select a role drop-down list, and select an appropriate role. Click Save to add the new permissions. You can also use: The Select a role drop-down list, to modify the access permissions of an existing user. The Remove Access icon, to completely remove the access permissions of an existing user to the project. Note Advanced role-based access control is managed in the Roles and Roles Binding views in the Administrator perspective. 2.1.4. 
Customizing the available cluster roles using the web console In the Developer perspective of the web console, the Project Project access page enables a project administrator to grant roles to users in a project. By default, the available cluster roles that can be granted to users in a project are admin, edit, and view. As a cluster administrator, you can define which cluster roles are available in the Project access page for all projects cluster-wide. You can specify the available roles by customizing the spec.customization.projectAccess.availableClusterRoles object in the Console configuration resource. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure In the Administrator perspective, navigate to Administration Cluster settings . Click the Configuration tab. From the Configuration resource list, select Console operator.openshift.io . Navigate to the YAML tab to view and edit the YAML code. In the YAML code under spec , customize the list of available cluster roles for project access. The following example specifies the default admin , edit , and view roles: apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster # ... spec: customization: projectAccess: availableClusterRoles: - admin - edit - view Click Save to save the changes to the Console configuration resource. Verification In the Developer perspective, navigate to the Project page. Select a project from the Project menu. Select the Project access tab. Click the menu in the Role column and verify that the available roles match the configuration that you applied to the Console resource configuration. 2.1.5. Adding to a project You can add items to your project by using the +Add page in the Developer perspective. Prerequisites You have created a project. Procedure In the Developer perspective, navigate to the +Add page. Select your project from the Project menu. Click on an item on the +Add page and then follow the workflow. Note You can also use the search feature in the Add* page to find additional items to add to your project. Click * under Add at the top of the page and type the name of a component in the search field. 2.1.6. Checking the project status You can use the OpenShift Container Platform web console or the OpenShift CLI ( oc ) to view the status of your project. 2.1.6.1. Checking project status by using the web console You can review the status of your project by using the web console. Prerequisites You have created a project. Procedure If you are using the Administrator perspective: Navigate to Home Projects . Select a project from the list. Review the project status in the Overview page. If you are using the Developer perspective: Navigate to the Project page. Select a project from the Project menu. Review the project status in the Overview page. 2.1.6.2. Checking project status by using the CLI You can review the status of your project by using the OpenShift CLI ( oc ). Prerequisites You have installed the OpenShift CLI ( oc ). You have created a project. Procedure Switch to your project: USD oc project <project_name> 1 1 Replace <project_name> with the name of your project. Obtain a high-level overview of the project: USD oc status 2.1.7. Deleting a project You can use the OpenShift Container Platform web console or the OpenShift CLI ( oc ) to delete a project. When you delete a project, the server updates the project status to Terminating from Active . 
Then, the server clears all content from a project that is in the Terminating state before finally removing the project. While a project is in Terminating status, you cannot add new content to the project. Projects can be deleted from the CLI or the web console. 2.1.7.1. Deleting a project by using the web console You can delete a project by using the web console. Prerequisites You have created a project. You have the required permissions to delete the project. Procedure If you are using the Administrator perspective: Navigate to Home Projects . Select a project from the list. Click the Actions drop-down menu for the project and select Delete Project . Note The Delete Project option is not available if you do not have the required permissions to delete the project. In the Delete Project? pane, confirm the deletion by entering the name of your project. Click Delete . If you are using the Developer perspective: Navigate to the Project page. Select the project that you want to delete from the Project menu. Click the Actions drop-down menu for the project and select Delete Project . Note If you do not have the required permissions to delete the project, the Delete Project option is not available. In the Delete Project? pane, confirm the deletion by entering the name of your project. Click Delete . 2.1.7.2. Deleting a project by using the CLI You can delete a project by using the OpenShift CLI ( oc ). Prerequisites You have installed the OpenShift CLI ( oc ). You have created a project. You have the required permissions to delete the project. Procedure Delete your project: USD oc delete project <project_name> 1 1 Replace <project_name> with the name of the project that you want to delete. 2.2. Creating a project as another user Impersonation allows you to create a project as a different user. 2.2.1. API impersonation You can configure a request to the OpenShift Container Platform API to act as though it originated from another user. For more information, see User impersonation in the Kubernetes documentation. 2.2.2. Impersonating a user when you create a project You can impersonate a different user when you create a project request. Because system:authenticated:oauth is the only bootstrap group that can create project requests, you must impersonate that group. Procedure To create a project request on behalf of a different user: USD oc new-project <project> --as=<user> \ --as-group=system:authenticated --as-group=system:authenticated:oauth 2.3. Configuring project creation In OpenShift Container Platform, projects are used to group and isolate related objects. When a request is made to create a new project using the web console or oc new-project command, an endpoint in OpenShift Container Platform is used to provision the project according to a template, which can be customized. As a cluster administrator, you can allow and configure how developers and service accounts can create, or self-provision , their own projects. 2.3.1. About project creation The OpenShift Container Platform API server automatically provisions new projects based on the project template that is identified by the projectRequestTemplate parameter in the cluster's project configuration resource. If the parameter is not defined, the API server creates a default template that creates a project with the requested name, and assigns the requesting user to the admin role for that project. When a project request is submitted, the API substitutes the following parameters into the template: Table 2.1. 
Default project template parameters Parameter Description PROJECT_NAME The name of the project. Required. PROJECT_DISPLAYNAME The display name of the project. May be empty. PROJECT_DESCRIPTION The description of the project. May be empty. PROJECT_ADMIN_USER The user name of the administrating user. PROJECT_REQUESTING_USER The user name of the requesting user. Access to the API is granted to developers with the self-provisioner role and the self-provisioners cluster role binding. This role is available to all authenticated developers by default. 2.3.2. Modifying the template for new projects As a cluster administrator, you can modify the default project template so that new projects are created using your custom requirements. To create your own custom project template: Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. Procedure Log in as a user with cluster-admin privileges. Generate the default project template: USD oc adm create-bootstrap-project-template -o yaml > template.yaml Use a text editor to modify the generated template.yaml file by adding objects or modifying existing objects. The project template must be created in the openshift-config namespace. Load your modified template: USD oc create -f template.yaml -n openshift-config Edit the project configuration resource using the web console or CLI. Using the web console: Navigate to the Administration Cluster Settings page. Click Configuration to view all configuration resources. Find the entry for Project and click Edit YAML . Using the CLI: Edit the project.config.openshift.io/cluster resource: USD oc edit project.config.openshift.io/cluster Update the spec section to include the projectRequestTemplate and name parameters, and set the name of your uploaded project template. The default name is project-request . Project configuration resource with custom project template apiVersion: config.openshift.io/v1 kind: Project metadata: # ... spec: projectRequestTemplate: name: <template_name> # ... After you save your changes, create a new project to verify that your changes were successfully applied. 2.3.3. Disabling project self-provisioning You can prevent an authenticated user group from self-provisioning new projects. Procedure Log in as a user with cluster-admin privileges. View the self-provisioners cluster role binding usage by running the following command: USD oc describe clusterrolebinding.rbac self-provisioners Example output Name: self-provisioners Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate=true Role: Kind: ClusterRole Name: self-provisioner Subjects: Kind Name Namespace ---- ---- --------- Group system:authenticated:oauth Review the subjects in the self-provisioners section. Remove the self-provisioner cluster role from the group system:authenticated:oauth . If the self-provisioners cluster role binding binds only the self-provisioner role to the system:authenticated:oauth group, run the following command: USD oc patch clusterrolebinding.rbac self-provisioners -p '{"subjects": null}' If the self-provisioners cluster role binding binds the self-provisioner role to more users, groups, or service accounts than the system:authenticated:oauth group, run the following command: USD oc adm policy \ remove-cluster-role-from-group self-provisioner \ system:authenticated:oauth Edit the self-provisioners cluster role binding to prevent automatic updates to the role. Automatic updates reset the cluster roles to the default state. 
To update the role binding using the CLI: Run the following command: USD oc edit clusterrolebinding.rbac self-provisioners In the displayed role binding, set the rbac.authorization.kubernetes.io/autoupdate parameter value to false , as shown in the following example: apiVersion: authorization.openshift.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: "false" # ... To update the role binding by using a single command: USD oc patch clusterrolebinding.rbac self-provisioners -p '{ "metadata": { "annotations": { "rbac.authorization.kubernetes.io/autoupdate": "false" } } }' Log in as an authenticated user and verify that it can no longer self-provision a project: USD oc new-project test Example output Error from server (Forbidden): You may not request a new project via this API. Consider customizing this project request message to provide more helpful instructions specific to your organization. 2.3.4. Customizing the project request message When a developer or a service account that is unable to self-provision projects makes a project creation request using the web console or CLI, the following error message is returned by default: You may not request a new project via this API. Cluster administrators can customize this message. Consider updating it to provide further instructions on how to request a new project specific to your organization. For example: To request a project, contact your system administrator at [email protected] . To request a new project, fill out the project request form located at https://internal.example.com/openshift-project-request . To customize the project request message: Procedure Edit the project configuration resource using the web console or CLI. Using the web console: Navigate to the Administration Cluster Settings page. Click Configuration to view all configuration resources. Find the entry for Project and click Edit YAML . Using the CLI: Log in as a user with cluster-admin privileges. Edit the project.config.openshift.io/cluster resource: USD oc edit project.config.openshift.io/cluster Update the spec section to include the projectRequestMessage parameter and set the value to your custom message: Project configuration resource with custom project request message apiVersion: config.openshift.io/v1 kind: Project metadata: # ... spec: projectRequestMessage: <message_string> # ... For example: apiVersion: config.openshift.io/v1 kind: Project metadata: # ... spec: projectRequestMessage: To request a project, contact your system administrator at [email protected]. # ... After you save your changes, attempt to create a new project as a developer or service account that is unable to self-provision projects to verify that your changes were successfully applied. | [
"oc new-project <project_name> --description=\"<description>\" --display-name=\"<display_name>\"",
"oc new-project hello-openshift --description=\"This is an example project\" --display-name=\"Hello OpenShift\"",
"oc get projects",
"oc project <project_name>",
"apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster spec: customization: projectAccess: availableClusterRoles: - admin - edit - view",
"oc project <project_name> 1",
"oc status",
"oc delete project <project_name> 1",
"oc new-project <project> --as=<user> --as-group=system:authenticated --as-group=system:authenticated:oauth",
"oc adm create-bootstrap-project-template -o yaml > template.yaml",
"oc create -f template.yaml -n openshift-config",
"oc edit project.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestTemplate: name: <template_name>",
"oc describe clusterrolebinding.rbac self-provisioners",
"Name: self-provisioners Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate=true Role: Kind: ClusterRole Name: self-provisioner Subjects: Kind Name Namespace ---- ---- --------- Group system:authenticated:oauth",
"oc patch clusterrolebinding.rbac self-provisioners -p '{\"subjects\": null}'",
"oc adm policy remove-cluster-role-from-group self-provisioner system:authenticated:oauth",
"oc edit clusterrolebinding.rbac self-provisioners",
"apiVersion: authorization.openshift.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: \"false\"",
"oc patch clusterrolebinding.rbac self-provisioners -p '{ \"metadata\": { \"annotations\": { \"rbac.authorization.kubernetes.io/autoupdate\": \"false\" } } }'",
"oc new-project test",
"Error from server (Forbidden): You may not request a new project via this API.",
"You may not request a new project via this API.",
"oc edit project.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestMessage: <message_string>",
"apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestMessage: To request a project, contact your system administrator at [email protected]."
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/building_applications/projects |
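A quick way to check which of the settings described in this chapter are active on a cluster is to query the project configuration resource and the self-provisioners binding directly. This sketch assumes cluster-admin access:
$ oc get project.config.openshift.io/cluster -o jsonpath='{.spec.projectRequestTemplate.name}'   # prints the custom project template name, if one is set
$ oc get project.config.openshift.io/cluster -o jsonpath='{.spec.projectRequestMessage}'         # prints the custom request message, if one is set
$ oc describe clusterrolebinding.rbac self-provisioners   # shows whether self-provisioning is still granted to system:authenticated:oauth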
Chapter 99. KafkaUser schema reference | Chapter 99. KafkaUser schema reference Property Description spec The specification of the user. KafkaUserSpec status The status of the Kafka User. KafkaUserStatus | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-KafkaUser-reference |
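The spec and status properties listed above can be inspected on a live resource. This is a minimal sketch; the my-user name and the kafka namespace are placeholders for your own values:
$ oc get kafkauser my-user -n kafka -o yaml   # prints the full spec and status of the KafkaUser
$ oc get kafkauser -n kafka                   # lists all KafkaUser resources in the namespace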
9.7. Setting the Hostname | 9.7. Setting the Hostname Setup prompts you to supply a host name for this computer, either as a fully-qualified domain name (FQDN) in the format hostname . domainname or as a short host name in the format hostname . Many networks have a Dynamic Host Configuration Protocol (DHCP) service that automatically supplies connected systems with a domain name. To allow the DHCP service to assign the domain name to this machine, specify the short host name only. Note You may give your system any name provided that the full hostname is unique. The hostname may include letters, numbers and hyphens. Figure 9.24. Setting the hostname If your Red Hat Enterprise Linux system is connected directly to the Internet, you must pay attention to additional considerations to avoid service interruptions or risk action by your upstream service provider. A full discussion of these issues is beyond the scope of this document. Note The installation program does not configure modems. Configure these devices after installation with the Network utility. The settings for your modem are specific to your particular Internet Service Provider (ISP). 9.7.1. Editing Network Connections Important When a Red Hat Enterprise Linux 6.9 installation boots for the first time, it activates any network interfaces that you configured during the installation process. However, the installer does not prompt you to configure network interfaces on some common installation paths, for example, when you install Red Hat Enterprise Linux from a DVD to a local hard drive. When you install Red Hat Enterprise Linux from a local installation source to a local storage device, be sure to configure at least one network interface manually if you require network access when the system boots for the first time. You will need to select the Connect automatically option manually when editing the connection. Note To change your network configuration after you have completed the installation, use the Network Administration Tool . Type the system-config-network command in a shell prompt to launch the Network Administration Tool . If you are not root, it prompts you for the root password to continue. The Network Administration Tool is now deprecated and will be replaced by NetworkManager during the lifetime of Red Hat Enterprise Linux 6. To configure a network connection manually, click the button Configure Network . The Network Connections dialog appears that allows you to configure wired, wireless, mobile broadband, InfiniBand, VPN, DSL, VLAN, and bonded connections for the system using the NetworkManager tool. A full description of all configurations possible with NetworkManager is beyond the scope of this guide. This section only details the most typical scenario of how to configure wired connections during installation. Configuration of other types of network is broadly similar, although the specific parameters that you must configure are necessarily different. Figure 9.25. Network Connections To add a new connection, click Add and select a connection type from the menu. To modify an existing connection, select it in the list and click Edit . In either case, a dialog box appears with a set of tabs that is appropriate to the particular connection type, as described below. To remove a connection, select it in the list and click Delete . When you have finished editing network settings, click Apply to save the new configuration. 
If you reconfigured a device that was already active during installation, you must restart the device to use the new configuration - refer to Section 9.7.1.6, "Restart a network device" . 9.7.1.1. Options common to all types of connection Certain configuration options are common to all connection types. Specify a name for the connection in the Connection name name field. Select Connect automatically to start the connection automatically when the system boots. When NetworkManager runs on an installed system, the Available to all users option controls whether a network configuration is available system-wide or not. During installation, ensure that Available to all users remains selected for any network interface that you configure. 9.7.1.2. The Wired tab Use the Wired tab to specify or change the media access control (MAC) address for the network adapter, and either set the maximum transmission unit (MTU, in bytes) that can pass through the interface. Figure 9.26. The Wired tab 9.7.1.3. The 802.1x Security tab Use the 802.1x Security tab to configure 802.1X port-based network access control (PNAC). Select Use 802.1X security for this connection to enable access control, then specify details of your network. The configuration options include: Authentication Choose one of the following methods of authentication: TLS for Transport Layer Security Tunneled TLS for Tunneled Transport Layer Security , otherwise known as TTLS, or EAP-TTLS Protected EAP (PEAP) for Protected Extensible Authentication Protocol Identity Provide the identity of this server. User certificate Browse to a personal X.509 certificate file encoded with Distinguished Encoding Rules (DER) or Privacy Enhanced Mail (PEM). CA certificate Browse to a X.509 certificate authority certificate file encoded with Distinguished Encoding Rules (DER) or Privacy Enhanced Mail (PEM). Private key Browse to a private key file encoded with Distinguished Encoding Rules (DER), Privacy Enhanced Mail (PEM), or the Personal Information Exchange Syntax Standard (PKCS#12). Private key password The password for the private key specified in the Private key field. Select Show password to make the password visible as you type it. Figure 9.27. The 802.1x Security tab 9.7.1.4. The IPv4 Settings tab Use the IPv4 Settings tab tab to configure the IPv4 parameters for the previously selected network connection. Use the Method drop-down menu to specify which settings the system should attempt to obtain from a Dynamic Host Configuration Protocol (DHCP) service running on the network. Choose from the following options: Automatic (DHCP) IPv4 parameters are configured by the DHCP service on the network. Automatic (DHCP) addresses only The IPv4 address, netmask, and gateway address are configured by the DHCP service on the network, but DNS servers and search domains must be configured manually. Manual IPv4 parameters are configured manually for a static configuration. Link-Local Only A link-local address in the 169.254/16 range is assigned to the interface. Shared to other computers The system is configured to provide network access to other computers. The interface is assigned an address in the 10.42.x.1/24 range, a DHCP server and DNS server are started, and the interface is connected to the default network connection on the system with network address translation (NAT). Disabled IPv4 is disabled for this connection. 
If you selected a method that requires you to supply manual parameters, enter details of the IP address for this interface, the netmask, and the gateway in the Addresses field. Use the Add and Delete buttons to add or remove addresses. Enter a comma-separated list of DNS servers in the DNS servers field, and a comma-separated list of domains in the Search domains field for any domains that you want to include in name server lookups. Optionally, enter a name for this network connection in the DHCP client ID field. This name must be unique on the subnet. When you assign a meaningful DHCP client ID to a connection, it is easy to identify this connection when troubleshooting network problems. Deselect the Require IPv4 addressing for this connection to complete check box to allow the system to make this connection on an IPv6-enabled network if IPv4 configuration fails but IPv6 configuration succeeds. Figure 9.28. The IPv4 Settings tab 9.7.1.4.1. Editing IPv4 routes Red Hat Enterprise Linux configures a number of routes automatically based on the IP addresses of a device. To edit additional routes, click the Routes button. The Editing IPv4 routes dialog appears. Figure 9.29. The Editing IPv4 Routes dialog Click Add to add the IP address, netmask, gateway address, and metric for a new static route. Select Ignore automatically obtained routes to make the interface use only the routes specified for it here. Select Use this connection only for resources on its network to restrict connections only to the local network. 9.7.1.5. The IPv6 Settings tab Use the IPv6 Settings tab tab to configure the IPv6 parameters for the previously selected network connection. Use the Method drop-down menu to specify which settings the system should attempt to obtain from a Dynamic Host Configuration Protocol (DHCP) service running on the network. Choose from the following options: Ignore IPv6 is ignored for this connection. Automatic NetworkManager uses router advertisement (RA) to create an automatic, stateless configuration. Automatic, addresses only NetworkManager uses RA to create an automatic, stateless configuration, but DNS servers and search domains are ignored and must be configured manually. Automatic, DHCP only NetworkManager does not use RA, but requests information from DHCPv6 directly to create a stateful configuration. Manual IPv6 parameters are configured manually for a static configuration. Link-Local Only A link-local address with the fe80::/10 prefix is assigned to the interface. If you selected a method that requires you to supply manual parameters, enter details of the IP address for this interface, the netmask, and the gateway in the Addresses field. Use the Add and Delete buttons to add or remove addresses. Enter a comma-separated list of DNS servers in the DNS servers field, and a comma-separated list of domains in the Search domains field for any domains that you want to include in name server lookups. Optionally, enter a name for this network connection in the DHCP client ID field. This name must be unique on the subnet. When you assign a meaningful DHCP client ID to a connection, it is easy to identify this connection when troubleshooting network problems. Deselect the Require IPv6 addressing for this connection to complete check box to allow the system to make this connection on an IPv4-enabled network if IPv6 configuration fails but IPv4 configuration succeeds. Figure 9.30. The IPv6 Settings tab 9.7.1.5.1. 
Editing IPv6 routes Red Hat Enterprise Linux configures a number of routes automatically based on the IP addresses of a device. To edit additional routes, click the Routes button. The Editing IPv6 routes dialog appears. Figure 9.31. The Editing IPv6 Routes dialog Click Add to add the IP address, netmask, gateway address, and metric for a new static route. Select Use this connection only for resources on its network to restrict connections only to the local network. 9.7.1.6. Restart a network device If you reconfigured a network that was already in use during installation, you must disconnect and reconnect the device in anaconda for the changes to take effect. Anaconda uses interface configuration (ifcfg) files to communicate with NetworkManager . A device becomes disconnected when its ifcfg file is removed, and becomes reconnected when its ifcfg file is restored, as long as ONBOOT=yes is set. Refer to the Red Hat Enterprise Linux 6.9 Deployment Guide available from https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/index.html for more information about interface configuration files. Press Ctrl + Alt + F2 to switch to virtual terminal tty2 . Move the interface configuration file to a temporary location: where device_name is the device that you just reconfigured. For example, ifcfg-eth0 is the ifcfg file for eth0 . The device is now disconnected in anaconda . Open the interface configuration file in the vi editor: Verify that the interface configuration file contains the line ONBOOT=yes . If the file does not already contain the line, add it now and save the file. Exit the vi editor. Move the interface configuration file back to the /etc/sysconfig/network-scripts/ directory: The device is now reconnected in anaconda . Press Ctrl + Alt + F6 to return to anaconda . | [
"mv /etc/sysconfig/network-scripts/ifcfg- device_name /tmp",
"vi /tmp/ifcfg- device_name",
"mv /tmp/ifcfg- device_name /etc/sysconfig/network-scripts/"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/sn-netconfig-x86 |
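After reconnecting a device as described above, you can verify from the root shell on tty2 that the interface configuration file is back in place and set to start at boot. This sketch assumes the device is eth0; substitute your own device name:
ls /etc/sysconfig/network-scripts/ifcfg-eth0           # the file must be back under network-scripts for the device to reconnect
grep ONBOOT /etc/sysconfig/network-scripts/ifcfg-eth0  # should print ONBOOT=yes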
Chapter 6. Optimizing MTR performance | Chapter 6. Optimizing MTR performance MTR performance depends on a number of factors, including hardware configuration, the number and types of files in the application, the size and number of applications to be evaluated, and whether the application contains source or compiled code. For example, a file that is larger than 10 MB may need a lot of time to process. In general, MTR spends about 40% of the time decompiling classes, 40% of the time executing rules, and the remainder of the time processing other tasks and generating reports. This section describes what you can do to improve the performance of MTR. 6.1. Deploying and running the application Try these suggestions first before upgrading hardware. If possible, run MTR against the source code instead of the archives. This eliminates the need to decompile additional JARs and archives. Specify a comma-separated list of the packages to be evaluated by MTR using the --packages argument on the <MTR_HOME>/bin/mtr-cli command line. If you omit this argument, MTR will decompile everything, which has a big impact on performance. Specify the --excludeTags argument where possible to exclude them from processing. Avoid decompiling and analyzing any unnecessary packages and files, such as proprietary packages or included dependencies. Increase your ulimit when analyzing large applications. See this Red Hat Knowledgebase article for instructions on how to do this for Red Hat Enterprise Linux. If you have access to a server that has better resources than your laptop or desktop machine, you may want to consider running MTR on that server. 6.2. Upgrading hardware If the application and command-line suggestions above do not improve performance, you may need to upgrade your hardware. If you have access to a server that has better resources than your laptop/desktop, then you may want to consider running MTR on that server. Very large applications that require decompilation have large memory requirements. 8 GB RAM is recommended. This allows 3 - 4 GB RAM for use by the JVM. An upgrade from a single or dual-core to a quad-core CPU processor provides better performance. Disk space and fragmentation can impact performance. A fast disk, especially a solid-state drive (SSD), with greater than 4 GB of defragmented disk space should improve performance. 6.3. Configuring MTR to exclude packages and files 6.3.1. Excluding packages You can exclude packages during decompilation and analysis to increase performance. References to these packages remain in the application's source code but excluding them avoids the decompilation and analysis of proprietary classes. Any packages that match the defined value are excluded. For example, you can use com.acme to exclude both com.acme.example and com.acme.roadrunner . You can exclude packages by either of the following methods: Using the --excludePackages argument. Specifying the packages in a file contained within one of the ignored locations. Each package should be included on a separate line, and the file must end in .package-ignore.txt . For example, see <MTR_HOME>/ignore/proprietary.package-ignore.txt . 6.3.2. Excluding files MTR can exclude specific files, such as included libraries or dependencies, during scanning and report generation. Excluded files are defined in a file with the .mtr-ignore.txt or .windup-ignore.txt extension within one of the ignored locations. These files contain a regex string detailing the name to exclude, with one file listed per line. 
For example, you can exclude the library ant.jar and any Java source files beginning with Example by using an ignore file that contains the following expressions, one per line:
.*ant.jar
.*Example.*\.java
6.3.3. Searching locations for exclusion MTR searches the following locations: ~/.mtr/ignore/ , ~/.windup/ignore/ , <MTR_HOME>/ignore/ , and any files and folders specified by the --userIgnorePath argument. Each of these files must conform to the rules specified for excluding packages or files, depending on the type of content to be excluded. | [
".*ant.jar .*Example.*\\.java"
]
| https://docs.redhat.com/en/documentation/migration_toolkit_for_runtimes/1.2/html/cli_guide/optimize-performance_cli-guide |
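The options described in this chapter can be combined on a single invocation. The following is only an illustrative sketch: the input and output paths, package names, and tag value are hypothetical placeholders rather than values taken from this guide, and it assumes the standard --input and --output arguments of the MTR CLI.
# analyze only com.example.app, skip a hypothetical proprietary package, and skip rules with an example tag
<MTR_HOME>/bin/mtr-cli --input /path/to/example-app.ear --output /path/to/report \
    --packages com.example.app --excludePackages com.example.proprietary \
    --excludeTags "information"
A package exclusion such as com.example.proprietary can equally be placed, one package per line, in a file ending in .package-ignore.txt within one of the ignored locations described above.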
Chapter 7. GitOps CLI for use with Red Hat OpenShift GitOps | Chapter 7. GitOps CLI for use with Red Hat OpenShift GitOps The GitOps argocd CLI is a tool for configuring and managing Red Hat OpenShift GitOps and Argo CD resources from a terminal. The GitOps CLI makes common GitOps administration tasks simpler and more concise to perform from the command line. You can install this CLI tool on different platforms. 7.1. Installing the GitOps CLI See Installing the GitOps CLI . 7.2. Additional resources What is GitOps? | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/cli_tools/gitops-argocd-cli-tools
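For orientation, typical invocations of the argocd CLI look like the following. This is a hedged sketch based on the upstream Argo CD command set rather than on this guide; the server host name and application name are placeholders.
# log in to the Argo CD API server exposed by the GitOps instance
argocd login openshift-gitops-server.example.com
# list the applications managed by Argo CD
argocd app list
# trigger a sync of a hypothetical application
argocd app sync my-app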
Providing feedback on JBoss EAP documentation | Providing feedback on JBoss EAP documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, you will be prompted to create one. Procedure Click the following link to create a ticket . Include the Document URL , the section number, and a description of the issue. Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/how_to_configure_server_security/proc_providing-feedback-on-red-hat-documentation_default
Part III. Advanced Camel Programming | Part III. Advanced Camel Programming This guide describes how to use the Apache Camel API. | null | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_development_guide/fusemrprog |
7.158. perl | 7.158. perl 7.158.1. RHBA-2015:1266 - perl bug fix update Updated perl packages that fix several bugs are now available for Red Hat Enterprise Linux 6. Perl is a high-level programming language that is commonly used for system administration utilities and web programming. Bug Fixes BZ# 1104827 Previously, when threads were created after a variable had been tied to an SDBM database using the SDBM_File Perl module, the Perl interpreter terminated unexpectedly when those threads were terminated. With this update, the DB_File, GDBM_File, NDBM_File, ODBM_File, and SDBM_File Perl modules have been modified to destroy their objects only from the thread context which created the objects. As a result, the destructors of the aforementioned file objects are now thread-safe. Note, however, that other operations on the objects cannot be called from other threads. In general, the DB_File, GDBM_File, NDBM_File, ODBM_File, and SDBM_File Perl modules remain thread-unsafe. BZ# 1086215 Previously, using the Module::Pluggable Perl module to locate plug-ins in a single-letter-named package did not work correctly. As a consequence, existing single-letter-named packages were not found. An upstream patch has been applied, and single-letter-named plug-ins are now located by Module::Pluggable correctly. BZ# 1161170 Previously, the perl-suidperl package consumed the libperl.so library from the perl-libs subpackage with no explicit package-version requirement. This could cause problems, for example, during upgrades. With this update, an explicit dependency on the same version of perl-libs has been added to perl-suidperl, which avoids accidental mixing of incompatible perl-suidperl and perl-libs packages on a system. BZ# 1025906 The Perl Locale::Maketext localization framework did not properly translate the backslash (\) characters. As a consequence, Perl rendered the backslashes as double (\\). With this update, Perl no longer escapes the backslashes in literal output strings, and they appear correctly. BZ# 1184194 Prior to this update, the Archive::Tar Perl module unpacked PAX headers into artificial PaxHeader subdirectories, which caused the extracted tree to be different from the archived tree. Consequently, installation of a Comprehensive Perl Archive Network (CPAN) distribution by the cpan client failed. This bug has been fixed, and it is now possible to install CPAN distributions archived with extended attributes. BZ# 1189041 Previously, when the SHA::Digest method was called on the corresponding class, Perl terminated unexpectedly with a segmentation fault. An upstream patch has been applied, and calling any SHA::Digest method on a class now yields a proper exception instead of a Perl crash. BZ# 1201191 Previously, due to earlier problems with threads, several tests were blocked for the IBM S/390, IBM System z, and PowerPC platforms in the Perl specification file. Consequently, when building the perl package, internal tests were not performed on these platforms, even though the original problems no longer occurred. Now, when building the perl package, the tests are performed on all supported architectures. Users of perl are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-perl
10.3.3. Establishing a Mobile Broadband Connection | 10.3.3. Establishing a Mobile Broadband Connection You can use NetworkManager 's mobile broadband connection abilities to connect to the following 2G and 3G services: 2G - GPRS ( General Packet Radio Service ) or EDGE ( Enhanced Data Rates for GSM Evolution ) 3G - UMTS ( Universal Mobile Telecommunications System ) or HSPA ( High Speed Packet Access ) Your computer must have a mobile broadband device (modem), which the system has discovered and recognized, in order to create the connection. Such a device may be built into your computer (as is the case on many notebooks and netbooks), or may be provided separately as internal or external hardware. Examples include PC card, USB Modem or Dongle, mobile or cellular telephone capable of acting as a modem. Procedure 10.3. Adding a New Mobile Broadband Connection You can configure a mobile broadband connection by opening the Network Connections window, clicking Add , and selecting Mobile Broadband from the list. Right-click on the NetworkManager applet icon in the Notification Area and click Edit Connections . The Network Connections window appears. Click the Add button to open the selection list. Select Mobile Broadband and then click Create . The Set up a Mobile Broadband Connection assistant appears. Under Create a connection for this mobile broadband device , choose the 2G- or 3G-capable device you want to use with the connection. If the dropdown menu is inactive, this indicates that the system was unable to detect a device capable of mobile broadband. In this case, click Cancel , ensure that you do have a mobile broadband-capable device attached and recognized by the computer and then retry this procedure. Click the Forward button. Select the country where your service provider is located from the list and click the Forward button. Select your provider from the list or enter it manually. Click the Forward button. Select your payment plan from the dropdown menu and confirm the Access Point Name ( APN ) is correct. Click the Forward button. Review and confirm the settings and then click the Apply button. Edit the mobile broadband-specific settings by referring to the Configuring the Mobile Broadband Tab description below . Procedure 10.4. Editing an Existing Mobile Broadband Connection Follow these steps to edit an existing mobile broadband connection. Right-click on the NetworkManager applet icon in the Notification Area and click Edit Connections . The Network Connections window appears. Select the connection you want to edit and click the Edit button. Select the Mobile Broadband tab. Configure the connection name, auto-connect behavior, and availability settings. Three settings in the Editing dialog are common to all connection types: Connection name - Enter a descriptive name for your network connection. This name will be used to list this connection in the Mobile Broadband section of the Network Connections window. Connect automatically - Check this box if you want NetworkManager to auto-connect to this connection when it is available. See Section 10.2.3, "Connecting to a Network Automatically" for more information. Available to all users - Check this box to create a connection available to all users on the system. Changing this setting may require root privileges. See Section 10.2.4, "User and System Connections" for details. Edit the mobile broadband-specific settings by referring to the Configuring the Mobile Broadband Tab description below . 
Saving Your New (or Modified) Connection and Making Further Configurations Once you have finished editing your mobile broadband connection, click the Apply button and NetworkManager will immediately save your customized configuration. Given a correct configuration, you can connect to your new or customized connection by selecting it from the NetworkManager Notification Area applet. See Section 10.2.1, "Connecting to a Network" for information on using your new or altered connection. You can further configure an existing connection by selecting it in the Network Connections window and clicking Edit to return to the Editing dialog. Then, to configure: Point-to-point settings for the connection, click the PPP Settings tab and proceed to Section 10.3.9.3, "Configuring PPP (Point-to-Point) Settings" ; IPv4 settings for the connection, click the IPv4 Settings tab and proceed to Section 10.3.9.4, "Configuring IPv4 Settings" ; or, IPv6 settings for the connection, click the IPv6 Settings tab and proceed to Section 10.3.9.5, "Configuring IPv6 Settings" . Configuring the Mobile Broadband Tab If you have already added a new mobile broadband connection using the assistant (see Procedure 10.3, "Adding a New Mobile Broadband Connection" for instructions), you can edit the Mobile Broadband tab to disable roaming if home network is not available, assign a network ID, or instruct NetworkManager to prefer a certain technology (such as 3G or 2G) when using the connection. Number The number that is dialed to establish a PPP connection with the GSM-based mobile broadband network. This field may be automatically populated during the initial installation of the broadband device. You can usually leave this field blank and enter the APN instead. Username Enter the user name used to authenticate with the network. Some providers do not provide a user name, or accept any user name when connecting to the network. Password Enter the password used to authenticate with the network. Some providers do not provide a password, or accept any password. APN Enter the Access Point Name ( APN ) used to establish a connection with the GSM-based network. Entering the correct APN for a connection is important because it often determines: how the user is billed for their network usage; and/or whether the user has access to the Internet, an intranet, or a subnetwork. Network ID Entering a Network ID causes NetworkManager to force the device to register only to a specific network. This can be used to ensure the connection does not roam when it is not possible to control roaming directly. Type Any - The default value of Any leaves the modem to select the fastest network. 3G (UMTS/HSPA) - Force the connection to use only 3G network technologies. 2G (GPRS/EDGE) - Force the connection to use only 2G network technologies. Prefer 3G (UMTS/HSPA) - First attempt to connect using a 3G technology such as HSPA or UMTS, and fall back to GPRS or EDGE only upon failure. Prefer 2G (GPRS/EDGE) - First attempt to connect using a 2G technology such as GPRS or EDGE, and fall back to HSPA or UMTS only upon failure. Allow roaming if home network is not available Uncheck this box if you want NetworkManager to terminate the connection rather than transition from the home network to a roaming one, thereby avoiding possible roaming charges. If the box is checked, NetworkManager will attempt to maintain a good connection by transitioning from the home network to a roaming one, and vice versa. 
PIN If your device's SIM ( Subscriber Identity Module ) is locked with a PIN ( Personal Identification Number ), enter the PIN so that NetworkManager can unlock the device. NetworkManager must unlock the SIM if a PIN is required in order to use the device for any purpose. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-Establishing_a_Mobile_Broadband_Connection |
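The procedure above uses the graphical NetworkManager tools documented for this release. As a rough command-line sketch only, newer NetworkManager releases can create an equivalent GSM connection with nmcli; this may not be available on the NetworkManager version covered here, and the connection name, APN, and PIN below are placeholders.
# create a mobile broadband (GSM) connection; gsm.home-only yes disallows roaming
nmcli connection add type gsm ifname '*' con-name "My Broadband" gsm.apn internet.example gsm.pin 1234 gsm.home-only yes
# activate the connection
nmcli connection up "My Broadband"
The gsm.apn, gsm.pin, and gsm.home-only properties correspond to the APN, PIN, and roaming settings described for the Mobile Broadband tab above.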