1.2. As Part of Planning Single Sign-On
1.2. As Part of Planning Single Sign-On As described in Section 1.1, "Confirming User Identities", every secure application requires at least a password before it can be accessed. Without a central identity store, every application maintains its own set of users and credentials, and a user has to enter a password for every single service or application that they open. This can require entering a password several times a day, maybe even every few minutes. Maintaining multiple passwords, and constantly being prompted to enter them, is a hassle for users and administrators. Single sign-on is a configuration which allows administrators to create a single password store so that users can log in once, using a single password, and be authenticated to all network resources. Red Hat Enterprise Linux supports single sign-on for several resources, including logging into workstations, unlocking screen savers, and accessing secured web pages using Mozilla Firefox. With other available system services such as PAM, NSS, and Kerberos, other system applications can be configured to use those identity sources. Single sign-on is both a convenience to users and another layer of security for the server and the network. Single sign-on hinges on secure and effective authentication. Red Hat Enterprise Linux provides two authentication mechanisms which can be used to enable single sign-on: Kerberos-based authentication, through both Kerberos realms and Active Directory domains, and smart card-based authentication. Both of these methods create a centralized identity store (either through a Kerberos realm or a certificate authority in a public key infrastructure), and the local system services then use those identity domains rather than maintaining multiple local stores.
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system-level_authentication_guide/sso
Template APIs
Template APIs OpenShift Container Platform 4.16 Reference guide for template APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/template_apis/index
7.2. Moving Resources Due to Failure
7.2. Moving Resources Due to Failure When you create a resource, you can configure the resource so that it will move to a new node after a defined number of failures by setting the migration-threshold option for that resource. Once the threshold has been reached, this node will no longer be allowed to run the failed resource until: The administrator manually resets the resource's failcount using the pcs resource failcount command. The resource's failure-timeout value is reached. There is no threshold defined by default. Note Setting a migration-threshold for a resource is not the same as configuring a resource for migration, in which the resource moves to another location without loss of state. The following example adds a migration threshold of 10 to the resource named dummy_resource , which indicates that the resource will move to a new node after 10 failures. You can add a migration threshold to the defaults for the whole cluster with the following command. To determine the resource's current failure status and limits, use the pcs resource failcount command. There are two exceptions to the migration threshold concept; they occur when a resource either fails to start or fails to stop. If the cluster property start-failure-is-fatal is set to true (which is the default), start failures cause the failcount to be set to INFINITY and thus always cause the resource to move immediately. Stop failures are slightly different and crucial. If a resource fails to stop and STONITH is enabled, then the cluster will fence the node in order to be able to start the resource elsewhere. If STONITH is not enabled, then the cluster has no way to continue and will not try to start the resource elsewhere, but will try to stop it again after the failure timeout.
[ "pcs resource meta dummy_resource migration-threshold=10", "pcs resource defaults migration-threshold=10" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/s1-failure_migration-haar
Chapter 12. Kernel
Chapter 12. Kernel The protobuf-c packages are now available for the little-endian variant of IBM Power Systems architecture This update adds the protobuf-c packages for the little-endian variant of IBM Power Systems architecture. The protobuf-c packages provide C bindings for Google's Protocol Buffer and are a prerequisite for the criu packages on the above mentioned architecture. The criu packages provide the Checkpoint/Restore in User space (CRIU) function, which provides the possibility to checkpoint and restore processes or groups of processes. (BZ#1289666) The CAN protocol has been enabled in the kernel The Controller Area Network (CAN) protocol kernel modules have been enabled, providing the device interface for CAN device drivers. CAN is a vehicle bus specification originally intended to connect the various micro-controllers in automobiles and has since extended to other areas. CAN is also used in industrial and machine controls where a high performance interface is required and other interfaces such as RS-485 are not sufficient. The functions exported from the CAN protocol modules are used by CAN device drivers to make the kernel aware of the devices and to allow applications to connect and transfer data. Enablement of CAN in the kernel allows the use of third party CAN drivers and applications to implement CAN based systems. (BZ#1311631) Persistent memory support added to kexec-tools The Linux kernel now supports E820_PRAM and E820_PMEM type for the Non-Volatile Dual In-line Memory Module (NVDIMM) memory devices. A patch has been backported from the upstream, which ensures that kexec-tools support these memory devices as well. (BZ#1282554) libndctl - userspace nvdimm management library The libndctl userspace library has been added. It is a collection of C interfaces to the ioctl and sysfs entry points provided by the kernel libnvdimm subsystem. The library enables higher level management software for NVDIMM-enabled platforms and also provides a command-line interface for managing NVDIMMs. (BZ#1271425) New symbols for the kABI whitelist to support the hpvsa and hpdsa drivers This update adds a set of symbols to the kernel Application Binary Interface (kABI) whitelist, which ensures the support for the hpvsa and hpdsa drivers. The newly added symbols are: scsi_add_device scsi_adjust_queue_depth scsi_cmd_get_serial scsi_dma_map scsi_dma_unmap scsi_scan_host (BZ#1274471) crash rebased to version 7.1.5 The crash packages have been upgraded to upstream version 7.1.5, which provides several bug fixes and a number of enhancements over the version. Notably, this rebase adds new options such as dis -s , dis -f , sys -i , list -l , new support for Quick Emulator (QEMU) generated Executable and Linkable Format (ELF) vmcores on the 64-bit ARM architectures, and several updates required for support of recent upstream kernels. It is safer and more efficient to rebase the crash packages than to backport selectively the individual patches. (BZ# 1292566 ) New package: crash-ptdump-command Crash-ptdump-command is a new rpm package which provides a crash extension module to add ptdump subcommand to the crash utility. The ptdump subcommand retrieves and decodes the log buffer generated by the Intel Processor Trace facility from the vmcore file and outputs to the files. This new package is designed for EM64T and AMD64 architectures. 
(BZ#1298172) Ambient capabilities are now supported Capabilities are per-thread attributes used by the Linux kernel to divide the privileges traditionally associated with superuser privileges into multiple distinct units. This update adds support for ambient capabilities to the kernel. Ambient capabilities are a set of capabilities that are preserved when a program is executed using the execve() system call. Only capabilities which are permitted and inheritable can be ambient. You can use the prctl() call to modify ambient capabilities. See the capabilities(7) man page for more information about kernel capabilities in general, and the prctl(2) man page for information about the prctl call. (BZ#1165316) cpuid is now available With this update, the cpuid utility is available in Red Hat Enterprise Linux. This utility dumps detailed information about the CPU(s) gathered from the CPUID instruction, and also determines the exact model of CPU(s). It supports Intel, AMD, and VIA CPUs. (BZ#1307043) FC-FCoE symbols have been added to KABI white lists With this update, a list of symbols belonging to the libfc and libfcoe kernel modules has been added to the kernel Application Binary Interface (KABI) white lists. This ensures that the Fibre Channel over Ethernet (FCoE) driver, which depends on libfc and libfcoe , can safely use the newly added symbols. (BZ#1232050) New package: opal-prd for OpenPower systems The new opal-prd package contains a daemon that handles hardware-specific recovery processes, and should be run as a background system process after boot. It interacts with OPAL firmware to capture hardware error causes, log events to the management processor, and handles recoverable errors where suitable. (BZ#1224121) New package: libcxl The new libcxl package contains the user-space library for applications in user space to access CAPI hardware via kernel cxl functions. It is available on IBM Power Systems and the little-endian variant of IBM Power Systems architecture. (BZ#1305080) Kernel support for the newly added iproute commands This update adds kernel support to ensure the correct functionality of newly added iproute commands. The provided patch set includes: Extension of the IPsec interface, which allows prefixed policies to be hashed. Inclusion of the hash prefixed policies based on preflen thresholds. Configuration of policy hash table thresholds by netlink. (BZ#1222936) Backport of the PID cgroup controller This update adds the new Process Identifier (PID) controller. This controller accounts for the processes per cgroup and allows a cgroup hierarchy to stop any new tasks from being forked or cloned after a certain limit is reached. (BZ#1265339) mpt2sas and mpt3sas merged The source codes of mpt2sas and mpt3sas drivers have been merged. Unlike in upstream, Red Hat Enterprise Linux 7 continues to maintain two binary drivers for compatibility reasons. (BZ#1262031) Allow multiple .ko files to be specified in ksc Previously, it was not possible to add multiple .ko files in a single run of the ksc utility. Consequently, the drivers that contain multiple kernel modules were not passed to ksc in a single run. With this update, the -k option can be specified multiple times in the same run. Thus single run of ksc can be used to query symbols used by several kernel modules. As a result, one file with symbols used by all modules is generated. (BZ#906659) dracut update The dracut initramfs generator has been updated with a number of bug fixes and enhancements over the version. 
Notably: dracut gained a new kernel command-line option rd.emergency=[reboot|poweroff|halt] , which specifies what action to execute in case of a critical failure. When using rd.emergency=[reboot|poweroff|halt] , the rd.shell=0 option should also be specified. The reboot , poweroff , and halt commands now work in the emergency shell of dracut . dracut now supports multiple bond, bridge, and VLAN configurations on the kernel command line. The device timeout can now be specified on the kernel command line using the rd.device.timeout=<seconds> option. DNS name servers specified on the kernel command line are now used in DHCP. dracut now supports 20-byte MAC addresses. Maximum Transmission Unit (MTU) and MAC addresses are now set correctly for DHCP and IPv6 Stateless Address AutoConfiguration (SLAAC). The ip= kernel command line option now supports MAC addresses in brackets. dracut now supports the NFS over RDMA (NFSoRDMA) module. Support for kdump has been added to Fibre Channel over Ethernet (FCoE) devices. The configuration of FCoE devices is compiled into the kdump initramfs . Kernel crash dumps can now be saved to FCoE devices. dracut now supports the --install-optional <file list> option and the install_optional_items+= <file>[ <file> ...] configuration file directive. If you use the new option or directive, the files are installed if they exist, and no error is returned if they do not exist. dracut DHCP now recognizes the rfc3442-classless-static-routes option, which enables using classless network addresses. (BZ# 1359144 , BZ# 1178497 , BZ# 1324454 , BZ# 1194604 , BZ# 1282679 , BZ# 1282680 , BZ# 1332412 , BZ#1319270, BZ# 1271656 , BZ# 1367374 , BZ#1169672, BZ# 1222529 , BZ#1260955) Support for Wacom Cintiq 27 QHD The Wacom Cintiq 27 QHD tablets are now supported in Red Hat Enterprise Linux 7. (BZ#1342989) Full support for Intel(R) Omni-Path Architecture (OPA) kernel driver The Intel(R) Omni-Path Architecture (OPA) kernel driver, previously available as a Technology Preview, is now fully supported. Intel(R) OPA provides Host Fabric Interconnect (HFI) hardware with initialization and setup for high performance data transfers (high bandwidth, high message rate, low latency) between compute and I/O nodes in a clustered environment. For instructions on how to obtain Intel(R) Omni-Path Architecture documentation, see https://access.redhat.com/articles/2039623 . (BZ#1374826) cyclictest --smi option available for non-root users With this update, it is possible to use the cyclictest program with the --smi option as a non-root user, provided that the user also belongs to the realtime group. On processors that support system management interrupts (SMIs), --smi displays a report on the system's SMIs, which was previously only available for root users. (BZ# 1346771 ) Support added for the new Smart Array storage adapters In Red Hat Enterprise Linux 7.2 and older versions, the new Smart Array storage adapters were not officially supported. However, these adapters were detected by the aacraid driver and the system appeared to work correctly. With this update, the new Smart Array storage adapters are properly supported by the new smartpqi driver. Note that when you update, the driver name for these adapters will change. (BZ#1273115) The Linux kernel now supports the trusted virtual function (VF) concept The upstream code has been backported into the Linux kernel to provide support for the trusted virtual function (VF) concept.
As a result, the trusted VFs are now permitted to enable multicast promiscuous mode, which allows them to have more than 30 IPv6 addresses assigned. The trusted VFs are also permitted to overwrite media access control (MAC) addresses. (BZ#1302101) Seccomp mode 2 is now supported on IBM Power Systems This update adds support for seccomp mode 2 on IBM Power Systems. Seccomp mode 2 involves the parsing of Berkeley Packet Filter (BPF) configuration files to define system call filtering. This mode provides notable security enhancements, which are essential for the adoption of containers in Linux on IBM Power Systems. (BZ#1186835) Memory Bandwidth Monitoring has been added This update adds Memory Bandwidth Monitoring (MBM) into the Linux kernel. MBM is a CPU feature included in the family of platform quality of service (QoS) features that is used to track memory bandwidth usage for a specific task, or group of tasks, associated with a Resource Monitoring ID (RMID). (BZ#1084618) brcmfmac now supports Broadcom wireless cards The brcmfmac kernel driver has been updated to support Broadcom BCM4350 and BCM43602 wireless cards. (BZ#1298446) The autojoin option has been added to the ip addr command to allow multicast group join or leave Previously, there was no method to indicate Internet Group Management Protocol (IGMP) membership to Ethernet switches that do multicast pruning. Consequently, those switches did not replicate packets to the host's port. With this update, the ip addr command has been extended with the autojoin option, which enables a host to join or leave a multicast group. (BZ#1267398) Open vSwitch now supports NAT This update adds Network Address Translation (NAT) support to the Open vSwitch kernel module. (BZ#1297465) The page tables are now initialized in parallel Previously, the page tables were initialized serially on Non-Uniform Memory Access (NUMA) systems, based on Intel EM64T, Intel 64, and AMD64 architectures. Consequently, large servers could perform slowly at boot time. With this update, a set of patches has been backported to ensure that memory initialization is mostly done in parallel by node-local CPUs as a part of node activation. As a result, systems with 16 TB to 32 TB of memory now boot about two times faster compared to the previous version. (BZ#727269) The Linux kernel now supports Intel MPX This update adds support for Intel Memory Protection Extensions (MPX) to the Linux kernel. Intel MPX is a set of extensions to the Intel 64 architecture. Intel MPX, together with compiler, runtime library, and operating system support, increases the robustness and security of software by checking pointer references whose normal compile-time intentions can be maliciously exploited at run time due to buffer overflows. (BZ#1138650) ftrace now prints command names as expected When the trylock() function did not successfully acquire a lock, saving a command name in the ftrace kernel tracer failed. As a consequence, ftrace did not properly print command names in the /sys/kernel/debug/tracing file. With this update, recording of the command names has been fixed, and ftrace now prints command names as expected. Users are also now able to set the number of stored commands by setting the saved_cmdlines_size kernel configuration parameter. (BZ#1117093) The shared memory that was swapped out is now visible in /proc/<pid>/smaps Prior to this update, swapped-out shared memory appeared neither in the /proc/<pid>/status file, nor in the /proc/<pid>/smaps file.
This update adds per-process accounting of swapped-out shared memory, including sysV shm , shared anonymous mapping and mapping to a tmpfs file. Swapped-out shared memory now appears in /proc/<pid>/smaps . However, swapped-out shared memory is not reflected in /proc/<pid>/status , and swapped-out shmem pages therefore remain invisible in certain tools such as procps . (BZ#838926) Kernel UEFI support update The Unified Extensible Firmware Interface (UEFI) support in the kernel has been updated with a set of selected patches from the upstream kernel. This set provides a number of bug fixes and enhancements over the previous version. (BZ#1310154) Mouse controller now works on guests with Secure Boot Red Hat Enterprise Linux now supports a mouse controller on guest virtual machines that have the Secure Boot feature enabled. This ensures mouse functionality on Red Hat Enterprise Linux guests running on hypervisors that enable secure boot by default. (BZ#1331578) The RealTek RTS520 card reader is now supported This update adds support for the RealTek RTS520 card reader. (BZ#1280133) Tunnel devices now support lockless xmit Previously, tunnel devices, which used the pfifo_fast queue discipline by default, required the serialization lock for the tx path. With this update, per-CPU variables are used for statistic accounting, and a serialization lock on the tx path is not required. As a result, the user space is now allowed to configure a noqueue queue discipline with no lock required on the xmit path, which significantly improves tunnel device xmit performance. (BZ#1328874) Update of Chelsio drivers Chelsio NIC, iWARP, vNIC and iSCSI drivers have been updated to their most recent versions, which add several bug fixes and enhancements over the previous versions. The most notable enhancements include: ethtool support to get adapter statistics, ethtool support to dump channel statistics, ethtool support to dump loopback port statistics, a debugfs entry to dump CIM MA logic analyzer logs, a debugfs entry to dump CIM PIF logic analyzer contents, a debugfs entry to dump channel rate, a debugfs entry to enable backdoor access, debugfs support to dump meminfo, MPS tracing support, hardware time stamp support for RX, and device IDs for T6 adapters. (BZ#1275829) Support for 25G, 50G and 100G speed modes for Chelsio drivers With this update, a set of patches has been backported into the Linux kernel that add definitions for 25G, 50G and 100G speed modes for Chelsio drivers. This patch set also adds the link mode mask API to the cxgb4 and cxgb4vf drivers. (BZ#1365689) mlx5 now supports NFSoRDMA With this update, the mlx5 driver supports export of Network File System over Remote Direct Memory Access (NFSoRDMA). As a result, customers can now mount NFS shares over RDMA and, from the client computer, list files on the NFS share using the ls command and use the touch command on new files. This feature allows some jobs to run from shared storage, which is useful when you have large, InfiniBand-connected grids that keep growing in size. (BZ#1262728) I2C has been enabled on 6th Generation Intel Core Processors Starting from this update, the I2C devices that are controlled by a kernel driver are supported on 6th Generation Intel Core Processors. (BZ#1331018) mlx4 and mlx5 now support RoCE This update adds support for the Remote Direct Memory Access over Converged Ethernet (RoCE) network protocol to the mlx4 and mlx5 drivers.
RoCE is a mechanism to provide efficient server-to-server data transfer through Remote Direct Memory Access (RDMA) with very low latencies on lossless Ethernet networks. RoCE encapsulates InfiniBand (IB) transport in one of two Ethernet packets: - RoCEv1 - dedicated ether type (0x8915) - RoCEv2 - User Datagram Protocol (UDP) and dedicated UDP port (4791). Both RoCE versions are now supported for mlx4 and mlx5 . Starting from this update, mlx4 supports the RoCE Virtual Function Link Aggregation protocol, which provides failover and link aggregation capabilities to mlx4 device physical ports. Only one IB port, which represents the two physical ports, is exposed to the application layer. (BZ#1275423, BZ#1275187, BZ#1275209) Support of cross-channel synchronization Starting from this update, the Linux kernel supports cross-channel synchronization on AMD64 and Intel 64, IBM Power Systems and 64-bit ARM architectures. Devices now have the capability to synchronize or serialize execution of I/O operations on different work queues without any intervention from the host software. (BZ#1275711) Support for SGI UV4 has been added to the Linux kernel Starting from this update, the Linux kernel supports the SGI UV4 platform. (BZ#1276458) Updated support of TPM 2.0 Support of Trusted Platform Module (TPM) version 2.0 has been updated in the Linux kernel. (BZ#1273499) Support of 12 TB of RAM With this update, the kernel is certified to support 12 TB of RAM. This new feature reflects the advances in memory technology and provides the potential to meet the technological requirements of future servers that will be released during the lifetime of Red Hat Enterprise Linux 7. This feature is available for AMD64 and Intel 64 architectures. (BZ#797488) Full support for 10GbE RoCE Express feature for RDMA With Red Hat Enterprise Linux 7.3, the 10GbE RDMA over Converged Ethernet (RoCE) Express feature becomes fully supported. This makes it possible to use Ethernet and Remote Direct Memory Access (RDMA), as well as the Direct Access Programming Library (DAPL) and OpenFabrics Enterprise Distribution (OFED) APIs, on IBM z Systems. Before using this feature on an IBM z13 system, ensure that the minimum required service is applied: z/VM APAR UM34525 and HW ycode N98778.057 (bundle 14). (BZ#1289933) zEDC compression fully supported on IBM z Systems Red Hat Enterprise Linux 7.3 and later provide full support for the Generic Workqueue (GenWQE) engine device driver. The initial task of the driver is to perform zlib-style compression and decompression of the RFC1950, RFC1951 and RFC1952 formats, but it can be adjusted to accelerate a variety of other tasks. (BZ#1289929) LPAR Watchdog for IBM z Systems The enhanced watchdog driver for IBM z Systems has become fully supported. This driver supports Linux logical partitions (LPAR), as well as Linux guests in the z/VM hypervisor, and provides automatic reboot and automatic dump capabilities if a Linux system becomes unresponsive. (BZ#1278794)
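As a hedged illustration of the ip addr autojoin option described earlier in this chapter (the interface name eth0 and the group address 239.1.1.1 are placeholders, not values from the release note), a host can join and later leave a multicast group as follows:
    ip addr add 239.1.1.1/32 dev eth0 autojoin
    ip addr del 239.1.1.1/32 dev eth0
Adding the address with autojoin signals IGMP membership to switches that perform multicast pruning; deleting the address leaves the group.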
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.3_release_notes/new_features_kernel
Chapter 2. Configuring Red Hat build of Keycloak for production
Chapter 2. Configuring Red Hat build of Keycloak for production A Red Hat build of Keycloak production environment provides secure authentication and authorization for deployments that range from on-premise deployments that support a few thousand users to deployments that serve millions of users. This chapter describes the general areas of configuration required for a production ready Red Hat build of Keycloak environment. This information focuses on the general concepts instead of the actual implementation, which depends on your environment. The key aspects covered in this chapter apply to all environments, whether it is containerized, on-premise, GitOps, or Ansible. 2.1. TLS for secure communication Red Hat build of Keycloak continually exchanges sensitive data, which means that all communication to and from Red Hat build of Keycloak requires a secure communication channel. To prevent several attack vectors, you enable HTTP over TLS, or HTTPS, for that channel. To configure secure communication channels for Red Hat build of Keycloak, see Configuring TLS and Configuring outgoing HTTP requests . 2.2. The hostname for Red Hat build of Keycloak In a production environment, Red Hat build of Keycloak instances usually run in a private network, but Red Hat build of Keycloak needs to expose certain public facing endpoints to communicate with the applications to be secured. For details on the endpoint categories and instructions on how to configure the public hostname for them, see Configuring the hostname . 2.3. Reverse proxy in a distributed environment Apart from Configuring the hostname , production environments usually include a reverse proxy / load balancer component. It separates and unifies access to the network used by your company or organization. For a Red Hat build of Keycloak production environment, this component is recommended. For details on configuring proxy communication modes in Red Hat build of Keycloak, see Using a reverse proxy . That chapter also recommends which paths should be hidden from public access and which paths should be exposed so that Red Hat build of Keycloak can secure your applications. 2.4. Production grade database The database used by Red Hat build of Keycloak is crucial for the overall performance, availability, reliability and integrity of Red Hat build of Keycloak. For details on how to configure a supported database, see Configuring the database . 2.5. Support for Red Hat build of Keycloak in a cluster To ensure that users can continue to log in when a Red Hat build of Keycloak instance goes down, a typical production environment contains two or more Red Hat build of Keycloak instances. Red Hat build of Keycloak runs on top of JGroups and Infinispan, which provide a reliable, high-availability stack for a clustered scenario. When deployed to a cluster, the embedded Infinispan server communication should be secured. You secure this communication either by enabling authentication and encryption or by isolating the network used for cluster communication. To find out more about using multiple nodes, the different caches and an appropriate stack for your environment, see Configuring distributed caches . 2.6. Configure Red Hat build of Keycloak Server with IPv4 or IPv6 The system properties java.net.preferIPv4Stack and java.net.preferIPv6Addresses are used to configure the JVM for use with IPv4 or IPv6 addresses. By default, Red Hat build of Keycloak is accessible via IPv4 and IPv6 addresses at the same time. 
To run only with IPv4 addresses, you need to specify the property java.net.preferIPv4Stack=true . This property ensures that any hostname-to-IP-address conversion always returns an IPv4 address variant. These system properties are conveniently set through the JAVA_OPTS_APPEND environment variable. For example, to change the IP stack preference to IPv4, set an environment variable as follows: export JAVA_OPTS_APPEND="-Djava.net.preferIPv4Stack=true"
[ "export JAVA_OPTS_APPEND=\"-Djava.net.preferIPv4Stack=true\"" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/22.0/html/server_guide/configuration-production-
Node APIs
Node APIs OpenShift Container Platform 4.17 Reference guide for node APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/node_apis/index
Chapter 5. Creating Multus networks [Technology Preview]
Chapter 5. Creating Multus networks [Technology Preview] OpenShift Container Platform uses the Multus CNI plug-in to allow chaining of CNI plug-ins. You can configure your default pod network during cluster installation. The default network handles all ordinary network traffic for the cluster. Important Multus support is a Technology Preview feature that is supported only on bare metal and VMware deployments. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. You can define an additional network based on the available CNI plug-ins and attach one or more of these networks to your pods. To attach additional network interfaces to a pod, you must create configurations that define how the interfaces are attached. You specify each interface by using a NetworkAttachmentDefinition (NAD) custom resource (CR). A CNI configuration inside each NetworkAttachmentDefinition defines how that interface is created. OpenShift Data Foundation uses the CNI plug-in called macvlan. Creating a macvlan-based additional network allows pods on a host to communicate with other hosts and pods on those hosts using a physical network interface. Each pod that is attached to a macvlan-based additional network is provided a unique MAC address. 5.1. Creating network attachment definitions To utilize Multus, an already working cluster with the correct networking configuration is required; see Recommended network configuration and requirements for a Multus configuration . You can select the newly created NetworkAttachmentDefinition (NAD) during the Storage Cluster installation. This is the reason you must create the NAD before you create the Storage Cluster. As detailed in the Planning Guide, the Multus networks you create depend on the number of available network interfaces you have for OpenShift Data Foundation traffic. It is possible to separate all of the storage traffic onto one of the two interfaces (one interface used for default OpenShift SDN) or to further segregate storage traffic into client storage traffic (public) and storage replication traffic (private or cluster). The following is an example NetworkAttachmentDefinition for all the storage traffic, public and cluster, on the same interface. It requires one additional interface on all schedulable nodes (OpenShift default SDN on separate network interface): Note All network interface names must be the same on all the nodes attached to the Multus network (that is, ens2 for ocs-public-cluster ). The following is an example NetworkAttachmentDefinition for storage traffic on separate Multus networks, public, for client storage traffic, and cluster, for replication traffic. It requires two additional interfaces on OpenShift nodes hosting object storage device (OSD) pods and one additional interface on all other schedulable nodes (OpenShift default SDN on separate network interface): Example NetworkAttachmentDefinition : Note All network interface names must be the same on all the nodes attached to the Multus networks (that is, ens2 for ocs-public , and ens3 for ocs-cluster ).
[ "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: ocs-public-cluster namespace: openshift-storage spec: config: '{ \"cniVersion\": \"0.3.1\", \"type\": \"macvlan\", \"master\": \"ens2\", \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.168.1.0/24\" } }'", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: ocs-public namespace: openshift-storage spec: config: '{ \"cniVersion\": \"0.3.1\", \"type\": \"macvlan\", \"master\": \"ens2\", \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.168.1.0/24\" } }'", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: ocs-cluster namespace: openshift-storage spec: config: '{ \"cniVersion\": \"0.3.1\", \"type\": \"macvlan\", \"master\": \"ens3\", \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.168.2.0/24\" } }'" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/managing_and_allocating_storage_resources/creating-multus-networks_rhodf
Chapter 54. Authentication and Interoperability
Chapter 54. Authentication and Interoperability sudo unexpectedly denies access when performing group lookups This problem occurs on systems that meet all of these conditions: A group name is configured in a sudoers rule available through multiple Name Service Switch (NSS) sources, such as files or sss . The NSS priority is set to local group definitions. This is true when the /etc/nsswitch.conf file includes the following line: The sudo Defaults option named match_group_by_gid is set to true . This is the default value for the option. Because of the NSS source priority, when the sudo utility tries to look up the GID of the specified group, sudo receives a result that describes only the local group definition. Therefore, if the user is a member of the remote group, but not the local group, the sudoers rule does not match, and sudo denies access. To work around this problem, choose one of the following: Explicitly disable the match_group_by_gid Defaults option for sudoers . Open the /etc/sudoers file, and add this line: Configure NSS to prioritize the sss NSS source over files . Open the /etc/nsswitch.conf file, and make sure it lists sss before files : This ensures that sudo permits access to users that belong to the remote group. (BZ#1293306) The KCM credential cache is not suitable for a large number of credentials in a single credential cache If the credential cache contains too many credentials, Kerberos operations, such as klist , fail due to a hardcoded limit on the buffer used to transfer data between the sssd-kcm component and the sssd-secrets component. To work around this problem, add the ccache_storage = memory option in the [kcm] section of the /etc/sssd/sssd.conf file. This instructs the kcm responder to only store the credential caches in-memory, not persistently. Note that if you do this, restarting the system or sssd-kcm clears the credential caches. (BZ# 1448094 ) The sssd-secrets component crashes when it is under load When the sssd-secrets component receives many requests, the situation triggers a bug in the Network Security Services (NSS) library that causes sssd-secrets to terminate unexpectedly. However, the systemd service restarts sssd-secrets for the request, which means that the denial of service is only temporary. (BZ# 1460689 ) SSSD does not correctly handle multiple certificate matching rules with the same priority If a given certificate matches multiple certificate matching rules with the same priority, the System Security Services Daemon (SSSD) uses only one of the rules. As a workaround, use a single certificate matching rule whose LDAP filter consists of the filters of the individual rules concatenated with the | (or) operator. For examples of certificate matching rules, see the sss-certmap(5) man page. (BZ# 1447945 ) SSSD can look up only unique certificates in ID overrides When multiple ID overrides contain the same certificate, the System Security Services Daemon (SSSD) is unable to resolve queries for the users that match the certificate. An attempt to look up these users does not return any user. Note that looking up users by using their user name or UID works as expected. (BZ# 1446101 ) The ipa-advise command does not fully configure smart card authentication The ipa-advise config-server-for-smart-card-auth and ipa-advise config-client-for-smart-card-auth commands do not fully configure the Identity Management (IdM) server and client for smart card authentication.
As a consequence, after running the script that the ipa-advise command generated, smart card authentication fails. To work around the problem, see the manual steps for the individual use case in the Linux Domain Identity, Authentication, and Policy Guide: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Linux_Domain_Identity_Authentication_and_Policy_Guide/smart-cards.html (BZ# 1455946 ) The libwbclient library fails to connect to Samba shares hosted on Red Hat Enterprise Linux 7.4 The interface between Samba and the System Security Services Daemon's (SSSD) Winbind plug-in implementation changed. However, this change is missing in SSSD. As a consequence, systems that use the SSSD libwbclient library instead of the Winbind daemon fail to access shares provided by Samba running on Red Hat Enterprise Linux 7.4. There is no workaround available, and Red Hat recommends not upgrading to Red Hat Enterprise Linux 7.4 if you are using the libwbclient library without running the Winbind daemon. (BZ# 1462769 ) Certificate System subsystems experience communication problems with TLS_ECDHE_RSA_* ciphers and certain HSMs When certain HSMs are used while TLS_ECDHE_RSA_* ciphers are enabled, subsystems experience communication problems. The issue occurs in the following scenarios: When a CA has been installed and a second subsystem is being installed and tries to contact the CA as a security domain, thus preventing the installation from succeeding. While performing a certificate enrollment on the CA, when archival is required, the CA encounters the same communication problem with the KRA. This scenario can only occur if the offending ciphers were temporarily disabled for the installation. To work around this problem, keep the TLS_ECDHE_RSA_* ciphers turned off if possible. Note that while Perfect Forward Secrecy provides added security by using the TLS_ECDHE_RSA_* ciphers, each SSL session takes about three times longer to establish. Also, the default TLS_RSA_* ciphers are adequate for Certificate System operations. (BZ#1256901)
[ "sudoers: files sss", "Defaults !match_group_by_gid", "sudoers: sss files" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.4_release_notes/known_issues_authentication_and_interoperability
Chapter 7. Data Roles
Chapter 7. Data Roles 7.1. Data Roles Data roles, also called entitlements, are sets of permissions defined per VDB that dictate data access (create, read, update, delete). Data roles use a fine-grained permission system that JBoss Data Virtualization will enforce at runtime and provide audit log entries for access violations. Refer to the Administration and Configuration Guide and Development Guide: Server Development for more information about Logging and Custom Logging. Prior to applying data roles, you should consider restricting source system access through the fundamental design of your VDB. Foremost, JBoss Data Virtualization can only access source entries that are represented in imported metadata. You should narrow imported metadata to only what is necessary for use by your VDB. When using Teiid Designer, you may then go further and modify the imported metadata at a granular level to remove specific columns or indicate tables that are not to be updated, etc. If data role validation is enabled and data roles are defined in a VDB, then access permissions will be enforced by the JBoss Data Virtualization Server. The use of data roles may be disabled system wide using the setting for the teiid subsystem policy-decider-module. Data roles also have built-in system functions (see Section 2.4.18, "Security Functions" ) that can be used for row-based and other authorization checks. The hasRole system function will return true if the current user has the given data role. The hasRole function can be used in procedure or view definitions to allow for a more dynamic application of security - which allows for things such as value masking or row level security. Note See the Security Guide for details on using an alternative authorization scheme. Warning Data roles are only checked if present in a VDB. A VDB deployed without data roles can be used by any authenticated user.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/chap-Data_Roles
Chapter 1. Enabling one-way SSL/TLS for management interfaces and applications
Chapter 1. Enabling one-way SSL/TLS for management interfaces and applications SSL/TLS, or Transport Layer Security (TLS), is a certificate-based security protocol that is used to secure the data transfer between two entities communicating over a network. You can enable one-way SSL/TLS both for the JBoss EAP management interfaces and the applications deployed on JBoss EAP. For more information, see the following procedures: Enabling one-way SSL/TLS for management interfaces . Enabling one-way SSL/TLS for applications deployed on JBoss EAP . 1.1. Enabling one-way SSL/TLS for management interfaces Enable one-way SSL/TLS for management interfaces so that the communication between JBoss EAP management interfaces and the clients connecting to the interfaces is secure. To enable one-way SSL/TLS for management interfaces, you can use the following procedures: Enabling one-way SSL/TLS for management interfaces by using the wizard : Use this procedure to quickly set up SSL/TLS using a CLI-based wizard. Elytron creates the required resources for you based on your inputs to the wizard. Enable one-way SSL/TLS for management interfaces by using the subsystem commands : Use this procedure to configure the required resources for enabling SSL/TLS manually. Manually configuring the resources gives you more control over the server configuration. Additionally, you can disable SSL/TLS for management interfaces using the procedure Disabling SSL/TLS for management interfaces by using the security command . 1.1.1. Enabling one-way SSL/TLS for management interfaces by using the wizard Elytron provides a wizard to quickly set up SSL/TLS. You can either use an existing keystore containing certificates or use the keystore and self-signed certificates that the wizard generates to enable SSL/TLS. You can also obtain and use certificates from the Let's Encrypt certificate authority by using the --lets-encrypt option. For information about Let's Encrypt, see the Let's Encrypt documentation . Use the self-signed certificates the wizard generates to enable SSL/TLS for testing and development purposes only. For production environments always use certificate authority (CA)-signed certificates. Important Do not use self-signed certificates in a production environment. Use only the certificates signed by a certificate authority (CA). The wizard configures the following resources that are required to enable SSL/TLS for management interfaces: key-store key-manager server-ssl-context The server-ssl-context is then applied to http-interface . Elytron names each resource as resource-type-UUID . For example, key-store-9e35a3be-62bb-4fff-afc2-2d8d141b82bc. The universally unique identifier (UUID) helps avoid name collisions for the resources. Prerequisites JBoss EAP is running. Procedure Launch the wizard to configure one-way SSL/TLS for management interfaces by entering the following command in the management CLI. Syntax Enter the required information when prompted. Use the --lets-encrypt option to obtain and use certificates from the Let's Encrypt certificate authority. If SSL/TLS is already enabled for management interfaces, the wizard exits with the following message: To change the existing configuration, first disable SSL/TLS for management interfaces and then create a new configuration. For information about disabling SSL/TLS for management interfaces, see Disabling SSL/TLS for management interfaces by using the wizard . Note To enable one-way SSL/TLS, enter n or blank when prompted to enable SSL mutual authentication.
Setting mutual authentication enables two-way SSL/TLS. Example of using the wizard interactively Example inputs to the wizard prompts After you enter y , the server reloads. If you configured a self-signed certificate, used the wizard to generate a self-signed certificate, or configured a certificate that is not trusted by the Java virtual machine (JVM), the management CLI prompts you to accept the certificate that the server presents. Enter T or P to proceed with the connection. You get the following output: Verification Verify SSL/TLS by connecting with the management CLI client. You can test SSL/TLS by placing an Elytron client SSL context in a configuration file and then connecting to the server using the management CLI and referencing the configuration file. Navigate to the directory containing the keystore file. In this example, the keystore file exampleKeystore.pkcs12 was generated in the server's standalone/configuration directory. Example Create a client trust-store with server certificates. Syntax Example If you used a self-signed certificate, you are prompted to trust the certificate. Define the client-side SSL context in a file, for example example-security.xml . Syntax <?xml version="1.0" encoding="UTF-8"?> <configuration> <authentication-client xmlns="urn:elytron:client:1.2"> <key-stores> <key-store name="${key-store_name}" type="PKCS12" > <file name="${path_to_truststore}"/> <key-store-clear-password password="${keystore_password}" /> </key-store> </key-stores> <ssl-contexts> <ssl-context name="${ssl_context_name}"> <trust-store key-store-name="${trust_store_name}" /> </ssl-context> </ssl-contexts> <ssl-context-rules> <rule use-ssl-context="${ssl_context_name}" /> </ssl-context-rules> </authentication-client> </configuration> Example <?xml version="1.0" encoding="UTF-8"?> <configuration> <authentication-client xmlns="urn:elytron:client:1.2"> <key-stores> <key-store name="clientStore" type="PKCS12" > <file name=" JBOSS_HOME /standalone/configuration/client.truststore.pkcs12"/> <key-store-clear-password password="secret" /> </key-store> </key-stores> <ssl-contexts> <ssl-context name="client-SSL-context"> <trust-store key-store-name="clientStore" /> </ssl-context> </ssl-contexts> <ssl-context-rules> <rule use-ssl-context="client-SSL-context" /> </ssl-context-rules> </authentication-client> </configuration> Connect to the server and issue a command. Example Expected output Verify SSL/TLS by using a browser. Navigate to https://localhost:9993 . If you used a self-signed certificate, the browser presents a warning that the certificate presented by the server is unknown. Inspect the certificate and verify that the fingerprints shown in your browser match the fingerprints of the certificate in your keystore. You can view the certificate you generated with the following command: Syntax Example You can get the keystore name from the wizard's output, for example, "key-store is key-store-a18ba30e-6a26-4ed6-87c5-feb7f3e4dff1". Example output After you accept the server certificate, you are prompted for login credentials. You can log in using the credentials of existing JBoss EAP users. SSL/TLS is now enabled for JBoss EAP management interfaces. Additional resources key-manager attributes key-store attributes server-ssl-context attributes 1.1.2. Enabling one-way SSL/TLS for management interfaces by using the subsystem commands Use the elytron subsystem commands to secure the JBoss EAP management interfaces with SSL/TLS.
For testing and development purposes, you can use self-signed certificates. You can either use an existing keystore containing certificates or use the keystore that Elytron generates when you create the key-store resource. For production environments always use certificate authority (CA)-signed certificates. Important Do not use self-signed certificates in a production environment. Use only the certificates signed by a certificate authority (CA). Prerequisites JBoss EAP is running. Procedure Configure a keystore to store certificates. You can either provide a path to an existing keystore, for example, the one that contains CA-signed certificates, or provide a path to the keystore to create. Example If the keystore doesn't contain any certificates, or you used the step above to create the keystore, you must generate a certificate and store the certificate in a file. Generate a key pair in the keystore. Syntax Example Store the certificate in a file. Syntax Example Configure a key-manager referencing the key-store . Syntax Example Important Red Hat did not specify the algorithm attribute because the elytron subsystem uses KeyManagerFactory.getDefaultAlgorithm() to determine an algorithm by default. However, you can specify the algorithm attribute. To specify the algorithm attribute, you need to know what key manager algorithms are provided by the Java Development Kit (JDK) you are using. For example, a JDK that uses Java Secure Socket Extension (SunJSSE) provides the PKIX and SunX509 algorithms. In the command you could specify SunX509 as the key-manager algorithm attribute. Configure a server-ssl-context referencing the key-manager . Syntax Example Important You need to determine what SSL/TLS protocols you want to support. The example command uses TLSv1.2. For TLSv1.2 and earlier, use the cipher-suite-filter argument to specify which cipher suites are allowed. For TLSv1.3, use the cipher-suite-names argument to specify which cipher suites are allowed. TLSv1.3 is disabled by default. If you do not specify a protocol with the protocols attribute or the specified set contains TLSv1.3, configuring cipher-suite-names enables TLSv1.3. Use the use-cipher-suites-order argument to honor server cipher suite order. The use-cipher-suites-order attribute is set to true by default. This differs from the legacy security subsystem behavior, which defaults to honoring client cipher suite order. Update the management interfaces to use the configured server-ssl-context . Syntax Example Reload the server. If you used self-signed certificates for enabling SSL/TLS, the management CLI prompts you to accept the certificate that the server presents. This is the certificate you configured the keystore with. Example output Enter T or P to proceed with the connection. Verification Verify SSL/TLS by connecting through a client. You can test SSL/TLS by placing an Elytron client SSL context in a configuration file and then connecting to the server by using the management CLI referencing the configuration file. Navigate to the directory containing the keystore file. In this example, the keystore file exampleserver.keystore.pkcs12 was generated in the server's standalone/configuration directory. Example Export the server certificate so that it can be imported into a client trust store. Example Create a client trust-store with the server certificates. Syntax Example If you used a self-signed certificate, you are prompted to trust the certificate. 
Define the client-side SSL context in a file, for example example-security.xml . Syntax <?xml version="1.0" encoding="UTF-8"?> <configuration> <authentication-client xmlns="urn:elytron:client:1.2"> <key-stores> <key-store name="${key-store_name}" type="PKCS12" > <file name="${path_to_truststore}"/> <key-store-clear-password password="${keystore_password}" /> </key-store> </key-stores> <ssl-contexts> <ssl-context name="${ssl_context_name}"> <trust-store key-store-name="${trust_store_name}" /> </ssl-context> </ssl-contexts> <ssl-context-rules> <rule use-ssl-context="${ssl_context_name}" /> </ssl-context-rules> </authentication-client> </configuration> Example <?xml version="1.0" encoding="UTF-8"?> <configuration> <authentication-client xmlns="urn:elytron:client:1.2"> <key-stores> <key-store name="clientStore" type="PKCS12" > <file name=" JBOSS_HOME /standalone/configuration/client.truststore.pkcs12"/> <key-store-clear-password password="secret" /> </key-store> </key-stores> <ssl-contexts> <ssl-context name="client-SSL-context"> <trust-store key-store-name="clientStore" /> </ssl-context> </ssl-contexts> <ssl-context-rules> <rule use-ssl-context="client-SSL-context" /> </ssl-context-rules> </authentication-client> </configuration> Connect to the server and issue a command. Example Expected output Verify SSL/TLS by using a browser. Navigate to https://localhost:9993 . If you used a self-signed certificate, the browser presents a warning that the certificate presented by the server is unknown. Inspect the certificate and verify that the fingerprints shown in your browser match the fingerprints of the certificate in your keystore. You can view the certificate you generated with the following command: Syntax Example Example output After you accept the server certificate, you are prompted for login credentials. You can log in using the credentials of existing JBoss EAP users. SSL/TLS is now enabled for JBoss EAP management interfaces. Additional resources key-manager attributes key-store attributes server-ssl-context attributes 1.1.3. Disabling SSL/TLS for management interfaces by using the security command Use the security command to disable SSL/TLS for management interfaces. You might want to do this to use a different SSL/TLS configuration from the one that is currently configured. Disabling SSL/TLS using the command does not delete the Elytron resources. The command just undefines the secure-socket-binding and the ssl-context attributes of the http-interface management-interface resource. Prerequisites JBoss EAP is running. Procedure Use the disable-ssl-management command in the management CLI. The server reloads with the following output: You can enable SSL/TLS for server management interfaces using one of the following methods: Enable one-way SSL/TLS for management interfaces by using the wizard : Use this procedure to quickly set up SSL/TLS using a CLI-based wizard. Elytron creates the required resources for you based on your inputs to the wizard. Enable one-way SSL/TLS for management interfaces by using the subsystem commands : Use this procedure to configure the required resources for enabling SSL/TLS manually. Manually configuring the resources gives you more control over the server configuration. 1.2. Enabling one-way SSL/TLS for applications deployed on JBoss EAP Enable one-way SSL/TLS for applications deployed on JBoss EAP so that the communication between the applications and clients, such as web browsers, is secure.
To enable one-way SSL/TLS for applications deployed on JBoss EAP, you can use the following procedures: Enabling SSL/TLS for applications by using the automatically generated self-signed certificate : Use this procedure in development or testing environments only. This procedure helps you to quickly enable SSL/TLS for applications without having to do any configurations. Enable one-way SSL/TLS for applications deployed on JBoss EAP by using the wizard : Use this procedure to quickly set up SSL/TLS using a CLI-based wizard. Elytron creates the required resources for you based on your inputs to the wizard. Enabling one-way SSL/TLS for applications by using the subsystem commands : Use this method to configure the required resource for enabling SSL/TLS manually. Manually configuring the resources gives you more control over the server configuration. Additionally, you can disable SSL/TLS for applications deployed on JBoss EAP by using the procedure Disabling SSL/TLS for applications by using the security command . 1.2.1. The default SSL context in Elytron To help developers quickly set up one-way SSL/TLS for applications, the elytron subsystem contains the required resources to enable one-way SSL/TLS, ready to use in a development or testing environment by default. The following resources are provided by default: A key-store named applicationKS . A key-manager , named applicationKM , referencing the key-store . A server-ssl-context , named applicationSSC , referencing the key-manager . Default TLS configuration ... <tls> <key-stores> <key-store name="applicationKS"> <credential-reference clear-text="password"/> <implementation type="JKS"/> <file path="application.keystore" relative-to="jboss.server.config.dir"/> </key-store> </key-stores> <key-managers> <key-manager name="applicationKM" key-store="applicationKS" generate-self-signed-certificate-host="localhost"> <credential-reference clear-text="password"/> </key-manager> </key-managers> <server-ssl-contexts> <server-ssl-context name="applicationSSC" key-manager="applicationKM"/> </server-ssl-contexts> </tls> ... The default key-manager , applicationKM , contains a generate-self-signed-certificate-host attribute with the value localhost . The generate-self-signed-certificate-host attribute indicates that when this key-manager is used to obtain the server's certificate, if the file that backs its key-store doesn't already exist, then the key-manager should automatically generate a self-signed certificate with localhost as the Common Name . This generated self-signed certificate is stored in the file that backs the key-store . As the file that backs the default key-store doesn't exist when the server is installed, just sending an https request to the server generates a self-signed certificate and enables one-way SSL/TLS for application. For more information, see Enabling SSL/TLS for applications by using the automatically generated self-signed certificate . Additional resources key-manager attributes key-store attributes server-ssl-context attributes 1.2.2. Enabling SSL/TLS for applications by using the automatically generated self-signed certificate JBoss EAP automatically generates a self-signed certificate the first time the server receives an HTTPS request. The elytron subsystem also contains key-store , key-manager , and server-ssl-context resources that are ready to use in a development or testing environment by default. Therefore, as soon as JBoss EAP generates a self-signed certificate, the applications are secured using the certificate. 
Important Do not use self-signed certificates in a production environment. Use only the certificates signed by a certificate authority (CA). Prerequisites JBoss EAP is running. Procedure Navigate to the server URL at the port 8443 , for example, https://localhost:8443 . JBoss EAP generates a self-signed certificate when it receives this request. You can see the server logs for details about this certificate. The browser flags the connection as insecure because the generated certificate is self-signed. Verification Compare the certificate JBoss EAP presented to the browser with the certificate in the server log. Example server log Example certificate presented to the browser If the fingerprints match, like in the example, you can proceed to the page. SSL/TLS is enabled for applications. Additional resources key-manager attributes key-store attributes server-ssl-context attributes 1.2.3. Enabling one-way SSL/TLS for applications deployed on JBoss EAP by using the wizard Elytron provides a wizard to quickly set up SSL/TLS. You can either use an existing keystore containing certificates or use the keystore and self-signed certificates that the wizard generates to enable SSL/TLS. You can also obtain and use certificates from the Let's Encrypt certificate authority by using the --lets-encrypt option. For information about Let's Encrypt, see the Let's Encrypt documentation . Use the self-signed certificates the wizard generates to enable SSL/TLS for testing and development purposes only. For production environments always use certificate authority (CA)-signed certificates. Important Do not use self-signed certificates in a production environment. Use only the certificates signed by a certificate authority (CA). The wizard configures the following resources that are required to enable SSL/TLS for applications: key-store key-manager server-ssl-context The server-ssl-context is then applied to Undertow https-listener . Elytron names each resource as resource-type-UUID . For example, key-store-9e35a3be-62bb-4fff-afc2-2d8d141b82bc. The universally unique identifier (UUID) helps avoid name collisions for the resources. Prerequisites JBoss EAP is running. Procedure Launch the wizard to configure one-way SSL/TLS for applications by entering the following command in the management CLI: Syntax Enter the required information when prompted. Use the --lets-encrypt option to obtain and use certificates from the Let's Encrypt certificate authority. If a server-ssl-context already exists, the wizard exits with the following message: Note The elytron subsystem contains an already configured server-ssl-context resource by default. Therefore, you must use the --override-ssl-context option the first time you launch the wizard after a fresh installation. For more information, see The default SSL context in Elytron . If you override the existing server-ssl-context , Elytron will use the server-ssl-context created by the wizard to enable SSL. Note To enable one-way SSL/TLS, enter n or blank when prompted to enable SSL mutual authentication. Setting mutual authentication enables two-way SSL/TLS. Example of starting the wizard Example inputs to the wizard prompts After you enter y , the server reloads with the following output: Verification Navigate to https://localhost:8443 . If you used a self-signed certificate, the browser presents a warning that the certificate presented by the server is unknown. 
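If you prefer to inspect the server certificate from a terminal rather than the browser, the following is a minimal sketch; it assumes the openssl client is available and that the server listens on the default 8443 port: openssl s_client -connect localhost:8443 </dev/null 2>/dev/null | openssl x509 -noout -fingerprint -sha256 . Compare the printed SHA-256 fingerprint with the fingerprint of the keystore alias shown by the read-alias command in the verification steps that follow.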
Inspect the certificate and verify that the fingerprints shown in your browser match the fingerprints of the certificate in your keystore. You can view the certificate you generated with the following command: Syntax Example You can get the keystore name from the wizard's output, for example, "key-store is key-store-4cba6678-c464-4dcc-90ff-9295312ac395". Example output SSL/TLS is now enabled for applications deployed on JBoss EAP. Additional resources key-manager attributes key-store attributes server-ssl-context attributes 1.2.4. Enabling one-way SSL/TLS for applications by using the subsystem commands Use the elytron subsystem commands to secure the applications deployed on JBoss EAP with SSL/TLS. For testing and development purposes, you can use self-signed certificates. You can either use an existing keystore containing certificates or use the keystore that Elytron generates when you create the key-store resource. For production environments always use certificate authority (CA)-signed certificates. Important Do not use self-signed certificates in a production environment. Use only the certificates signed by a certificate authority (CA). Prerequisites JBoss EAP is running. Procedure Configure a keystore to store certificates. You can either provide a path to an existing keystore, for example, the one that contains CA-signed certificates, or provide a path to the keystore to create. Example If the keystore doesn't contain any certificates, or you used the step above to create the keystore, you must generate a certificate and store the certificate in a file. Generate a key pair in the keystore. Syntax Example Store the certificate in a file. Syntax Example Configure a key-manager referencing the key-store . Syntax Example Important Red Hat did not specify the algorithm attribute because the elytron subsystem uses KeyManagerFactory.getDefaultAlgorithm() to determine an algorithm by default. However, you can specify the algorithm attribute. To specify the algorithm attribute, you need to know what key manager algorithms are provided by the Java Development Kit (JDK) you are using. For example, a JDK that uses Java Secure Socket Extension (SunJSSE) provides the PKIX and SunX509 algorithms. In the command you could specify SunX509 as the key-manager algorithm attribute. Configure a server-ssl-context referencing the key-manager . Syntax Example Important You need to determine what SSL/TLS protocols you want to support. The example command uses TLSv1.2. For TLSv1.2 and earlier, use the cipher-suite-filter argument to specify which cipher suites are allowed. For TLSv1.3, use the cipher-suite-names argument to specify which cipher suites are allowed. TLSv1.3 is disabled by default. If you do not specify a protocol with the protocols attribute or the specified set contains TLSv1.3, configuring cipher-suite-names enables TLSv1.3. Use the use-cipher-suites-order argument to honor server cipher suite order. The use-cipher-suites-order attribute is set to true by default. This differs from the legacy security subsystem behavior, which defaults to honoring client cipher suite order. Update Undertow to use the configured server-ssl-context . Syntax Example Reload the server. Verification Navigate to https://localhost:8443 . If you used a self-signed certificate, the browser presents a warning that the certificate presented by the server is unknown. Inspect the certificate and verify that the fingerprints shown in your browser match the fingerprints of the certificate in your keystore. 
You can view the certificate you generated with the following command: Syntax Example Example output SSL/TLS is now enabled for applications deployed on JBoss EAP. Additional resources key-manager attributes key-store attributes server-ssl-context attributes 1.2.5. Disabling SSL/TLS for applications by using the security command Use the security command to disable SSL/TLS for applications deployed on JBoss EAP. Disabling SSL/TLS using the command does not delete the Elytron resources. The command just sets the ssl-context for the server to its default value applicationSSC . Prerequisites JBoss EAP is running. Procedure Use the security disable-ssl-http-server command in the management CLI. The server reloads with the following output: You can re-enable SSL/TLS for applications deployed on JBoss EAP by using one of the following procedures: Enabling SSL/TLS for applications by using the automatically generated self-signed certificate : Use this procedure in development or testing environments only. This procedure helps you to quickly enable SSL/TLS for applications without having to do any configurations. Enable one-way SSL/TLS for applications deployed on JBoss EAP by using the wizard : Use this procedure to quickly set up SSL/TLS using a CLI-based wizard. Elytron creates the required resources for you based on your inputs to the wizard. Enabling one-way SSL/TLS for applications by using the subsystem commands : Use this method to configure the required resources for enabling SSL/TLS manually. Manually configuring the resources gives you more control over the server configuration. Additional resources key-manager attributes key-store attributes server-ssl-context attributes
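As a quick check after disabling SSL/TLS, you can confirm that the https-listener has been reset. This is a minimal sketch using the management CLI; default-server and https are the default listener names used elsewhere in this chapter: /subsystem=undertow/server=default-server/https-listener=https:read-attribute(name=ssl-context) . The expected result value is applicationSSC.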
[ "security enable-ssl-management --interactive", "SSL is already enabled for http-interface", "security enable-ssl-management --interactive", "Please provide required pieces of information to enable SSL: Certificate info: Key-store file name (default management.keystore): exampleKeystore.pkcs12 Password (blank generated): secret What is your first and last name? [Unknown]: localhost What is the name of your organizational unit? [Unknown]: What is the name of your organization? [Unknown]: What is the name of your City or Locality? [Unknown]: What is the name of your State or Province? [Unknown]: What is the two-letter country code for this unit? [Unknown]: Is CN=localhost, OU=Unknown, O=Unknown, L=Unknown, ST=Unknown, C=Unknown correct y/n [y]?y Validity (in days, blank default): 365 Alias (blank generated): localhost Enable SSL Mutual Authentication y/n (blank n):n //For one way SSL/TLS enter blank or n here SSL options: keystore file: exampleKeystore.pkcs12 distinguished name: CN=localhost, OU=Unknown, O=Unknown, L=Unknown, ST=Unknown, C=Unknown password: secret validity: 365 alias: localhost Server keystore file exampleKeystore.pkcs12, certificate file exampleKeystore.pem and exampleKeystore.csr file will be generated in server configuration directory. Do you confirm y/n :y", "Unable to connect due to unrecognised server certificate Subject - CN=localhost,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown Issuer - CN=localhost, OU=Unknown, O=Unknown, L=Unknown, ST=Unknown, C=Unknown Valid From - Mon Jan 30 23:32:20 IST 2023 Valid To - Tue Jan 30 23:32:20 IST 2024 MD5 : b6:e7:f0:57:59:9e:bf:b8:20:99:10:fc:e2:0b:0f:d0 SHA1 : 9c:f0:92:de:c1:11:df:71:0b:d7:16:02:c8:7e:c9:83:ab:e3:0c:2e Accept certificate? [N]o, [T]emporarily, [P]ermanently :", "Server reloaded. 
SSL enabled for http-interface ssl-context is ssl-context-a18ba30e-6a26-4ed6-87c5-feb7f3e4dff1 key-manager is key-manager-a18ba30e-6a26-4ed6-87c5-feb7f3e4dff1 key-store is key-store-a18ba30e-6a26-4ed6-87c5-feb7f3e4dff1", "cd JBOSS_HOME /standalone/configuration", "keytool -importcert -keystore <trust_store_name> -storepass <password> -alias <alias> -trustcacerts -file <file_containing_server_certificate>", "keytool -importcert -keystore client.truststore.pkcs12 -storepass secret -alias localhost -trustcacerts -file exampleKeystore.pem", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <configuration> <authentication-client xmlns=\"urn:elytron:client:1.2\"> <key-stores> <key-store name=\"USD{key-store_name}\" type=\"PKCS12\" > <file name=\"USD{path_to_truststore}\"/> <key-store-clear-password password=\"USD{keystore_password}\" /> </key-store> </key-stores> <ssl-contexts> <ssl-context name=\"USD{ssl_context_name}\"> <trust-store key-store-name=\"USD{trust_store_name}\" /> </ssl-context> </ssl-contexts> <ssl-context-rules> <rule use-ssl-context=\"USD{ssl_context_name}\" /> </ssl-context-rules> </authentication-client> </configuration>", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <configuration> <authentication-client xmlns=\"urn:elytron:client:1.2\"> <key-stores> <key-store name=\"clientStore\" type=\"PKCS12\" > <file name=\" JBOSS_HOME /standalone/configuration/client.truststore.pkcs12\"/> <key-store-clear-password password=\"secret\" /> </key-store> </key-stores> <ssl-contexts> <ssl-context name=\"client-SSL-context\"> <trust-store key-store-name=\"clientStore\" /> </ssl-context> </ssl-contexts> <ssl-context-rules> <rule use-ssl-context=\"client-SSL-context\" /> </ssl-context-rules> </authentication-client> </configuration>", "EAP_HOME /bin/jboss-cli.sh -c --controller=remote+https://127.0.0.1:9993 -Dwildfly.config.url= <path_to_the_configuration_file> /example-security.xml :whoami", "{ \"outcome\" => \"success\", \"result\" => {\"identity\" => {\"username\" => \"USDlocal\"}} }", "/subsystem=elytron/key-store= <server_keystore_name> :read-alias(alias= <alias> )", "/subsystem=elytron/key-store=key-store-a18ba30e-6a26-4ed6-87c5-feb7f3e4dff1:read-alias(alias=\"localhost\")", "\"sha-1-digest\" => \"48:e3:6f:16:d1:af:4b:31:8f:9b:0b:7f:33:94:58:af:69:85:c 0:ea\", \"sha-256-digest\" => \"8f:3e:6b:b5:56:e0:d1:97:81:bc:f1:8d:c8:66:75:06:db:7d :4d:b6:b1:d3:34:dd:f5:6c:85:ca:c7:2b:5b:c7\",", "/subsystem=elytron/key-store= <keystore_name> :add(path= <path_to_keystore> , credential-reference= <credential_reference> , type= <keystore_type> )", "/subsystem=elytron/key-store=exampleKeyStore:add(path=exampleserver.keystore.pkcs12, relative-to=jboss.server.config.dir,credential-reference={clear-text=secret},type=PKCS12)", "/subsystem=elytron/key-store= <keystore_name> :generate-key-pair(alias= <keystore_alias> ,algorithm= <algorithm> ,key-size= <key_size> ,validity= <validity_in_days> ,credential-reference= <credential_reference> ,distinguished-name=\" <distinguished_name> \")", "/subsystem=elytron/key-store=exampleKeyStore:generate-key-pair(alias=localhost,algorithm=RSA,key-size=2048,validity=365,credential-reference={clear-text=secret},distinguished-name=\"CN=localhost\")", "/subsystem=elytron/key-store= <keystore_name> :store()", "/subsystem=elytron/key-store=exampleKeyStore:store()", "/subsystem=elytron/key-manager= <key-manager_name> :add(key-store= <key-store_name> ,credential-reference= <credential_reference> )", 
"/subsystem=elytron/key-manager=exampleKeyManager:add(key-store=exampleKeyStore,credential-reference={clear-text=secret})", "/subsystem=elytron/server-ssl-context= <server-ssl-context_name> :add(key-manager= <key-manager_name> , protocols= <list_of_protocols> )", "/subsystem=elytron/server-ssl-context=examplehttpsSSC:add(key-manager=exampleKeyManager, protocols=[\"TLSv1.2\"])", "/core-service=management/management-interface=http-interface:write-attribute(name=ssl-context, value= <server-ssl-context_name> ) /core-service=management/management-interface=http-interface:write-attribute(name=secure-socket-binding, value=management-https)", "/core-service=management/management-interface=http-interface:write-attribute(name=ssl-context, value=examplehttpsSSC) /core-service=management/management-interface=http-interface:write-attribute(name=secure-socket-binding, value=management-https)", "reload", "Unable to connect due to unrecognised server certificate Subject - CN=localhost Issuer - CN=localhost Valid From - Mon Jan 30 23:47:21 IST 2023 Valid To - Tue Jan 30 23:47:21 IST 2024 MD5 : a1:00:84:78:a6:46:a4:78:4d:44:c8:6d:ba:1f:30:6a SHA1 : a4:e5:c1:34:ad:e0:91:18:6f:f6:57:09:91:ae:17:8d:70:f0:1a:7d Accept certificate? [N]o, [T]emporarily, [P]ermanently :", "cd JBOSS_HOME /standalone/configuration", "keytool -export -alias <alias> -keystore <key_store> -storepass <keystore_password> -file <file_name>", "keytool -export -alias localhost -keystore exampleserver.keystore.pkcs12 -file -storepass secret server.cer", "keytool -importcert -keystore <trust_store_name> -storepass <password> -alias <alias> -trustcacerts -file <file_containing_server_certificate>", "keytool -importcert -keystore client.truststore.pkcs12 -storepass secret -alias localhost -trustcacerts -file server.cer", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <configuration> <authentication-client xmlns=\"urn:elytron:client:1.2\"> <key-stores> <key-store name=\"USD{key-store_name}\" type=\"PKCS12\" > <file name=\"USD{path_to_truststore}\"/> <key-store-clear-password password=\"USD{keystore_password}\" /> </key-store> </key-stores> <ssl-contexts> <ssl-context name=\"USD{ssl_context_name}\"> <trust-store key-store-name=\"USD{trust_store_name}\" /> </ssl-context> </ssl-contexts> <ssl-context-rules> <rule use-ssl-context=\"USD{ssl_context_name}\" /> </ssl-context-rules> </authentication-client> </configuration>", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <configuration> <authentication-client xmlns=\"urn:elytron:client:1.2\"> <key-stores> <key-store name=\"clientStore\" type=\"PKCS12\" > <file name=\" JBOSS_HOME /standalone/configuration/client.truststore.pkcs12\"/> <key-store-clear-password password=\"secret\" /> </key-store> </key-stores> <ssl-contexts> <ssl-context name=\"client-SSL-context\"> <trust-store key-store-name=\"clientStore\" /> </ssl-context> </ssl-contexts> <ssl-context-rules> <rule use-ssl-context=\"client-SSL-context\" /> </ssl-context-rules> </authentication-client> </configuration>", "EAP_HOME /bin/jboss-cli.sh -c --controller=remote+https://127.0.0.1:9993 -Dwildfly.config.url=example-security.xml :whoami", "{ \"outcome\" => \"success\", \"result\" => {\"identity\" => {\"username\" => \"USDlocal\"}} }", "/subsystem=elytron/key-store= <server_keystore_name> :read-alias(alias= <alias> )", "/subsystem=elytron/key-store=exampleKeyStore:read-alias(alias=\"localhost\")", "\"sha-1-digest\" => \"48:e3:6f:16:d1:af:4b:31:8f:9b:0b:7f:33:94:58:af:69:85:c 0:ea\", \"sha-256-digest\" => 
\"8f:3e:6b:b5:56:e0:d1:97:81:bc:f1:8d:c8:66:75:06:db:7d :4d:b6:b1:d3:34:dd:f5:6c:85:ca:c7:2b:5b:c7\",", "security disable-ssl-management", "Server reloaded. Reconnected to server. SSL disabled for http-interface", "<tls> <key-stores> <key-store name=\"applicationKS\"> <credential-reference clear-text=\"password\"/> <implementation type=\"JKS\"/> <file path=\"application.keystore\" relative-to=\"jboss.server.config.dir\"/> </key-store> </key-stores> <key-managers> <key-manager name=\"applicationKM\" key-store=\"applicationKS\" generate-self-signed-certificate-host=\"localhost\"> <credential-reference clear-text=\"password\"/> </key-manager> </key-managers> <server-ssl-contexts> <server-ssl-context name=\"applicationSSC\" key-manager=\"applicationKM\"/> </server-ssl-contexts> </tls>", "17:50:24,086 WARN [org.wildfly.extension.elytron] (default task-1) WFLYELY01085: Generated self-signed certificate at /home/user1/Downloads/wildflies/wildfly-27.0.1.Final/standalone/configuration/application.keystore. Please note that self-signed certificates are not secure and should only be used for testing purposes. Do not use this self-signed certificate in production. SHA-1 fingerprint of the generated key is 11:2f:e7:8c:18:b7:2c:c1:b0:5a:ad:ea:83:e0:32:59:ba:73:91:e2 SHA-256 fingerprint of the generated key is b2:a4:ed:b0:5c:c2:a1:4c:ca:39:03:e8:3a:11:e4:c5:c4:81:9d:46:97:7c:e6:6f:0c:45:f6:5d:64:3f:0d:64", "SHA-256 Fingerprint B2 A4 ED B0 5C C2 A1 4C CA 39 03 E8 3A 11 E4 C5 C4 81 9D 46 97 7C E6 6F 0C 45 F6 5D 64 3F 0D 64 SHA-1 Fingerprint 11 2F E7 8C 18 B7 2C C1 B0 5A AD EA 83 E0 32 59 BA 73 91 E2", "security enable-ssl-http-server --interactive", "An SSL server context already exists on the HTTPS listener, use --override-ssl-context option to overwrite the existing SSL context", "security enable-ssl-http-server --interactive --override-ssl-context", "Please provide required pieces of information to enable SSL: Certificate info: Key-store file name (default default-server.keystore): exampleKeystore.pkcs12 Password (blank generated): secret What is your first and last name? [Unknown]: localhost What is the name of your organizational unit? [Unknown]: What is the name of your organization? [Unknown]: What is the name of your City or Locality? [Unknown]: What is the name of your State or Province? [Unknown]: What is the two-letter country code for this unit? [Unknown]: Is CN=localhost, OU=Unknown, O=Unknown, L=Unknown, ST=Unknown, C=Unknown correct y/n [y]?y Validity (in days, blank default): 365 Alias (blank generated): localhost Enable SSL Mutual Authentication y/n (blank n):n //For one way SSL/TLS enter blank or n here SSL options: keystore file: exampleKeystore.pkcs12 distinguished name: CN=localhost, OU=Unknown, O=Unknown, L=Unknown, ST=Unknown, C=Unknown password: secret validity: 365 alias: localhost Server keystore file exampleKeystore.pkcs12, certificate file exampleKeystore.pem and exampleKeystore.csr file will be generated in server configuration directory. Do you confirm y/n :y", "Server reloaded. 
SSL enabled for default-server ssl-context is ssl-context-4cba6678-c464-4dcc-90ff-9295312ac395 key-manager is key-manager-4cba6678-c464-4dcc-90ff-9295312ac395 key-store is key-store-4cba6678-c464-4dcc-90ff-9295312ac395", "/subsystem=elytron/key-store= <server_keystore_name> :read-alias(alias= <alias> )", "/subsystem=elytron/key-store=key-store-4cba6678-c464-4dcc-90ff-9295312ac395:read-alias(alias=\"localhost\")", "\"sha-1-digest\" => \"48:e3:6f:16:d1:af:4b:31:8f:9b:0b:7f:33:94:58:af:69:85:c 0:ea\", \"sha-256-digest\" => \"8f:3e:6b:b5:56:e0:d1:97:81:bc:f1:8d:c8:66:75:06:db:7d :4d:b6:b1:d3:34:dd:f5:6c:85:ca:c7:2b:5b:c7\",", "/subsystem=elytron/key-store= <keystore_name> :add(path= <path_to_keystore> , credential-reference= <credential_reference> , type= <keystore_type> )", "/subsystem=elytron/key-store=exampleKeyStore:add(path=exampleserver.keystore.pkcs12, relative-to=jboss.server.config.dir,credential-reference={clear-text=secret},type=PKCS12)", "/subsystem=elytron/key-store= <keystore_name> :generate-key-pair(alias= <keystore_alias> ,algorithm= <algorithm> ,key-size= <key_size> ,validity= <validity_in_days> ,credential-reference= <credential_reference> ,distinguished-name=\" <distinguished_name> \")", "/subsystem=elytron/key-store=exampleKeyStore:generate-key-pair(alias=localhost,algorithm=RSA,key-size=2048,validity=365,credential-reference={clear-text=secret},distinguished-name=\"CN=localhost\")", "/subsystem=elytron/key-store= <keystore_name> :store()", "/subsystem=elytron/key-store=exampleKeyStore:store()", "/subsystem=elytron/key-manager= <key-manager_name> :add(key-store= <key-store_name> ,credential-reference= <credential_reference> )", "/subsystem=elytron/key-manager=exampleKeyManager:add(key-store=exampleKeyStore,credential-reference={clear-text=secret})", "/subsystem=elytron/server-ssl-context= <server-ssl-context_name> :add(key-manager= <key-manager_name> , protocols= <list_of_protocols> )", "/subsystem=elytron/server-ssl-context=examplehttpsSSC:add(key-manager=exampleKeyManager, protocols=[\"TLSv1.2\"])", "/subsystem=undertow/server=default-server/https-listener=https:write-attribute(name=ssl-context, value= <server-ssl-context_name> )", "/subsystem=undertow/server=default-server/https-listener=https:write-attribute(name=ssl-context, value=examplehttpsSSC)", "reload", "/subsystem=elytron/key-store= <server_keystore_name> :read-alias(alias= <alias> )", "/subsystem=elytron/key-store=exampleKeyStore:read-alias(alias=localhost)", "\"sha-1-digest\" => \"cc:f1:82:59:c7:0d:f6:91:bc:3e:69:0a:38:fb:48:be:ec:7f:d 4:bd\", \"sha-256-digest\" => \"c0:f3:f9:8b:3c:f1:72:17:64:54:35:a6:bb:82:7e:51:b0:78 :30:cb:68:ef:04:0e:f5:2b:9d:62:ca:a7:f6:35\",", "security disable-ssl-http-server", "Server reloaded. SSL disabled for default-server" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/configuring_ssltls_in_jboss_eap/enabling-one-way-ssl-tls-for-management-interfaces-and-applications_default
Chapter 7. Installing a cluster on IBM Power Virtual Server in a restricted network
Chapter 7. Installing a cluster on IBM Power Virtual Server in a restricted network In OpenShift Container Platform 4.15, you can install a cluster on IBM Cloud(R) in a restricted network by creating an internal mirror of the installation release content on an existing Virtual Private Cloud (VPC) on IBM Cloud(R). 7.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an IBM Cloud(R) account to host the cluster. You mirrored the images for a disconnected installation to your registry and obtained the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. You have an existing VPC in IBM Cloud(R). When installing a cluster in a restricted network, you cannot use the installer-provisioned VPC. You must use a user-provisioned VPC that satisfies one of the following requirements: Contains the mirror registry Has firewall rules or a peering connection to access the mirror registry hosted elsewhere If you use a firewall, you configured it to allow the sites that your cluster requires access to. You configured the ccoctl utility before you installed the cluster. For more information, see Configuring the Cloud Credential Operator utility . 7.2. About installations in restricted networks In OpenShift Container Platform 4.15, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. 7.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 7.3. About using a custom VPC In OpenShift Container Platform 4.15, you can deploy a cluster into the subnets of an existing IBM(R) Virtual Private Cloud (VPC). 7.3.1. Requirements for using your VPC You must correctly configure the existing VPC and its subnets before you install the cluster. The installation program does not create a VPC or VPC subnet in this scenario. The installation program cannot: Subdivide network ranges for the cluster to use Set route tables for the subnets Set VPC options like DHCP Note The installation program requires that you use the cloud-provided DNS server. 
Using a custom DNS server is not supported and causes the installation to fail. 7.3.2. VPC validation The VPC and all of the subnets must be in an existing resource group. The cluster is deployed to this resource group. As part of the installation, specify the following in the install-config.yaml file: The name of the resource group The name of VPC The name of the VPC subnet To ensure that the subnets that you provide are suitable, the installation program confirms that all of the subnets you specify exists. Note Subnet IDs are not supported. 7.3.3. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: ICMP Ingress is allowed to the entire network. TCP port 22 Ingress (SSH) is allowed to the entire network. Control plane TCP 6443 Ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 Ingress (MCS) is allowed to the entire network. 7.4. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.15, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. 7.5. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. 
SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 7.6. Exporting the API key You must set the API key you created as a global variable; the installation program ingests the variable during startup to set the API key. Prerequisites You have created either a user API key or service ID API key for your IBM Cloud(R) account. Procedure Export your API key for your account as a global variable: USD export IBMCLOUD_API_KEY=<api_key> Important You must set the variable name exactly as specified; the installation program expects the variable name to be present during startup. 7.7. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. You have the imageContentSources values that were generated during mirror registry creation. You have obtained the contents of the certificate for your mirror registry. You have retrieved a Red Hat Enterprise Linux CoreOS (RHCOS) image and uploaded it to an accessible location. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Note Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command: USD rm -rf ~/.powervs At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select powervs as the platform to target. Select the region to deploy the cluster to. Select the zone to deploy the cluster to. Select the base domain to deploy the cluster to. 
The base domain corresponds to the public DNS zone that you created for your cluster. Enter a descriptive name for your cluster. Edit the install-config.yaml file to give the additional information that is required for an installation in a restricted network. Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}' For <mirror_host_name> , specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry. Define the network and subnets for the VPC to install the cluster in under the parent platform.powervs field: vpcName: <existing_vpc> vpcSubnets: <vpcSubnet> For platform.powervs.vpcName , specify the name for the existing IBM Cloud(R). For platform.powervs.vpcSubnets , specify the existing subnets. Add the image content resources, which resemble the following YAML excerpt: imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release For these values, use the imageContentSources that you recorded during mirror registry creation. Optional: Set the publishing strategy to Internal : publish: Internal By setting this option, you create an internal Ingress Controller and a private load balancer. Make any other modifications to the install-config.yaml file that you require. For more information about the parameters, see "Installation configuration parameters". Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for IBM Power(R) Virtual Server 7.7.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 7.1. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 2 16 GB 100 GB 300 Control plane RHCOS 2 16 GB 100 GB 300 Compute RHCOS 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. 
The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 7.7.2. Sample customized install-config.yaml file for IBM Power Virtual Server You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: powervs: smtLevel: 8 5 replicas: 3 compute: 6 7 - hyperthreading: Enabled 8 name: worker platform: powervs: smtLevel: 8 9 ibmcloud: {} replicas: 3 metadata: name: example-restricted-cluster-name 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 11 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 12 networkType: OVNKubernetes 13 serviceNetwork: - 192.168.0.0/24 platform: powervs: userid: ibm-user-id powervsResourceGroup: "ibmcloud-resource-group" 14 region: "powervs-region" vpcRegion: "vpc-region" vpcName: name-of-existing-vpc 15 vpcSubnets: 16 - name-of-existing-vpc-subnet zone: "powervs-zone" serviceInstanceID: "service-instance-id" publish: Internal credentialsMode: Manual pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 17 sshKey: ssh-ed25519 AAAA... 18 additionalTrustBundle: | 19 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- imageContentSources: 20 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 10 Required. 2 6 If you do not provide these parameters and values, the installation program provides the default value. 3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 8 Enables or disables simultaneous multithreading, also known as Hyper-Threading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger machine types, such as n1-standard-8 , for your machines if you disable simultaneous multithreading. 5 9 The smtLevel specifies the level of SMT to set to the control plane and compute machines. The supported values are 1, 2, 4, 8, 'off' and 'on' . The default value is 8. 
The smtLevel 'off' sets SMT to off and smtlevel 'on' sets SMT to the default value 8 on the cluster nodes. Note When simultaneous multithreading (SMT), or hyperthreading is not enabled, one vCPU is equivalent to one physical core. When enabled, total vCPUs is computed as (Thread(s) per core * Core(s) per socket) * Socket(s). The smtLevel controls the threads per core. Lower SMT levels may require additional assigned cores when deploying the cluster nodes. You can do this by setting the 'processors' parameter in the install-config.yaml file to an appropriate value to meet the requirements for deploying OpenShift Container Platform successfully. 11 The machine CIDR must contain the subnets for the compute machines and control plane machines. 12 The CIDR must contain the subnets defined in platform.ibmcloud.controlPlaneSubnets and platform.ibmcloud.computeSubnets . 13 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 14 The name of an existing resource group. The existing VPC and subnets should be in this resource group. The cluster is deployed to this resource group. 15 Specify the name of an existing VPC. 16 Specify the name of the existing VPC subnet. The subnets must belong to the VPC that you specified. Specify a subnet for each availability zone in the region. 17 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:5000. For <credentials> , specify the base64-encoded user name and password for your mirror registry. 18 You can optionally provide the sshKey value that you use to access the machines in your cluster. 19 Provide the contents of the certificate file that you used for your mirror registry. 20 Provide the imageContentSources section from the output of the command to mirror the repository. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 7.7.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. 
For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 7.8. Manually creating IAM Installing the cluster requires that the Cloud Credential Operator (CCO) operate in manual mode. While the installation program configures the CCO for manual mode, you must specify the identity and access management secrets for you cloud provider. You can use the Cloud Credential Operator (CCO) utility ( ccoctl ) to create the required IBM Cloud(R) resources. Prerequisites You have configured the ccoctl binary. You have an existing install-config.yaml file. Procedure Edit the install-config.yaml configuration file so that it contains the credentialsMode parameter set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: ppc64le hyperthreading: Enabled 1 This line is added to set the credentialsMode parameter to Manual . 
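Before generating the manifests, you can quickly confirm that the setting took effect. This is a minimal sketch; the path assumes the install-config.yaml file is in your installation directory: grep credentialsMode <installation_directory>/install-config.yaml . The expected output is credentialsMode: Manual .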
To generate the manifests, run the following command from the directory that contains the installation program: USD ./openshift-install create manifests --dir <installation_directory> From the directory that contains the installation program, set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: "1.0" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer Create the service ID for each credential request, assign the policies defined, create an API key, and generate the secret: USD ccoctl ibmcloud create-service-id \ --credentials-requests-dir=<path_to_credential_requests_directory> \ 1 --name=<cluster_name> \ 2 --output-dir=<installation_directory> \ 3 --resource-group-name=<resource_group_name> 4 1 Specify the directory containing the files for the component CredentialsRequest objects. 2 Specify the name of the OpenShift Container Platform cluster. 3 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 4 Optional: Specify the name of the resource group used for scoping the access policies. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. If an incorrect resource group name is provided, the installation fails during the bootstrap phase. To find the correct resource group name, run the following command: USD grep resourceGroup <installation_directory>/manifests/cluster-infrastructure-02-config.yml Verification Ensure that the appropriate secrets were generated in your cluster's manifests directory. 7.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. 
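Before you start the deployment, a quick way to check that the secrets from the previous section were generated is to search the manifests directory for Secret manifests. This is a minimal sketch; the exact file names produced by ccoctl are not listed here, so the grep pattern is a generic assumption: grep -rl "kind: Secret" <installation_directory>/manifests/ . Each CredentialsRequest object processed by ccoctl should be represented by at least one matching file.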
Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 7.10. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.15. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. 
Click Download Now to the OpenShift v4.15 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.15 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 7.11. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources Accessing the web console 7.12. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 7.13. 
Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.15, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources About remote health monitoring 7.14. Next steps Customize your cluster Optional: Opt out of remote health reporting Optional: Registering your disconnected cluster
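The Important note in the cluster-deployment step above mentions manually approving pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates, but the commands are not shown there. A minimal sketch, assuming the kubeconfig exported in this section; <csr_name> is a placeholder, not a value from this document:
# List CSRs and look for entries in the Pending state
oc get csr
# Approve a single pending CSR, replacing <csr_name> with a name from the output above
oc adm certificate approve <csr_name>
# Optionally approve every CSR that has no status yet in one pass
oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve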
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "export IBMCLOUD_API_KEY=<api_key>", "./openshift-install create install-config --dir <installation_directory> 1", "rm -rf ~/.powervs", "pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'", "additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----", "vpcName: <existing_vpc> vpcSubnets: <vpcSubnet>", "imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release", "publish: Internal", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: powervs: smtLevel: 8 5 replicas: 3 compute: 6 7 - hyperthreading: Enabled 8 name: worker platform: powervs: smtLevel: 8 9 ibmcloud: {} replicas: 3 metadata: name: example-restricted-cluster-name 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 11 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 12 networkType: OVNKubernetes 13 serviceNetwork: - 192.168.0.0/24 platform: powervs: userid: ibm-user-id powervsResourceGroup: \"ibmcloud-resource-group\" 14 region: \"powervs-region\" vpcRegion: \"vpc-region\" vpcName: name-of-existing-vpc 15 vpcSubnets: 16 - name-of-existing-vpc-subnet zone: \"powervs-zone\" serviceInstanceID: \"service-instance-id\" publish: Internal credentialsMode: Manual pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 17 sshKey: ssh-ed25519 AAAA... 
18 additionalTrustBundle: | 19 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- imageContentSources: 20 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: ppc64le hyperthreading: Enabled", "./openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer", "ccoctl ibmcloud create-service-id --credentials-requests-dir=<path_to_credential_requests_directory> \\ 1 --name=<cluster_name> \\ 2 --output-dir=<installation_directory> \\ 3 --resource-group-name=<resource_group_name> 4", "grep resourceGroup <installation_directory>/manifests/cluster-infrastructure-02-config.yml", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_ibm_power_virtual_server/installing-restricted-networks-ibm-power-vs
2.10. numad
2.10. numad numad is an automatic NUMA affinity management daemon. It monitors NUMA topology and resource usage within a system in order to dynamically improve NUMA resource allocation and management (and therefore system performance). Depending on system workload, numad can provide up to 50 percent improvements in performance benchmarks. It also provides a pre-placement advice service that can be queried by various job management systems to provide assistance with the initial binding of CPU and memory resources for their processes. numad monitors available system resources on a per-node basis by periodically accessing information in the /proc file system. It tries to maintain a specified resource usage level, and rebalances resource allocation when necessary by moving processes between NUMA nodes. numad attempts to achieve optimal NUMA performance by localizing and isolating significant processes on a subset of the system's NUMA nodes. numad primarily benefits systems with long-running processes that consume significant amounts of resources, and are contained in a subset of the total system resources. It may also benefit applications that consume multiple NUMA nodes' worth of resources; however, the benefits provided by numad decrease as the consumed percentage of system resources increases. numad is unlikely to improve performance when processes run for only a few minutes, or do not consume many resources. Systems with continuous, unpredictable memory access patterns, such as large in-memory databases, are also unlikely to benefit from using numad. For further information about using numad, see Section 6.3.5, "Automatic NUMA Affinity Management with numad" or Section A.13, "numad" , or refer to the man page:
[ "man numad" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/performance_tuning_guide/sect-red_hat_enterprise_linux-performance_tuning_guide-performance_monitoring_tools-numad
Chapter 1. Storage APIs
Chapter 1. Storage APIs 1.1. CSIDriver [storage.k8s.io/v1] Description CSIDriver captures information about a Container Storage Interface (CSI) volume driver deployed on the cluster. Kubernetes attach detach controller uses this object to determine whether attach is required. Kubelet uses this object to determine whether pod information needs to be passed on mount. CSIDriver objects are non-namespaced. Type object 1.2. CSINode [storage.k8s.io/v1] Description CSINode holds information about all CSI drivers installed on a node. CSI drivers do not need to create the CSINode object directly. As long as they use the node-driver-registrar sidecar container, the kubelet will automatically populate the CSINode object for the CSI driver as part of kubelet plugin registration. CSINode has the same name as a node. If the object is missing, it means either there are no CSI Drivers available on the node, or the Kubelet version is low enough that it doesn't create this object. CSINode has an OwnerReference that points to the corresponding node object. Type object 1.3. CSIStorageCapacity [storage.k8s.io/v1] Description CSIStorageCapacity stores the result of one CSI GetCapacity call. For a given StorageClass, this describes the available capacity in a particular topology segment. This can be used when considering where to instantiate new PersistentVolumes. For example this can express things like: - StorageClass "standard" has "1234 GiB" available in "topology.kubernetes.io/zone=us-east1" - StorageClass "localssd" has "10 GiB" available in "kubernetes.io/hostname=knode-abc123" The following three cases all imply that no capacity is available for a certain combination: - no object exists with suitable topology and storage class name - such an object exists, but the capacity is unset - such an object exists, but the capacity is zero The producer of these objects can decide which approach is more suitable. They are consumed by the kube-scheduler when a CSI driver opts into capacity-aware scheduling with CSIDriverSpec.StorageCapacity. The scheduler compares the MaximumVolumeSize against the requested size of pending volumes to filter out unsuitable nodes. If MaximumVolumeSize is unset, it falls back to a comparison against the less precise Capacity. If that is also unset, the scheduler assumes that capacity is insufficient and tries some other node. Type object 1.4. PersistentVolume [v1] Description PersistentVolume (PV) is a storage resource provisioned by an administrator. It is analogous to a node. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes Type object 1.5. PersistentVolumeClaim [v1] Description PersistentVolumeClaim is a user's request for and claim to a persistent volume Type object 1.6. StorageClass [storage.k8s.io/v1] Description StorageClass describes the parameters for a class of storage for which PersistentVolumes can be dynamically provisioned. StorageClasses are non-namespaced; the name of the storage class according to etcd is in ObjectMeta.Name. Type object 1.7. StorageState [migration.k8s.io/v1alpha1] Description The state of the storage of a specific resource. Type object 1.8. StorageVersionMigration [migration.k8s.io/v1alpha1] Description StorageVersionMigration represents a migration of stored data to the latest storage version. Type object 1.9. VolumeAttachment [storage.k8s.io/v1] Description VolumeAttachment captures the intent to attach or detach the specified volume to/from the specified node. VolumeAttachment objects are non-namespaced. Type object 1.10. 
VolumeSnapshot [snapshot.storage.k8s.io/v1] Description VolumeSnapshot is a user's request for either creating a point-in-time snapshot of a persistent volume, or binding to a pre-existing snapshot. Type object 1.11. VolumeSnapshotClass [snapshot.storage.k8s.io/v1] Description VolumeSnapshotClass specifies parameters that an underlying storage system uses when creating a volume snapshot. A specific VolumeSnapshotClass is used by specifying its name in a VolumeSnapshot object. VolumeSnapshotClasses are non-namespaced. Type object 1.12. VolumeSnapshotContent [snapshot.storage.k8s.io/v1] Description VolumeSnapshotContent represents the actual "on-disk" snapshot object in the underlying storage system. Type object
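As a brief, hypothetical illustration of how several of the objects in this chapter fit together (the class names, provisioner, and claim names below are made-up placeholders, not values taken from this reference):
# A made-up StorageClass, a PVC that requests it, and a VolumeSnapshot of that claim
oc apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-fast                    # hypothetical class name
provisioner: csi.example.vendor.com     # hypothetical CSI driver
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: example-fast
  resources:
    requests:
      storage: 10Gi
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: example-claim-snap
spec:
  volumeSnapshotClassName: example-snapclass   # hypothetical VolumeSnapshotClass
  source:
    persistentVolumeClaimName: example-claim
EOF
The PersistentVolumeClaim requests storage dynamically provisioned through the StorageClass, and the VolumeSnapshot then captures a point-in-time snapshot of that claim through the named VolumeSnapshotClass.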
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/storage_apis/storage-apis
Installing on GCP
Installing on GCP OpenShift Container Platform 4.12 Installing OpenShift Container Platform on Google Cloud Platform Red Hat OpenShift Documentation Team
[ "openshift-install create install-config --dir <installation_directory>", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled", "openshift-install create manifests --dir <installation_directory>", "openshift-install version", "release image quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64", "oc adm release extract quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 --credentials-requests --cloud=gcp", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component-credentials-request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/storage.admin - roles/iam.serviceAccountUser skipServiceCheck: true", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component-credentials-request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 secretRef: name: <component-secret> namespace: <component-namespace>", "apiVersion: v1 kind: Secret metadata: name: <component-secret> namespace: <component-namespace> data: service_account.json: <base64_encoded_gcp_service_account_file>", "grep \"release.openshift.io/feature-set\" *", "0000_30_capi-operator_00_credentials-request.yaml: release.openshift.io/feature-set: TechPreviewNoUpgrade", "openshift-install create cluster --dir <installation_directory>", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 5 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 6 - control-plane-tag1 - control-plane-tag2 osImage: 7 project: example-project-name name: example-image-name replicas: 3 compute: 8 9 - hyperthreading: Enabled 10 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 11 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 12 - compute-tag1 - compute-tag2 osImage: 13 project: example-project-name name: example-image-name replicas: 3 metadata: name: test-cluster 14 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 15 serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 16 region: us-central1 17 defaultMachinePlatform: tags: 18 - global-tag1 - global-tag2 osImage: 19 project: example-project-name name: example-image-name pullSecret: '{\"auths\": ...}' 20 fips: false 21 sshKey: ssh-ed25519 AAAA... 22", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "apiVersion: v1 baseDomain: example.com controlPlane: compute: platform: gcp: osImage: project: redhat-marketplace-public name: redhat-coreos-ocp-48-x86-64-202210040145", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 5 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 6 - control-plane-tag1 - control-plane-tag2 osImage: 7 project: example-project-name name: example-image-name replicas: 3 compute: 8 9 - hyperthreading: Enabled 10 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 11 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 12 - compute-tag1 - compute-tag2 osImage: 13 project: example-project-name name: example-image-name replicas: 3 metadata: name: test-cluster 14 networking: 15 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 16 serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 17 region: us-central1 18 defaultMachinePlatform: tags: 19 - global-tag1 - global-tag2 osImage: 20 project: example-project-name name: example-image-name pullSecret: '{\"auths\": ...}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 
23", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "./openshift-install create manifests --dir <installation_directory> 1", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {}", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "./openshift-install create install-config --dir <installation_directory> 1", "pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'", "additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----", "network: <existing_vpc> controlPlaneSubnet: <control_plane_subnet> computeSubnet: <compute_subnet>", "imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master 
platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 5 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 6 - control-plane-tag1 - control-plane-tag2 osImage: 7 project: example-project-name name: example-image-name replicas: 3 compute: 8 9 - hyperthreading: Enabled 10 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 11 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 12 - compute-tag1 - compute-tag2 osImage: 13 project: example-project-name name: example-image-name replicas: 3 metadata: name: test-cluster 14 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 15 serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 16 region: us-central1 17 defaultMachinePlatform: tags: 18 - global-tag1 - global-tag2 osImage: 19 project: example-project-name name: example-image-name network: existing_vpc 20 controlPlaneSubnet: control_plane_subnet 21 computeSubnet: compute_subnet 22 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 23 fips: false 24 sshKey: ssh-ed25519 AAAA... 25 additionalTrustBundle: | 26 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- imageContentSources: 27 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "./openshift-install create manifests --dir <installation_directory> 1", "touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1", "ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml", "cluster-ingress-default-ingresscontroller.yaml", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: providerParameters: gcp: clientAccess: Global 1 type: GCP scope: Internal 2 type: LoadBalancerService", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 5 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 6 - control-plane-tag1 - control-plane-tag2 osImage: 7 project: example-project-name name: example-image-name replicas: 3 compute: 8 9 - hyperthreading: Enabled 10 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 11 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 12 - compute-tag1 - compute-tag2 osImage: 13 project: example-project-name name: example-image-name replicas: 3 metadata: name: test-cluster 14 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 15 serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 16 region: us-central1 17 defaultMachinePlatform: tags: 18 - global-tag1 - global-tag2 osImage: 19 project: example-project-name name: example-image-name network: existing_vpc 20 controlPlaneSubnet: control_plane_subnet 21 computeSubnet: compute_subnet 22 pullSecret: '{\"auths\": ...}' 23 fips: false 24 sshKey: ssh-ed25519 AAAA... 
25", "./openshift-install create manifests --dir <installation_directory> 1", "touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1", "ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml", "cluster-ingress-default-ingresscontroller.yaml", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: providerParameters: gcp: clientAccess: Global 1 type: GCP scope: Internal 2 type: LoadBalancerService", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com credentialsMode: Passthrough 1 metadata: name: cluster_name platform: gcp: computeSubnet: shared-vpc-subnet-1 2 controlPlaneSubnet: shared-vpc-subnet-2 3 createFirewallRules: Disabled 4 network: shared-vpc 5 networkProjectID: host-project-name 6 publicDNSZone: id: public-dns-zone 7 project: host-project-name 8 projectID: service-project-name 9 region: us-east1 defaultMachinePlatform: tags: 10 - global-tag1 controlPlane: name: master platform: gcp: tags: 11 - control-plane-tag1 type: n2-standard-4 zones: - us-central1-a - us-central1-c replicas: 3 compute: - name: worker platform: gcp: tags: 12 - compute-tag1 type: n2-standard-4 zones: - us-central1-a - us-central1-c replicas: 3 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA... 
13", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc -n openshift-ingress get service router-default", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.18.154 35.233.157.184 80:32288/TCP,443:31215/TCP 98", "oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'", "oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes", "oauth-openshift.apps.your.cluster.domain.example.com console-openshift-console.apps.your.cluster.domain.example.com downloads-openshift-console.apps.your.cluster.domain.example.com alertmanager-main-openshift-monitoring.apps.your.cluster.domain.example.com prometheus-k8s-openshift-monitoring.apps.your.cluster.domain.example.com", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "mkdir <installation_directory>", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 5 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 6 - control-plane-tag1 - control-plane-tag2 osImage: 7 project: example-project-name name: example-image-name replicas: 3 compute: 8 9 - 
hyperthreading: Enabled 10 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 11 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 12 - compute-tag1 - compute-tag2 osImage: 13 project: example-project-name name: example-image-name replicas: 3 metadata: name: test-cluster 14 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 15 serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 16 region: us-central1 17 defaultMachinePlatform: tags: 18 - global-tag1 - global-tag2 osImage: 19 project: example-project-name name: example-image-name network: existing_vpc 20 controlPlaneSubnet: control_plane_subnet 21 computeSubnet: compute_subnet 22 pullSecret: '{\"auths\": ...}' 23 fips: false 24 sshKey: ssh-ed25519 AAAA... 25 publish: Internal 26", "./openshift-install create manifests --dir <installation_directory> 1", "touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1", "ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml", "cluster-ingress-default-ingresscontroller.yaml", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: providerParameters: gcp: clientAccess: Global 1 type: GCP scope: Internal 2 type: LoadBalancerService", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "mkdir USDHOME/clusterconfig", "openshift-install create manifests --dir USDHOME/clusterconfig", "? 
SSH Public Key INFO Credentials loaded from the \"myprofile\" profile in file \"/home/myuser/.aws/credentials\" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift", "ls USDHOME/clusterconfig/openshift/", "99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml", "variant: openshift version: 4.12.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_id> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true", "butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml", "openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign", "./openshift-install create install-config --dir <installation_directory> 1", "compute: - hyperthreading: Enabled name: worker platform: {} replicas: 0 1", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "./openshift-install create manifests --dir <installation_directory> 1", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {}", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "jq -r .infraID <installation_directory>/metadata.json 1", "openshift-vw9j6 1", "export BASE_DOMAIN='<base_domain>' export BASE_DOMAIN_ZONE_NAME='<base_domain_zone_name>' export NETWORK_CIDR='10.0.0.0/16' export MASTER_SUBNET_CIDR='10.0.0.0/17' export WORKER_SUBNET_CIDR='10.0.128.0/17' export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 export CLUSTER_NAME=`jq -r .clusterName <installation_directory>/metadata.json` export INFRA_ID=`jq -r .infraID <installation_directory>/metadata.json` export PROJECT_NAME=`jq -r .gcp.projectID <installation_directory>/metadata.json` export REGION=`jq -r .gcp.region <installation_directory>/metadata.json`", "cat <<EOF >01_vpc.yaml imports: - path: 01_vpc.py resources: - name: cluster-vpc type: 01_vpc.py properties: infra_id: 'USD{INFRA_ID}' 1 region: 'USD{REGION}' 2 master_subnet_cidr: 'USD{MASTER_SUBNET_CIDR}' 3 worker_subnet_cidr: 'USD{WORKER_SUBNET_CIDR}' 4 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-vpc --config 01_vpc.yaml", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-network', 'type': 'compute.v1.network', 'properties': { 'region': context.properties['region'], 'autoCreateSubnetworks': False } }, { 'name': context.properties['infra_id'] + '-master-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['master_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-worker-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['worker_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-router', 'type': 'compute.v1.router', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'nats': [{ 'name': context.properties['infra_id'] + '-nat-master', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 7168, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' + context.properties['infra_id'] + '-master-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }, { 'name': context.properties['infra_id'] + '-nat-worker', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 512, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' 
+ context.properties['infra_id'] + '-worker-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }] } }] return {'resources': resources}", "export CLUSTER_NETWORK=(`gcloud compute networks describe USD{INFRA_ID}-network --format json | jq -r .selfLink`)", "export CONTROL_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-master-subnet --region=USD{REGION} --format json | jq -r .selfLink`)", "export ZONE_0=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[0] | cut -d \"/\" -f9`)", "export ZONE_1=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[1] | cut -d \"/\" -f9`)", "export ZONE_2=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[2] | cut -d \"/\" -f9`)", "cat <<EOF >02_infra.yaml imports: - path: 02_lb_ext.py - path: 02_lb_int.py 1 resources: - name: cluster-lb-ext 2 type: 02_lb_ext.py properties: infra_id: 'USD{INFRA_ID}' 3 region: 'USD{REGION}' 4 - name: cluster-lb-int type: 02_lb_int.py properties: cluster_network: 'USD{CLUSTER_NETWORK}' control_subnet: 'USD{CONTROL_SUBNET}' 5 infra_id: 'USD{INFRA_ID}' region: 'USD{REGION}' zones: 6 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-infra --config 02_infra.yaml", "export CLUSTER_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-ip --region=USD{REGION} --format json | jq -r .address`)", "export CLUSTER_PUBLIC_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-public-ip --region=USD{REGION} --format json | jq -r .address`)", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-cluster-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-http-health-check', 'type': 'compute.v1.httpHealthCheck', 'properties': { 'port': 6080, 'requestPath': '/readyz' } }, { 'name': context.properties['infra_id'] + '-api-target-pool', 'type': 'compute.v1.targetPool', 'properties': { 'region': context.properties['region'], 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-http-health-check.selfLink)'], 'instances': [] } }, { 'name': context.properties['infra_id'] + '-api-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'region': context.properties['region'], 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-public-ip.selfLink)', 'target': 'USD(ref.' + context.properties['infra_id'] + '-api-target-pool.selfLink)', 'portRange': '6443' } }] return {'resources': resources}", "def GenerateConfig(context): backends = [] for zone in context.properties['zones']: backends.append({ 'group': 'USD(ref.' 
+ context.properties['infra_id'] + '-master-' + zone + '-ig' + '.selfLink)' }) resources = [{ 'name': context.properties['infra_id'] + '-cluster-ip', 'type': 'compute.v1.address', 'properties': { 'addressType': 'INTERNAL', 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-internal-health-check', 'type': 'compute.v1.healthCheck', 'properties': { 'httpsHealthCheck': { 'port': 6443, 'requestPath': '/readyz' }, 'type': \"HTTPS\" } }, { 'name': context.properties['infra_id'] + '-api-internal-backend-service', 'type': 'compute.v1.regionBackendService', 'properties': { 'backends': backends, 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-internal-health-check.selfLink)'], 'loadBalancingScheme': 'INTERNAL', 'region': context.properties['region'], 'protocol': 'TCP', 'timeoutSec': 120 } }, { 'name': context.properties['infra_id'] + '-api-internal-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'backendService': 'USD(ref.' + context.properties['infra_id'] + '-api-internal-backend-service.selfLink)', 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-ip.selfLink)', 'loadBalancingScheme': 'INTERNAL', 'ports': ['6443','22623'], 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }] for zone in context.properties['zones']: resources.append({ 'name': context.properties['infra_id'] + '-master-' + zone + '-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': zone } }) return {'resources': resources}", "cat <<EOF >02_dns.yaml imports: - path: 02_dns.py resources: - name: cluster-dns type: 02_dns.py properties: infra_id: 'USD{INFRA_ID}' 1 cluster_domain: 'USD{CLUSTER_NAME}.USD{BASE_DOMAIN}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-dns --config 02_dns.yaml", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api-int.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction add USD{CLUSTER_PUBLIC_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. 
--ttl 60 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME}", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-private-zone', 'type': 'dns.v1.managedZone', 'properties': { 'description': '', 'dnsName': context.properties['cluster_domain'] + '.', 'visibility': 'private', 'privateVisibilityConfig': { 'networks': [{ 'networkUrl': context.properties['cluster_network'] }] } } }] return {'resources': resources}", "cat <<EOF >03_firewall.yaml imports: - path: 03_firewall.py resources: - name: cluster-firewall type: 03_firewall.py properties: allowed_external_cidr: '0.0.0.0/0' 1 infra_id: 'USD{INFRA_ID}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 network_cidr: 'USD{NETWORK_CIDR}' 4 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-firewall --config 03_firewall.yaml", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-in-ssh', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-bootstrap'] } }, { 'name': context.properties['infra_id'] + '-api', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6443'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-health-checks', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6080', '6443', '22624'] }], 'sourceRanges': ['35.191.0.0/16', '130.211.0.0/22', '209.85.152.0/22', '209.85.204.0/22'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-etcd', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['2379-2380'] }], 'sourceTags': [context.properties['infra_id'] + '-master'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-control-plane', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['10257'] },{ 'IPProtocol': 'tcp', 'ports': ['10259'] },{ 'IPProtocol': 'tcp', 'ports': ['22623'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-internal-network', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'icmp' },{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['network_cidr']], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }, { 'name': context.properties['infra_id'] + '-internal-cluster', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'udp', 'ports': ['4789', '6081'] },{ 'IPProtocol': 'udp', 'ports': ['500', '4500'] },{ 'IPProtocol': 'esp', },{ 'IPProtocol': 'tcp', 'ports': 
['9000-9999'] },{ 'IPProtocol': 'udp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'tcp', 'ports': ['10250'] },{ 'IPProtocol': 'tcp', 'ports': ['30000-32767'] },{ 'IPProtocol': 'udp', 'ports': ['30000-32767'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }] return {'resources': resources}", "cat <<EOF >03_iam.yaml imports: - path: 03_iam.py resources: - name: cluster-iam type: 03_iam.py properties: infra_id: 'USD{INFRA_ID}' 1 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-iam --config 03_iam.yaml", "export MASTER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-m@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)", "export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)", "export COMPUTE_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-worker-subnet --region=USD{REGION} --format json | jq -r .selfLink`)", "gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.instanceAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.securityAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/iam.serviceAccountUser\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/storage.admin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{WORKER_SERVICE_ACCOUNT}\" --role \"roles/compute.viewer\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{WORKER_SERVICE_ACCOUNT}\" --role \"roles/storage.admin\"", "gcloud iam service-accounts keys create service-account-key.json --iam-account=USD{MASTER_SERVICE_ACCOUNT}", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-m', 'displayName': context.properties['infra_id'] + '-master-node' } }, { 'name': context.properties['infra_id'] + '-worker-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-w', 'displayName': context.properties['infra_id'] + '-worker-node' } }] return {'resources': resources}", "gsutil mb gs://<bucket_name>", "gsutil cp <downloaded_image_file_path>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz gs://<bucket_name>", "export IMAGE_SOURCE=gs://<bucket_name>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz", "gcloud compute images create \"USD{INFRA_ID}-rhcos-image\" --source-uri=\"USD{IMAGE_SOURCE}\"", "export CLUSTER_IMAGE=(`gcloud compute images describe USD{INFRA_ID}-rhcos-image --format json | jq -r .selfLink`)", "gsutil mb gs://USD{INFRA_ID}-bootstrap-ignition", "gsutil cp <installation_directory>/bootstrap.ign gs://USD{INFRA_ID}-bootstrap-ignition/", "export BOOTSTRAP_IGN=`gsutil signurl -d 1h service-account-key.json gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign | 
grep \"^gs:\" | awk '{print USD5}'`", "cat <<EOF >04_bootstrap.yaml imports: - path: 04_bootstrap.py resources: - name: cluster-bootstrap type: 04_bootstrap.py properties: infra_id: 'USD{INFRA_ID}' 1 region: 'USD{REGION}' 2 zone: 'USD{ZONE_0}' 3 cluster_network: 'USD{CLUSTER_NETWORK}' 4 control_subnet: 'USD{CONTROL_SUBNET}' 5 image: 'USD{CLUSTER_IMAGE}' 6 machine_type: 'n1-standard-4' 7 root_volume_size: '128' 8 bootstrap_ign: 'USD{BOOTSTRAP_IGN}' 9 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-bootstrap --config 04_bootstrap.yaml", "gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-bootstrap-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-bootstrap", "gcloud compute backend-services add-backend USD{INFRA_ID}-api-internal-backend-service --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0}", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { 'name': context.properties['infra_id'] + '-bootstrap', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': '{\"ignition\":{\"config\":{\"replace\":{\"source\":\"' + context.properties['bootstrap_ign'] + '\"}},\"version\":\"3.2.0\"}}', }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'], 'accessConfigs': [{ 'natIP': 'USD(ref.' + context.properties['infra_id'] + '-bootstrap-public-ip.address)' }] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-bootstrap' ] }, 'zone': context.properties['zone'] } }, { 'name': context.properties['infra_id'] + '-bootstrap-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': context.properties['zone'] } }] return {'resources': resources}", "export MASTER_IGNITION=`cat <installation_directory>/master.ign`", "cat <<EOF >05_control_plane.yaml imports: - path: 05_control_plane.py resources: - name: cluster-control-plane type: 05_control_plane.py properties: infra_id: 'USD{INFRA_ID}' 1 zones: 2 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' control_subnet: 'USD{CONTROL_SUBNET}' 3 image: 'USD{CLUSTER_IMAGE}' 4 machine_type: 'n1-standard-4' 5 root_volume_size: '128' service_account_email: 'USD{MASTER_SERVICE_ACCOUNT}' 6 ignition: 'USD{MASTER_IGNITION}' 7 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-control-plane --config 05_control_plane.yaml", "gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_0}-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-master-0", "gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_1}-ig --zone=USD{ZONE_1} --instances=USD{INFRA_ID}-master-1", "gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_2}-ig --zone=USD{ZONE_2} --instances=USD{INFRA_ID}-master-2", "gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_0}\" --instances=USD{INFRA_ID}-master-0", 
"gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_1}\" --instances=USD{INFRA_ID}-master-1", "gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_2}\" --instances=USD{INFRA_ID}-master-2", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-0', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][0] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][0] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][0] } }, { 'name': context.properties['infra_id'] + '-master-1', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][1] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][1] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][1] } }, { 'name': context.properties['infra_id'] + '-master-2', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][2] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][2] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][2] } }] return {'resources': resources}", "./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level info 2", "gcloud compute backend-services remove-backend USD{INFRA_ID}-api-internal-backend-service --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0}", "gsutil rm gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign", "gsutil rb gs://USD{INFRA_ID}-bootstrap-ignition", "gcloud deployment-manager deployments delete USD{INFRA_ID}-bootstrap", "export COMPUTE_SUBNET=(`gcloud 
compute networks subnets describe USD{INFRA_ID}-worker-subnet --region=USD{REGION} --format json | jq -r .selfLink`)", "export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)", "export WORKER_IGNITION=`cat <installation_directory>/worker.ign`", "cat <<EOF >06_worker.yaml imports: - path: 06_worker.py resources: - name: 'worker-0' 1 type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 2 zone: 'USD{ZONE_0}' 3 compute_subnet: 'USD{COMPUTE_SUBNET}' 4 image: 'USD{CLUSTER_IMAGE}' 5 machine_type: 'n1-standard-4' 6 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 7 ignition: 'USD{WORKER_IGNITION}' 8 - name: 'worker-1' type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 9 zone: 'USD{ZONE_1}' 10 compute_subnet: 'USD{COMPUTE_SUBNET}' 11 image: 'USD{CLUSTER_IMAGE}' 12 machine_type: 'n1-standard-4' 13 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 14 ignition: 'USD{WORKER_IGNITION}' 15 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-worker --config 06_worker.yaml", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-' + context.env['name'], 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['compute_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-worker', ] }, 'zone': context.properties['zone'] } }] return {'resources': resources}", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.25.0 master-1 Ready master 63m v1.25.0 master-2 Ready master 64m v1.25.0", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.25.0 master-1 Ready master 73m v1.25.0 master-2 Ready master 74m v1.25.0 worker-0 Ready worker 11m v1.25.0 worker-1 Ready worker 11m v1.25.0", "oc -n openshift-ingress get service router-default", "NAME TYPE CLUSTER-IP EXTERNAL-IP 
PORT(S) AGE router-default LoadBalancer 172.30.18.154 35.233.157.184 80:32288/TCP,443:31215/TCP 98", "export ROUTER_IP=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'`", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction add USD{ROUTER_IP} --name \\*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 300 --type A --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction add USD{ROUTER_IP} --name \\*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 300 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME}", "oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes", "oauth-openshift.apps.your.cluster.domain.example.com console-openshift-console.apps.your.cluster.domain.example.com downloads-openshift-console.apps.your.cluster.domain.example.com alertmanager-main-openshift-monitoring.apps.your.cluster.domain.example.com prometheus-k8s-openshift-monitoring.apps.your.cluster.domain.example.com", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get clusterversion", "NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version False True 24m Working towards 4.5.4: 99% complete", "oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.5.4 True False False 7m56s cloud-credential 4.5.4 True False False 31m cluster-autoscaler 4.5.4 True False False 16m console 4.5.4 True False False 10m csi-snapshot-controller 4.5.4 True False False 16m dns 4.5.4 True False False 22m etcd 4.5.4 False False False 25s image-registry 4.5.4 True False False 16m ingress 4.5.4 True False False 16m insights 4.5.4 True False False 17m kube-apiserver 4.5.4 True False False 19m kube-controller-manager 4.5.4 True False False 20m kube-scheduler 4.5.4 True False False 20m kube-storage-version-migrator 4.5.4 True False False 16m machine-api 4.5.4 True False False 22m machine-config 4.5.4 True False False 22m marketplace 4.5.4 True False False 16m monitoring 4.5.4 True False False 10m network 4.5.4 True False False 23m node-tuning 4.5.4 True False False 23m openshift-apiserver 4.5.4 True False False 17m openshift-controller-manager 4.5.4 True False False 15m openshift-samples 4.5.4 True False False 16m operator-lifecycle-manager 4.5.4 True False False 22m operator-lifecycle-manager-catalog 4.5.4 True False False 22m operator-lifecycle-manager-packageserver 4.5.4 True False False 18m service-ca 4.5.4 True False False 23m service-catalog-apiserver 4.5.4 True False False 23m service-catalog-controller-manager 4.5.4 True False False 23m storage 4.5.4 True False False 17m", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE kube-system etcd-member-ip-10-0-3-111.us-east-2.compute.internal 1/1 Running 0 35m kube-system etcd-member-ip-10-0-3-239.us-east-2.compute.internal 1/1 Running 0 37m kube-system etcd-member-ip-10-0-3-24.us-east-2.compute.internal 1/1 Running 0 35m openshift-apiserver-operator openshift-apiserver-operator-6d6674f4f4-h7t2t 1/1 Running 1 37m openshift-apiserver apiserver-fm48r 1/1 
Running 0 30m openshift-apiserver apiserver-fxkvv 1/1 Running 0 29m openshift-apiserver apiserver-q85nm 1/1 Running 0 29m openshift-service-ca-operator openshift-service-ca-operator-66ff6dc6cd-9r257 1/1 Running 0 37m openshift-service-ca apiservice-cabundle-injector-695b6bcbc-cl5hm 1/1 Running 0 35m openshift-service-ca configmap-cabundle-injector-8498544d7-25qn6 1/1 Running 0 35m openshift-service-ca service-serving-cert-signer-6445fc9c6-wqdqn 1/1 Running 0 35m openshift-service-catalog-apiserver-operator openshift-service-catalog-apiserver-operator-549f44668b-b5q2w 1/1 Running 0 32m openshift-service-catalog-controller-manager-operator openshift-service-catalog-controller-manager-operator-b78cr2lnm 1/1 Running 0 31m", "export MASTER_SUBNET_CIDR='10.0.0.0/17'", "export WORKER_SUBNET_CIDR='10.0.128.0/17'", "export REGION='<region>'", "export HOST_PROJECT=<host_project>", "export HOST_PROJECT_ACCOUNT=<host_service_account_email>", "cat <<EOF >01_vpc.yaml imports: - path: 01_vpc.py resources: - name: cluster-vpc type: 01_vpc.py properties: infra_id: '<prefix>' 1 region: 'USD{REGION}' 2 master_subnet_cidr: 'USD{MASTER_SUBNET_CIDR}' 3 worker_subnet_cidr: 'USD{WORKER_SUBNET_CIDR}' 4 EOF", "gcloud deployment-manager deployments create <vpc_deployment_name> --config 01_vpc.yaml --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} 1", "export HOST_PROJECT_NETWORK=<vpc_network>", "export HOST_PROJECT_CONTROL_SUBNET=<control_plane_subnet>", "export HOST_PROJECT_COMPUTE_SUBNET=<compute_subnet>", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-network', 'type': 'compute.v1.network', 'properties': { 'region': context.properties['region'], 'autoCreateSubnetworks': False } }, { 'name': context.properties['infra_id'] + '-master-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['master_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-worker-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['worker_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-router', 'type': 'compute.v1.router', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'nats': [{ 'name': context.properties['infra_id'] + '-nat-master', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 7168, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' + context.properties['infra_id'] + '-master-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }, { 'name': context.properties['infra_id'] + '-nat-worker', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 512, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' 
+ context.properties['infra_id'] + '-worker-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }] } }] return {'resources': resources}", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c tags: 5 - control-plane-tag1 - control-plane-tag2 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c tags: 8 - compute-tag1 - compute-tag2 replicas: 0 metadata: name: test-cluster networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 9 serviceNetwork: - 172.30.0.0/16 platform: gcp: defaultMachinePlatform: tags: 10 - global-tag1 - global-tag2 projectID: openshift-production 11 region: us-central1 12 pullSecret: '{\"auths\": ...}' fips: false 13 sshKey: ssh-ed25519 AAAA... 14 publish: Internal 15", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "./openshift-install create manifests --dir <installation_directory> 1", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone status: {}", "config: |+ [global] project-id = example-project regional = true multizone = true node-tags = opensh-ptzzx-master node-tags = opensh-ptzzx-worker node-instance-prefix = opensh-ptzzx external-instance-groups-prefix = opensh-ptzzx network-project-id = example-shared-vpc network-name = example-network subnetwork-name = example-worker-subnet", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External type: LoadBalancerService status: availableReplicas: 0 domain: '' selector: ''", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "jq -r .infraID <installation_directory>/metadata.json 1", "openshift-vw9j6 1", "export BASE_DOMAIN='<base_domain>' 1 export BASE_DOMAIN_ZONE_NAME='<base_domain_zone_name>' 2 export NETWORK_CIDR='10.0.0.0/16' export KUBECONFIG=<installation_directory>/auth/kubeconfig 3 export CLUSTER_NAME=`jq -r .clusterName <installation_directory>/metadata.json` export INFRA_ID=`jq -r .infraID <installation_directory>/metadata.json` export PROJECT_NAME=`jq -r .gcp.projectID <installation_directory>/metadata.json`", "export CLUSTER_NETWORK=(`gcloud compute networks describe USD{HOST_PROJECT_NETWORK} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} --format json | jq -r .selfLink`)", "export CONTROL_SUBNET=(`gcloud compute networks subnets describe USD{HOST_PROJECT_CONTROL_SUBNET} --region=USD{REGION} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} --format json | jq -r .selfLink`)", "export ZONE_0=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[0] | cut -d \"/\" -f9`)", "export ZONE_1=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[1] | cut -d \"/\" -f9`)", "export ZONE_2=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[2] | cut -d \"/\" -f9`)", "cat <<EOF >02_infra.yaml imports: - path: 02_lb_ext.py - path: 02_lb_int.py 1 resources: - name: cluster-lb-ext 2 type: 02_lb_ext.py properties: infra_id: 'USD{INFRA_ID}' 3 region: 'USD{REGION}' 4 - name: cluster-lb-int type: 02_lb_int.py properties: cluster_network: 'USD{CLUSTER_NETWORK}' control_subnet: 'USD{CONTROL_SUBNET}' 5 infra_id: 'USD{INFRA_ID}' region: 'USD{REGION}' zones: 6 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-infra --config 02_infra.yaml", "export CLUSTER_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-ip --region=USD{REGION} --format json | jq -r .address`)", "export CLUSTER_PUBLIC_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-public-ip --region=USD{REGION} --format json | jq -r .address`)", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-cluster-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-http-health-check', 'type': 'compute.v1.httpHealthCheck', 'properties': { 'port': 6080, 'requestPath': '/readyz' } }, { 'name': context.properties['infra_id'] + '-api-target-pool', 'type': 'compute.v1.targetPool', 'properties': { 'region': context.properties['region'], 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-http-health-check.selfLink)'], 'instances': [] } }, { 'name': context.properties['infra_id'] + '-api-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'region': context.properties['region'], 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-public-ip.selfLink)', 'target': 'USD(ref.' + context.properties['infra_id'] + '-api-target-pool.selfLink)', 'portRange': '6443' } }] return {'resources': resources}", "def GenerateConfig(context): backends = [] for zone in context.properties['zones']: backends.append({ 'group': 'USD(ref.' 
+ context.properties['infra_id'] + '-master-' + zone + '-ig' + '.selfLink)' }) resources = [{ 'name': context.properties['infra_id'] + '-cluster-ip', 'type': 'compute.v1.address', 'properties': { 'addressType': 'INTERNAL', 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-internal-health-check', 'type': 'compute.v1.healthCheck', 'properties': { 'httpsHealthCheck': { 'port': 6443, 'requestPath': '/readyz' }, 'type': \"HTTPS\" } }, { 'name': context.properties['infra_id'] + '-api-internal-backend-service', 'type': 'compute.v1.regionBackendService', 'properties': { 'backends': backends, 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-internal-health-check.selfLink)'], 'loadBalancingScheme': 'INTERNAL', 'region': context.properties['region'], 'protocol': 'TCP', 'timeoutSec': 120 } }, { 'name': context.properties['infra_id'] + '-api-internal-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'backendService': 'USD(ref.' + context.properties['infra_id'] + '-api-internal-backend-service.selfLink)', 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-ip.selfLink)', 'loadBalancingScheme': 'INTERNAL', 'ports': ['6443','22623'], 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }] for zone in context.properties['zones']: resources.append({ 'name': context.properties['infra_id'] + '-master-' + zone + '-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': zone } }) return {'resources': resources}", "cat <<EOF >02_dns.yaml imports: - path: 02_dns.py resources: - name: cluster-dns type: 02_dns.py properties: infra_id: 'USD{INFRA_ID}' 1 cluster_domain: 'USD{CLUSTER_NAME}.USD{BASE_DOMAIN}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-dns --config 02_dns.yaml --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT}", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api-int.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT}", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} dns record-sets transaction add USD{CLUSTER_PUBLIC_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. 
--ttl 60 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME}", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-private-zone', 'type': 'dns.v1.managedZone', 'properties': { 'description': '', 'dnsName': context.properties['cluster_domain'] + '.', 'visibility': 'private', 'privateVisibilityConfig': { 'networks': [{ 'networkUrl': context.properties['cluster_network'] }] } } }] return {'resources': resources}", "cat <<EOF >03_firewall.yaml imports: - path: 03_firewall.py resources: - name: cluster-firewall type: 03_firewall.py properties: allowed_external_cidr: '0.0.0.0/0' 1 infra_id: 'USD{INFRA_ID}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 network_cidr: 'USD{NETWORK_CIDR}' 4 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-firewall --config 03_firewall.yaml --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT}", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-in-ssh', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-bootstrap'] } }, { 'name': context.properties['infra_id'] + '-api', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6443'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-health-checks', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6080', '6443', '22624'] }], 'sourceRanges': ['35.191.0.0/16', '130.211.0.0/22', '209.85.152.0/22', '209.85.204.0/22'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-etcd', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['2379-2380'] }], 'sourceTags': [context.properties['infra_id'] + '-master'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-control-plane', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['10257'] },{ 'IPProtocol': 'tcp', 'ports': ['10259'] },{ 'IPProtocol': 'tcp', 'ports': ['22623'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-internal-network', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'icmp' },{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['network_cidr']], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }, { 'name': context.properties['infra_id'] + '-internal-cluster', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'udp', 'ports': 
['4789', '6081'] },{ 'IPProtocol': 'udp', 'ports': ['500', '4500'] },{ 'IPProtocol': 'esp', },{ 'IPProtocol': 'tcp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'udp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'tcp', 'ports': ['10250'] },{ 'IPProtocol': 'tcp', 'ports': ['30000-32767'] },{ 'IPProtocol': 'udp', 'ports': ['30000-32767'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }] return {'resources': resources}", "cat <<EOF >03_iam.yaml imports: - path: 03_iam.py resources: - name: cluster-iam type: 03_iam.py properties: infra_id: 'USD{INFRA_ID}' 1 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-iam --config 03_iam.yaml", "export MASTER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-m@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)", "export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)", "gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} projects add-iam-policy-binding USD{HOST_PROJECT} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkViewer\"", "gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} compute networks subnets add-iam-policy-binding \"USD{HOST_PROJECT_CONTROL_SUBNET}\" --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkUser\" --region USD{REGION}", "gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} compute networks subnets add-iam-policy-binding \"USD{HOST_PROJECT_CONTROL_SUBNET}\" --member \"serviceAccount:USD{WORKER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkUser\" --region USD{REGION}", "gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} compute networks subnets add-iam-policy-binding \"USD{HOST_PROJECT_COMPUTE_SUBNET}\" --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkUser\" --region USD{REGION}", "gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} compute networks subnets add-iam-policy-binding \"USD{HOST_PROJECT_COMPUTE_SUBNET}\" --member \"serviceAccount:USD{WORKER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkUser\" --region USD{REGION}", "gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.instanceAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.securityAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/iam.serviceAccountUser\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/storage.admin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{WORKER_SERVICE_ACCOUNT}\" --role \"roles/compute.viewer\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{WORKER_SERVICE_ACCOUNT}\" --role \"roles/storage.admin\"", "gcloud iam service-accounts keys create 
service-account-key.json --iam-account=USD{MASTER_SERVICE_ACCOUNT}", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-m', 'displayName': context.properties['infra_id'] + '-master-node' } }, { 'name': context.properties['infra_id'] + '-worker-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-w', 'displayName': context.properties['infra_id'] + '-worker-node' } }] return {'resources': resources}", "gsutil mb gs://<bucket_name>", "gsutil cp <downloaded_image_file_path>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz gs://<bucket_name>", "export IMAGE_SOURCE=gs://<bucket_name>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz", "gcloud compute images create \"USD{INFRA_ID}-rhcos-image\" --source-uri=\"USD{IMAGE_SOURCE}\"", "export CLUSTER_IMAGE=(`gcloud compute images describe USD{INFRA_ID}-rhcos-image --format json | jq -r .selfLink`)", "gsutil mb gs://USD{INFRA_ID}-bootstrap-ignition", "gsutil cp <installation_directory>/bootstrap.ign gs://USD{INFRA_ID}-bootstrap-ignition/", "export BOOTSTRAP_IGN=`gsutil signurl -d 1h service-account-key.json gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign | grep \"^gs:\" | awk '{print USD5}'`", "cat <<EOF >04_bootstrap.yaml imports: - path: 04_bootstrap.py resources: - name: cluster-bootstrap type: 04_bootstrap.py properties: infra_id: 'USD{INFRA_ID}' 1 region: 'USD{REGION}' 2 zone: 'USD{ZONE_0}' 3 cluster_network: 'USD{CLUSTER_NETWORK}' 4 control_subnet: 'USD{CONTROL_SUBNET}' 5 image: 'USD{CLUSTER_IMAGE}' 6 machine_type: 'n1-standard-4' 7 root_volume_size: '128' 8 bootstrap_ign: 'USD{BOOTSTRAP_IGN}' 9 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-bootstrap --config 04_bootstrap.yaml", "gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-bootstrap-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-bootstrap", "gcloud compute backend-services add-backend USD{INFRA_ID}-api-internal-backend-service --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0}", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { 'name': context.properties['infra_id'] + '-bootstrap', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': '{\"ignition\":{\"config\":{\"replace\":{\"source\":\"' + context.properties['bootstrap_ign'] + '\"}},\"version\":\"3.2.0\"}}', }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'], 'accessConfigs': [{ 'natIP': 'USD(ref.' 
+ context.properties['infra_id'] + '-bootstrap-public-ip.address)' }] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-bootstrap' ] }, 'zone': context.properties['zone'] } }, { 'name': context.properties['infra_id'] + '-bootstrap-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': context.properties['zone'] } }] return {'resources': resources}", "export MASTER_IGNITION=`cat <installation_directory>/master.ign`", "cat <<EOF >05_control_plane.yaml imports: - path: 05_control_plane.py resources: - name: cluster-control-plane type: 05_control_plane.py properties: infra_id: 'USD{INFRA_ID}' 1 zones: 2 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' control_subnet: 'USD{CONTROL_SUBNET}' 3 image: 'USD{CLUSTER_IMAGE}' 4 machine_type: 'n1-standard-4' 5 root_volume_size: '128' service_account_email: 'USD{MASTER_SERVICE_ACCOUNT}' 6 ignition: 'USD{MASTER_IGNITION}' 7 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-control-plane --config 05_control_plane.yaml", "gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_0}-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-master-0", "gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_1}-ig --zone=USD{ZONE_1} --instances=USD{INFRA_ID}-master-1", "gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_2}-ig --zone=USD{ZONE_2} --instances=USD{INFRA_ID}-master-2", "gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_0}\" --instances=USD{INFRA_ID}-master-0", "gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_1}\" --instances=USD{INFRA_ID}-master-1", "gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_2}\" --instances=USD{INFRA_ID}-master-2", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-0', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][0] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][0] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][0] } }, { 'name': context.properties['infra_id'] + '-master-1', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][1] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][1] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 
'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][1] } }, { 'name': context.properties['infra_id'] + '-master-2', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][2] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][2] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][2] } }] return {'resources': resources}", "./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level info 2", "gcloud compute backend-services remove-backend USD{INFRA_ID}-api-internal-backend-service --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0}", "gsutil rm gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign", "gsutil rb gs://USD{INFRA_ID}-bootstrap-ignition", "gcloud deployment-manager deployments delete USD{INFRA_ID}-bootstrap", "export COMPUTE_SUBNET=(`gcloud compute networks subnets describe USD{HOST_PROJECT_COMPUTE_SUBNET} --region=USD{REGION} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} --format json | jq -r .selfLink`)", "export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)", "export WORKER_IGNITION=`cat <installation_directory>/worker.ign`", "cat <<EOF >06_worker.yaml imports: - path: 06_worker.py resources: - name: 'worker-0' 1 type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 2 zone: 'USD{ZONE_0}' 3 compute_subnet: 'USD{COMPUTE_SUBNET}' 4 image: 'USD{CLUSTER_IMAGE}' 5 machine_type: 'n1-standard-4' 6 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 7 ignition: 'USD{WORKER_IGNITION}' 8 - name: 'worker-1' type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 9 zone: 'USD{ZONE_1}' 10 compute_subnet: 'USD{COMPUTE_SUBNET}' 11 image: 'USD{CLUSTER_IMAGE}' 12 machine_type: 'n1-standard-4' 13 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 14 ignition: 'USD{WORKER_IGNITION}' 15 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-worker --config 06_worker.yaml", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-' + context.env['name'], 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': 
[{ 'subnetwork': context.properties['compute_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-worker', ] }, 'zone': context.properties['zone'] } }] return {'resources': resources}", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.25.0 master-1 Ready master 63m v1.25.0 master-2 Ready master 64m v1.25.0", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.25.0 master-1 Ready master 73m v1.25.0 master-2 Ready master 74m v1.25.0 worker-0 Ready worker 11m v1.25.0 worker-1 Ready worker 11m v1.25.0", "oc -n openshift-ingress get service router-default", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.18.154 35.233.157.184 80:32288/TCP,443:31215/TCP 98", "export ROUTER_IP=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'`", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} gcloud dns record-sets transaction add USD{ROUTER_IP} --name \\*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 300 --type A --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT}", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} gcloud dns record-sets transaction add USD{ROUTER_IP} --name \\*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. 
--ttl 300 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} gcloud dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT}", "oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes", "oauth-openshift.apps.your.cluster.domain.example.com console-openshift-console.apps.your.cluster.domain.example.com downloads-openshift-console.apps.your.cluster.domain.example.com alertmanager-main-openshift-monitoring.apps.your.cluster.domain.example.com prometheus-k8s-openshift-monitoring.apps.your.cluster.domain.example.com", "oc get events -n openshift-ingress --field-selector=\"reason=LoadBalancerManualChange\"", "Firewall change required by security admin: `gcloud compute firewall-rules create k8s-fw-a26e631036a3f46cba28f8df67266d55 --network example-network --description \"{\\\"kubernetes.io/service-name\\\":\\\"openshift-ingress/router-default\\\", \\\"kubernetes.io/service-ip\\\":\\\"35.237.236.234\\\"}\\\" --allow tcp:443,tcp:80 --source-ranges 0.0.0.0/0 --target-tags exampl-fqzq7-master,exampl-fqzq7-worker --project example-project`", "gcloud compute firewall-rules create --allow='tcp:30000-32767,udp:30000-32767' --network=\"USD{CLUSTER_NETWORK}\" --source-ranges='130.211.0.0/22,35.191.0.0/16,209.85.152.0/22,209.85.204.0/22' --target-tags=\"USD{INFRA_ID}-master,USD{INFRA_ID}-worker\" USD{INFRA_ID}-ingress-hc --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT}", "gcloud compute firewall-rules create --allow='tcp:80,tcp:443' --network=\"USD{CLUSTER_NETWORK}\" --source-ranges=\"0.0.0.0/0\" --target-tags=\"USD{INFRA_ID}-master,USD{INFRA_ID}-worker\" USD{INFRA_ID}-ingress --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT}", "gcloud compute firewall-rules create --allow='tcp:80,tcp:443' --network=\"USD{CLUSTER_NETWORK}\" --source-ranges=USD{NETWORK_CIDR} --target-tags=\"USD{INFRA_ID}-master,USD{INFRA_ID}-worker\" USD{INFRA_ID}-ingress --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT}", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get clusterversion", "NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version False True 24m Working towards 4.5.4: 99% complete", "oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.5.4 True False False 7m56s cloud-credential 4.5.4 True False False 31m cluster-autoscaler 4.5.4 True False False 16m console 4.5.4 True False False 10m csi-snapshot-controller 4.5.4 True False False 16m dns 4.5.4 True False False 22m etcd 4.5.4 False False False 25s image-registry 4.5.4 True False False 16m ingress 4.5.4 True False False 16m insights 4.5.4 True False False 17m kube-apiserver 4.5.4 True False False 19m kube-controller-manager 4.5.4 True False False 20m kube-scheduler 4.5.4 True False False 20m kube-storage-version-migrator 4.5.4 True False False 16m machine-api 4.5.4 True False False 22m machine-config 4.5.4 True False False 22m marketplace 4.5.4 True False False 16m monitoring 4.5.4 True False False 10m network 4.5.4 True False False 23m node-tuning 4.5.4 True False False 23m openshift-apiserver 4.5.4 True False False 17m openshift-controller-manager 4.5.4 True False False 15m openshift-samples 4.5.4 True False False 16m operator-lifecycle-manager 4.5.4 True False False 22m operator-lifecycle-manager-catalog 
4.5.4 True False False 22m operator-lifecycle-manager-packageserver 4.5.4 True False False 18m service-ca 4.5.4 True False False 23m service-catalog-apiserver 4.5.4 True False False 23m service-catalog-controller-manager 4.5.4 True False False 23m storage 4.5.4 True False False 17m", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE kube-system etcd-member-ip-10-0-3-111.us-east-2.compute.internal 1/1 Running 0 35m kube-system etcd-member-ip-10-0-3-239.us-east-2.compute.internal 1/1 Running 0 37m kube-system etcd-member-ip-10-0-3-24.us-east-2.compute.internal 1/1 Running 0 35m openshift-apiserver-operator openshift-apiserver-operator-6d6674f4f4-h7t2t 1/1 Running 1 37m openshift-apiserver apiserver-fm48r 1/1 Running 0 30m openshift-apiserver apiserver-fxkvv 1/1 Running 0 29m openshift-apiserver apiserver-q85nm 1/1 Running 0 29m openshift-service-ca-operator openshift-service-ca-operator-66ff6dc6cd-9r257 1/1 Running 0 37m openshift-service-ca apiservice-cabundle-injector-695b6bcbc-cl5hm 1/1 Running 0 35m openshift-service-ca configmap-cabundle-injector-8498544d7-25qn6 1/1 Running 0 35m openshift-service-ca service-serving-cert-signer-6445fc9c6-wqdqn 1/1 Running 0 35m openshift-service-catalog-apiserver-operator openshift-service-catalog-apiserver-operator-549f44668b-b5q2w 1/1 Running 0 32m openshift-service-catalog-controller-manager-operator openshift-service-catalog-controller-manager-operator-b78cr2lnm 1/1 Running 0 31m", "mkdir USDHOME/clusterconfig", "openshift-install create manifests --dir USDHOME/clusterconfig", "? SSH Public Key INFO Credentials loaded from the \"myprofile\" profile in file \"/home/myuser/.aws/credentials\" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift", "ls USDHOME/clusterconfig/openshift/", "99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml", "variant: openshift version: 4.12.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_id> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true", "butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml", "openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign", "./openshift-install create install-config --dir <installation_directory> 1", "pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'", "additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----", "network: <existing_vpc> controlPlaneSubnet: <control_plane_subnet> computeSubnet: <compute_subnet>", "imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: 
https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "./openshift-install create manifests --dir <installation_directory> 1", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {}", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "jq -r .infraID <installation_directory>/metadata.json 1", "openshift-vw9j6 1", "export BASE_DOMAIN='<base_domain>' export BASE_DOMAIN_ZONE_NAME='<base_domain_zone_name>' export NETWORK_CIDR='10.0.0.0/16' export MASTER_SUBNET_CIDR='10.0.0.0/17' export WORKER_SUBNET_CIDR='10.0.128.0/17' export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 export CLUSTER_NAME=`jq -r .clusterName <installation_directory>/metadata.json` export INFRA_ID=`jq -r .infraID <installation_directory>/metadata.json` export PROJECT_NAME=`jq -r .gcp.projectID <installation_directory>/metadata.json` export REGION=`jq -r .gcp.region <installation_directory>/metadata.json`", "cat <<EOF >01_vpc.yaml imports: - path: 01_vpc.py resources: - name: cluster-vpc type: 01_vpc.py properties: infra_id: 'USD{INFRA_ID}' 1 region: 'USD{REGION}' 2 master_subnet_cidr: 'USD{MASTER_SUBNET_CIDR}' 3 worker_subnet_cidr: 'USD{WORKER_SUBNET_CIDR}' 4 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-vpc --config 01_vpc.yaml", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-network', 'type': 'compute.v1.network', 'properties': { 'region': context.properties['region'], 'autoCreateSubnetworks': False } }, { 'name': context.properties['infra_id'] + '-master-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['master_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-worker-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['worker_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-router', 'type': 'compute.v1.router', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'nats': [{ 'name': context.properties['infra_id'] + '-nat-master', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 7168, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' 
+ context.properties['infra_id'] + '-master-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }, { 'name': context.properties['infra_id'] + '-nat-worker', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 512, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' + context.properties['infra_id'] + '-worker-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }] } }] return {'resources': resources}", "export CLUSTER_NETWORK=(`gcloud compute networks describe USD{INFRA_ID}-network --format json | jq -r .selfLink`)", "export CONTROL_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-master-subnet --region=USD{REGION} --format json | jq -r .selfLink`)", "export ZONE_0=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[0] | cut -d \"/\" -f9`)", "export ZONE_1=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[1] | cut -d \"/\" -f9`)", "export ZONE_2=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[2] | cut -d \"/\" -f9`)", "cat <<EOF >02_infra.yaml imports: - path: 02_lb_ext.py - path: 02_lb_int.py 1 resources: - name: cluster-lb-ext 2 type: 02_lb_ext.py properties: infra_id: 'USD{INFRA_ID}' 3 region: 'USD{REGION}' 4 - name: cluster-lb-int type: 02_lb_int.py properties: cluster_network: 'USD{CLUSTER_NETWORK}' control_subnet: 'USD{CONTROL_SUBNET}' 5 infra_id: 'USD{INFRA_ID}' region: 'USD{REGION}' zones: 6 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-infra --config 02_infra.yaml", "export CLUSTER_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-ip --region=USD{REGION} --format json | jq -r .address`)", "export CLUSTER_PUBLIC_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-public-ip --region=USD{REGION} --format json | jq -r .address`)", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-cluster-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-http-health-check', 'type': 'compute.v1.httpHealthCheck', 'properties': { 'port': 6080, 'requestPath': '/readyz' } }, { 'name': context.properties['infra_id'] + '-api-target-pool', 'type': 'compute.v1.targetPool', 'properties': { 'region': context.properties['region'], 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-http-health-check.selfLink)'], 'instances': [] } }, { 'name': context.properties['infra_id'] + '-api-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'region': context.properties['region'], 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-public-ip.selfLink)', 'target': 'USD(ref.' + context.properties['infra_id'] + '-api-target-pool.selfLink)', 'portRange': '6443' } }] return {'resources': resources}", "def GenerateConfig(context): backends = [] for zone in context.properties['zones']: backends.append({ 'group': 'USD(ref.' 
+ context.properties['infra_id'] + '-master-' + zone + '-ig' + '.selfLink)' }) resources = [{ 'name': context.properties['infra_id'] + '-cluster-ip', 'type': 'compute.v1.address', 'properties': { 'addressType': 'INTERNAL', 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-internal-health-check', 'type': 'compute.v1.healthCheck', 'properties': { 'httpsHealthCheck': { 'port': 6443, 'requestPath': '/readyz' }, 'type': \"HTTPS\" } }, { 'name': context.properties['infra_id'] + '-api-internal-backend-service', 'type': 'compute.v1.regionBackendService', 'properties': { 'backends': backends, 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-internal-health-check.selfLink)'], 'loadBalancingScheme': 'INTERNAL', 'region': context.properties['region'], 'protocol': 'TCP', 'timeoutSec': 120 } }, { 'name': context.properties['infra_id'] + '-api-internal-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'backendService': 'USD(ref.' + context.properties['infra_id'] + '-api-internal-backend-service.selfLink)', 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-ip.selfLink)', 'loadBalancingScheme': 'INTERNAL', 'ports': ['6443','22623'], 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }] for zone in context.properties['zones']: resources.append({ 'name': context.properties['infra_id'] + '-master-' + zone + '-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': zone } }) return {'resources': resources}", "cat <<EOF >02_dns.yaml imports: - path: 02_dns.py resources: - name: cluster-dns type: 02_dns.py properties: infra_id: 'USD{INFRA_ID}' 1 cluster_domain: 'USD{CLUSTER_NAME}.USD{BASE_DOMAIN}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-dns --config 02_dns.yaml", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api-int.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction add USD{CLUSTER_PUBLIC_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. 
--ttl 60 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME}", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-private-zone', 'type': 'dns.v1.managedZone', 'properties': { 'description': '', 'dnsName': context.properties['cluster_domain'] + '.', 'visibility': 'private', 'privateVisibilityConfig': { 'networks': [{ 'networkUrl': context.properties['cluster_network'] }] } } }] return {'resources': resources}", "cat <<EOF >03_firewall.yaml imports: - path: 03_firewall.py resources: - name: cluster-firewall type: 03_firewall.py properties: allowed_external_cidr: '0.0.0.0/0' 1 infra_id: 'USD{INFRA_ID}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 network_cidr: 'USD{NETWORK_CIDR}' 4 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-firewall --config 03_firewall.yaml", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-in-ssh', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-bootstrap'] } }, { 'name': context.properties['infra_id'] + '-api', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6443'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-health-checks', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6080', '6443', '22624'] }], 'sourceRanges': ['35.191.0.0/16', '130.211.0.0/22', '209.85.152.0/22', '209.85.204.0/22'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-etcd', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['2379-2380'] }], 'sourceTags': [context.properties['infra_id'] + '-master'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-control-plane', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['10257'] },{ 'IPProtocol': 'tcp', 'ports': ['10259'] },{ 'IPProtocol': 'tcp', 'ports': ['22623'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-internal-network', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'icmp' },{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['network_cidr']], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }, { 'name': context.properties['infra_id'] + '-internal-cluster', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'udp', 'ports': ['4789', '6081'] },{ 'IPProtocol': 'udp', 'ports': ['500', '4500'] },{ 'IPProtocol': 'esp', },{ 'IPProtocol': 'tcp', 'ports': 
['9000-9999'] },{ 'IPProtocol': 'udp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'tcp', 'ports': ['10250'] },{ 'IPProtocol': 'tcp', 'ports': ['30000-32767'] },{ 'IPProtocol': 'udp', 'ports': ['30000-32767'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }] return {'resources': resources}", "cat <<EOF >03_iam.yaml imports: - path: 03_iam.py resources: - name: cluster-iam type: 03_iam.py properties: infra_id: 'USD{INFRA_ID}' 1 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-iam --config 03_iam.yaml", "export MASTER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-m@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)", "export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)", "export COMPUTE_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-worker-subnet --region=USD{REGION} --format json | jq -r .selfLink`)", "gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.instanceAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.securityAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/iam.serviceAccountUser\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/storage.admin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{WORKER_SERVICE_ACCOUNT}\" --role \"roles/compute.viewer\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{WORKER_SERVICE_ACCOUNT}\" --role \"roles/storage.admin\"", "gcloud iam service-accounts keys create service-account-key.json --iam-account=USD{MASTER_SERVICE_ACCOUNT}", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-m', 'displayName': context.properties['infra_id'] + '-master-node' } }, { 'name': context.properties['infra_id'] + '-worker-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-w', 'displayName': context.properties['infra_id'] + '-worker-node' } }] return {'resources': resources}", "gsutil mb gs://<bucket_name>", "gsutil cp <downloaded_image_file_path>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz gs://<bucket_name>", "export IMAGE_SOURCE=gs://<bucket_name>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz", "gcloud compute images create \"USD{INFRA_ID}-rhcos-image\" --source-uri=\"USD{IMAGE_SOURCE}\"", "export CLUSTER_IMAGE=(`gcloud compute images describe USD{INFRA_ID}-rhcos-image --format json | jq -r .selfLink`)", "gsutil mb gs://USD{INFRA_ID}-bootstrap-ignition", "gsutil cp <installation_directory>/bootstrap.ign gs://USD{INFRA_ID}-bootstrap-ignition/", "export BOOTSTRAP_IGN=`gsutil signurl -d 1h service-account-key.json gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign | 
grep \"^gs:\" | awk '{print USD5}'`", "cat <<EOF >04_bootstrap.yaml imports: - path: 04_bootstrap.py resources: - name: cluster-bootstrap type: 04_bootstrap.py properties: infra_id: 'USD{INFRA_ID}' 1 region: 'USD{REGION}' 2 zone: 'USD{ZONE_0}' 3 cluster_network: 'USD{CLUSTER_NETWORK}' 4 control_subnet: 'USD{CONTROL_SUBNET}' 5 image: 'USD{CLUSTER_IMAGE}' 6 machine_type: 'n1-standard-4' 7 root_volume_size: '128' 8 bootstrap_ign: 'USD{BOOTSTRAP_IGN}' 9 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-bootstrap --config 04_bootstrap.yaml", "gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-bootstrap-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-bootstrap", "gcloud compute backend-services add-backend USD{INFRA_ID}-api-internal-backend-service --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0}", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { 'name': context.properties['infra_id'] + '-bootstrap', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': '{\"ignition\":{\"config\":{\"replace\":{\"source\":\"' + context.properties['bootstrap_ign'] + '\"}},\"version\":\"3.2.0\"}}', }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'], 'accessConfigs': [{ 'natIP': 'USD(ref.' + context.properties['infra_id'] + '-bootstrap-public-ip.address)' }] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-bootstrap' ] }, 'zone': context.properties['zone'] } }, { 'name': context.properties['infra_id'] + '-bootstrap-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': context.properties['zone'] } }] return {'resources': resources}", "export MASTER_IGNITION=`cat <installation_directory>/master.ign`", "cat <<EOF >05_control_plane.yaml imports: - path: 05_control_plane.py resources: - name: cluster-control-plane type: 05_control_plane.py properties: infra_id: 'USD{INFRA_ID}' 1 zones: 2 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' control_subnet: 'USD{CONTROL_SUBNET}' 3 image: 'USD{CLUSTER_IMAGE}' 4 machine_type: 'n1-standard-4' 5 root_volume_size: '128' service_account_email: 'USD{MASTER_SERVICE_ACCOUNT}' 6 ignition: 'USD{MASTER_IGNITION}' 7 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-control-plane --config 05_control_plane.yaml", "gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_0}-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-master-0", "gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_1}-ig --zone=USD{ZONE_1} --instances=USD{INFRA_ID}-master-1", "gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_2}-ig --zone=USD{ZONE_2} --instances=USD{INFRA_ID}-master-2", "gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_0}\" --instances=USD{INFRA_ID}-master-0", 
"gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_1}\" --instances=USD{INFRA_ID}-master-1", "gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_2}\" --instances=USD{INFRA_ID}-master-2", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-0', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][0] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][0] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][0] } }, { 'name': context.properties['infra_id'] + '-master-1', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][1] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][1] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][1] } }, { 'name': context.properties['infra_id'] + '-master-2', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][2] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][2] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][2] } }] return {'resources': resources}", "./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level info 2", "gcloud compute backend-services remove-backend USD{INFRA_ID}-api-internal-backend-service --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0}", "gsutil rm gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign", "gsutil rb gs://USD{INFRA_ID}-bootstrap-ignition", "gcloud deployment-manager deployments delete USD{INFRA_ID}-bootstrap", "export COMPUTE_SUBNET=(`gcloud 
compute networks subnets describe USD{INFRA_ID}-worker-subnet --region=USD{REGION} --format json | jq -r .selfLink`)", "export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)", "export WORKER_IGNITION=`cat <installation_directory>/worker.ign`", "cat <<EOF >06_worker.yaml imports: - path: 06_worker.py resources: - name: 'worker-0' 1 type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 2 zone: 'USD{ZONE_0}' 3 compute_subnet: 'USD{COMPUTE_SUBNET}' 4 image: 'USD{CLUSTER_IMAGE}' 5 machine_type: 'n1-standard-4' 6 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 7 ignition: 'USD{WORKER_IGNITION}' 8 - name: 'worker-1' type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 9 zone: 'USD{ZONE_1}' 10 compute_subnet: 'USD{COMPUTE_SUBNET}' 11 image: 'USD{CLUSTER_IMAGE}' 12 machine_type: 'n1-standard-4' 13 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 14 ignition: 'USD{WORKER_IGNITION}' 15 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-worker --config 06_worker.yaml", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-' + context.env['name'], 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['compute_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-worker', ] }, 'zone': context.properties['zone'] } }] return {'resources': resources}", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.25.0 master-1 Ready master 63m v1.25.0 master-2 Ready master 64m v1.25.0", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.25.0 master-1 Ready master 73m v1.25.0 master-2 Ready master 74m v1.25.0 worker-0 Ready worker 11m v1.25.0 worker-1 Ready worker 11m v1.25.0", "oc -n openshift-ingress get service router-default", "NAME TYPE CLUSTER-IP 
EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.18.154 35.233.157.184 80:32288/TCP,443:31215/TCP 98", "export ROUTER_IP=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'`", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction add USD{ROUTER_IP} --name \\*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 300 --type A --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction add USD{ROUTER_IP} --name \\*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 300 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME}", "oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes", "oauth-openshift.apps.your.cluster.domain.example.com console-openshift-console.apps.your.cluster.domain.example.com downloads-openshift-console.apps.your.cluster.domain.example.com alertmanager-main-openshift-monitoring.apps.your.cluster.domain.example.com prometheus-k8s-openshift-monitoring.apps.your.cluster.domain.example.com", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get clusterversion", "NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version False True 24m Working towards 4.5.4: 99% complete", "oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.5.4 True False False 7m56s cloud-credential 4.5.4 True False False 31m cluster-autoscaler 4.5.4 True False False 16m console 4.5.4 True False False 10m csi-snapshot-controller 4.5.4 True False False 16m dns 4.5.4 True False False 22m etcd 4.5.4 False False False 25s image-registry 4.5.4 True False False 16m ingress 4.5.4 True False False 16m insights 4.5.4 True False False 17m kube-apiserver 4.5.4 True False False 19m kube-controller-manager 4.5.4 True False False 20m kube-scheduler 4.5.4 True False False 20m kube-storage-version-migrator 4.5.4 True False False 16m machine-api 4.5.4 True False False 22m machine-config 4.5.4 True False False 22m marketplace 4.5.4 True False False 16m monitoring 4.5.4 True False False 10m network 4.5.4 True False False 23m node-tuning 4.5.4 True False False 23m openshift-apiserver 4.5.4 True False False 17m openshift-controller-manager 4.5.4 True False False 15m openshift-samples 4.5.4 True False False 16m operator-lifecycle-manager 4.5.4 True False False 22m operator-lifecycle-manager-catalog 4.5.4 True False False 22m operator-lifecycle-manager-packageserver 4.5.4 True False False 18m service-ca 4.5.4 True False False 23m service-catalog-apiserver 4.5.4 True False False 23m service-catalog-controller-manager 4.5.4 True False False 23m storage 4.5.4 True False False 17m", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE kube-system etcd-member-ip-10-0-3-111.us-east-2.compute.internal 1/1 Running 0 35m kube-system etcd-member-ip-10-0-3-239.us-east-2.compute.internal 1/1 Running 0 37m kube-system etcd-member-ip-10-0-3-24.us-east-2.compute.internal 1/1 Running 0 35m openshift-apiserver-operator openshift-apiserver-operator-6d6674f4f4-h7t2t 1/1 Running 1 37m openshift-apiserver 
apiserver-fm48r 1/1 Running 0 30m openshift-apiserver apiserver-fxkvv 1/1 Running 0 29m openshift-apiserver apiserver-q85nm 1/1 Running 0 29m openshift-service-ca-operator openshift-service-ca-operator-66ff6dc6cd-9r257 1/1 Running 0 37m openshift-service-ca apiservice-cabundle-injector-695b6bcbc-cl5hm 1/1 Running 0 35m openshift-service-ca configmap-cabundle-injector-8498544d7-25qn6 1/1 Running 0 35m openshift-service-ca service-serving-cert-signer-6445fc9c6-wqdqn 1/1 Running 0 35m openshift-service-catalog-apiserver-operator openshift-service-catalog-apiserver-operator-549f44668b-b5q2w 1/1 Running 0 32m openshift-service-catalog-controller-manager-operator openshift-service-catalog-controller-manager-operator-b78cr2lnm 1/1 Running 0 31m", "./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --credentials-requests --cloud=gcp --to=<path_to_directory_with_list_of_credentials_requests>/credrequests \\ 1 USDRELEASE_IMAGE", "ccoctl gcp delete --name=<name> \\ 1 --project=<gcp_project_id> \\ 2 --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html-single/installing_on_gcp/index
probe::netfilter.bridge.post_routing
probe::netfilter.bridge.post_routing Name probe::netfilter.bridge.post_routing - Called before a bridging packet hits the wire Synopsis netfilter.bridge.post_routing Values llcproto_stp Constant used to signify Bridge Spanning Tree Protocol packet pf Protocol family -- always "bridge" indev Address of net_device representing input device, 0 if unknown nf_drop Constant used to signify a 'drop' verdict br_msg Message age in 1/256 secs nf_queue Constant used to signify a 'queue' verdict br_mac Bridge MAC address br_fd Forward delay in 1/256 secs brhdr Address of bridge header br_htime Hello time in 1/256 secs br_bid Identity of bridge br_rmac Root bridge MAC address protocol Packet protocol br_prid Protocol identifier br_type BPDU type nf_stop Constant used to signify a 'stop' verdict br_max Max age in 1/256 secs br_rid Identity of root bridge br_flags BPDU flags outdev_name Name of network device packet will be routed to (if known) nf_accept Constant used to signify an 'accept' verdict indev_name Name of network device packet was received on (if known) br_poid Port identifier outdev Address of net_device representing output device, 0 if unknown nf_repeat Constant used to signify a 'repeat' verdict br_vid Protocol version identifier length The length of the packet buffer contents, in bytes nf_stolen Constant used to signify a 'stolen' verdict br_cost Total cost from transmitting bridge to root llcpdu Address of LLC Protocol Data Unit
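A minimal usage sketch (not part of the original reference entry, and assuming the systemtap package and matching kernel debuginfo are installed): the probe and a few of the values listed above can be exercised directly from the shell, for example to log bridged packets as they leave the host.
# Hedged sketch: print input/output device, protocol, and packet length for each
# bridged packet that reaches this probe point; run as root and stop with Ctrl+C.
stap -e 'probe netfilter.bridge.post_routing {
  printf("%s -> %s proto=%x len=%d\n", indev_name, outdev_name, protocol, length)
}'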
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-netfilter-bridge-post-routing
Using automated rules on Cryostat
Using automated rules on Cryostat Red Hat build of Cryostat 2 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/2/html/using_automated_rules_on_cryostat/index
1.3.6. Committing Changes
1.3.6. Committing Changes To share your changes with others and commit them to a CVS repository, change to the directory with its working copy and run the following command: cvs commit [-m "commit message"] Note that unless you specify the commit message on the command line, CVS opens an external text editor (vi by default) for you to write it. For information on how to determine which editor to start, see Section 1.3.1, "Installing and Configuring CVS". Example 1.22. Committing changes to a CVS repository Imagine that the directory with your working copy of a CVS repository has the following contents: In this working copy, ChangeLog is scheduled for addition to the CVS repository, Makefile is already under revision control and contains local changes, and the TODO file, which is also under revision control, has been scheduled for removal and is no longer present in the working copy. To commit these changes to the CVS repository, type:
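The state described in Example 1.22 can be reached and committed with the short shell session below; this is a sketch only, and the working-copy path and editor choice are illustrative assumptions rather than part of the original example.
# Hedged sketch of the workflow described above, using the file names from Example 1.22.
cd ~/project                          # change to the working copy
cvs add ChangeLog                     # schedule a new file for addition
cvs remove -f TODO                    # delete the file and schedule it for removal
export CVSEDITOR=vim                  # optional: editor CVS opens when -m is omitted
cvs commit -m "Updated the makefile." # record all scheduled changes in the repository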
[ "project]USD ls AUTHORS ChangeLog CVS doc INSTALL LICENSE Makefile README src", "project]USD cvs commit -m \"Updated the makefile.\" cvs commit: Examining . cvs commit: Examining doc RCS file: /home/john/cvsroot/project/ChangeLog,v done Checking in ChangeLog; /home/john/cvsroot/project/ChangeLog,v <-- ChangeLog initial revision: 1.1 done Checking in Makefile; /home/john/cvsroot/project/Makefile,v <-- Makefile new revision: 1.2; previous revision: 1.1 done Removing TODO; /home/john/cvsroot/project/TODO,v <-- TODO new revision: delete; previous revision: 1.1.1.1 done" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/developer_guide/sect-Revision_Control_Systems-CVS-Commit
SystemTap Beginners Guide
SystemTap Beginners Guide Red Hat Enterprise Linux 7 Introduction to SystemTap William Cohen Red Hat Software Engineering Don Domingo Red Hat Customer Content Services Vladimir Slavik Red Hat Customer Content Services Robert Kratky Red Hat Customer Content Services Jacquelynn East Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_beginners_guide/index
Chapter 4. Supported operating systems and architectures
Chapter 4. Supported operating systems and architectures .NET 9.0 is available for OpenShift Container Platform, Red Hat Enterprise Linux 8.10 and later, Red Hat Enterprise Linux 9.5 and later, and Red Hat Enterprise Linux 10.0 and later. .NET 9.0 is available on the x86_64 (64-bit Intel/AMD), aarch64 (64-bit ARM), ppc64le (64-bit IBM Power), and s390x (64-bit IBM Z) architectures. .NET 9.0 is available for Red Hat Enterprise Linux 8 and later. Table 4.1. Supported deployment environments for .NET 9.0 Platform Architecture RPM Repository Red Hat Enterprise Linux 8 AMD64 and Intel 64 (x86_64) IBM Z and LinuxONE (s390x) 64-bit ARM (aarch64) IBM Power (ppc64le) dotnet-sdk-9.0 AppStream NOTE: The AppStream repositories are enabled by default in Red Hat Enterprise Linux 8. Red Hat Enterprise Linux 9 AMD64 and Intel 64 (x86_64) IBM Z and LinuxONE (s390x) 64-bit ARM (aarch64) IBM Power (ppc64le) dotnet-sdk-9.0 AppStream Red Hat Enterprise Linux 10 AMD64 and Intel 64 (x86_64) IBM Z and LinuxONE (s390x) 64-bit ARM (aarch64) IBM Power (ppc64le) dotnet-sdk-9.0 OpenShift Container Platform 4 AMD64 and Intel 64 (x86_64) 64-bit ARM (aarch64) IBM Power (ppc64le) IBM Z and LinuxONE (s390x)
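On a subscribed RHEL host, the table above maps to a single package install; the following is a hedged sketch (the only assumptions beyond the table are sudo access and an enabled AppStream or equivalent repository).
# Install the .NET 9.0 SDK from the repository listed in Table 4.1, then verify it.
sudo dnf install -y dotnet-sdk-9.0
dotnet --version   # should report a 9.0.x SDK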
null
https://docs.redhat.com/en/documentation/net/9.0/html/release_notes_for_.net_9.0_rpm_packages/supported-operating-systems-and-architecture_release-notes-for-dotnet-rpms
Chapter 15. Replacing storage devices
Chapter 15. Replacing storage devices 15.1. Replacing operational or failed storage devices on Red Hat OpenStack Platform installer-provisioned infrastructure Use this procedure to replace a storage device in OpenShift Data Foundation that is deployed on Red Hat OpenStack Platform. This procedure helps to create a new Persistent Volume Claim (PVC) on a new volume and remove the old object storage device (OSD). Procedure Identify the OSD that needs to be replaced and the OpenShift Container Platform node that has the OSD scheduled on it. Example output: In this example, rook-ceph-osd-0-6d77d6c7c6-m8xj6 needs to be replaced and compute-2 is the OpenShift Container Platform node on which the OSD is scheduled. Note If the OSD to be replaced is healthy, the status of the pod will be Running. Scale down the OSD deployment for the OSD to be replaced. where osd_id_to_remove is the integer in the pod name immediately after the rook-ceph-osd prefix. In this example, the deployment name is rook-ceph-osd-0. Example output: Verify that the rook-ceph-osd pod is terminated. Example output: Note If the rook-ceph-osd pod is in a terminating state, use the force option to delete the pod. Example output: In case the persistent volume associated with the failed OSD fails, get the failed persistent volume details and delete them using the following commands: Remove the old OSD from the cluster so that a new OSD can be added. Delete any old ocs-osd-removal jobs. Example output: Change to the openshift-storage project. Remove the old OSD from the cluster. You can add comma-separated OSD IDs in the command to remove more than one OSD. (For example, FAILED_OSD_IDS=0,1,2). The FORCE_OSD_REMOVAL value must be changed to "true" in clusters that only have three OSDs, or clusters with insufficient space to restore all three replicas of the data after the OSD is removed. Warning This step results in the OSD being completely removed from the cluster. Ensure that the correct value of osd_id_to_remove is provided. Verify that the OSD was removed successfully by checking the status of the ocs-osd-removal-job pod. A status of Completed confirms that the OSD removal job succeeded. Ensure that the OSD removal is completed. Example output: Important If the ocs-osd-removal-job fails and the pod is not in the expected Completed state, check the pod logs for further debugging. For example: If encryption was enabled at the time of install, remove the dm-crypt managed device-mapper mapping from the OSD devices that are removed from the respective OpenShift Data Foundation nodes. Get the PVC name(s) of the replaced OSD(s) from the logs of the ocs-osd-removal-job pod: For example: For each of the nodes identified in step #1, do the following: Create a debug pod and chroot to the host on the storage node. Find the relevant device name based on the PVC names identified in the previous step. Remove the mapped device. Note If the above command gets stuck due to insufficient privileges, run the following commands: Press CTRL+Z to exit the above command. Find the PID of the process that was stuck. Terminate the process using the kill command. Verify that the device name is removed. Delete the ocs-osd-removal job. Example output: Verification steps Verify that there is a new OSD running. Example output: Verify that there is a new PVC created which is in the Bound state. Example output: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running. 
<OSD-pod-name> is the name of the OSD pod. For example: Example output: For each of the nodes identified in the previous step, do the following: Create a debug pod and open a chroot environment for the selected host(s). Run "lsblk" and check for the "crypt" keyword beside the ocs-deviceset name(s). Log in to the OpenShift Web Console and view the storage dashboard. Figure 15.1. OSD status in OpenShift Container Platform storage dashboard after device replacement
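The wait-and-verify portion of this procedure can also be scripted. The following is a hedged sketch, not part of the documented procedure; it assumes the job label job-name=ocs-osd-removal-job shown in the commands below and a 10-minute timeout.
# Block until the OSD removal job completes, then confirm the removal message in its log.
oc wait --for=condition=complete job -l job-name=ocs-osd-removal-job -n openshift-storage --timeout=600s
oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 | egrep -i 'completed removal'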
[ "oc get -n openshift-storage pods -l app=rook-ceph-osd -o wide", "rook-ceph-osd-0-6d77d6c7c6-m8xj6 0/1 CrashLoopBackOff 0 24h 10.129.0.16 compute-2 <none> <none> rook-ceph-osd-1-85d99fb95f-2svc7 1/1 Running 0 24h 10.128.2.24 compute-0 <none> <none> rook-ceph-osd-2-6c66cdb977-jp542 1/1 Running 0 24h 10.130.0.18 compute-1 <none> <none>", "osd_id_to_remove=0 oc scale -n openshift-storage deployment rook-ceph-osd-USD{osd_id_to_remove} --replicas=0", "deployment.extensions/rook-ceph-osd-0 scaled", "oc get -n openshift-storage pods -l ceph-osd-id=USD{osd_id_to_remove}", "No resources found.", "oc delete pod rook-ceph-osd-0-6d77d6c7c6-m8xj6 --force --grace-period=0", "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. pod \"rook-ceph-osd-0-6d77d6c7c6-m8xj6\" force deleted", "oc get pv oc delete pv <failed-pv-name>", "oc delete -n openshift-storage job ocs-osd-removal-USD{osd_id_to_remove}", "job.batch \"ocs-osd-removal-0\" deleted", "oc project openshift-storage", "oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS=USD{osd_id_to_remove} -p FORCE_OSD_REMOVAL=false |oc create -n openshift-storage -f -", "oc get pod -l job-name=ocs-osd-removal-job -n openshift-storage", "oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 | egrep -i 'completed removal'", "2022-05-10 06:50:04.501511 I | cephosd: completed removal of OSD 0", "oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1", "oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 |egrep -i 'pvc|deviceset'", "2021-05-12 14:31:34.666000 I | cephosd: removing the OSD PVC \"ocs-deviceset-xxxx-xxx-xxx-xxx\"", "oc debug node/<node name> chroot /host", "sh-4.4# dmsetup ls| grep <pvc name> ocs-deviceset-xxx-xxx-xxx-xxx-block-dmcrypt (253:0)", "cryptsetup luksClose --debug --verbose ocs-deviceset-xxx-xxx-xxx-xxx-block-dmcrypt", "ps -ef | grep crypt", "kill -9 <PID>", "dmsetup ls", "oc delete -n openshift-storage job ocs-osd-removal-USD{osd_id_to_remove}", "job.batch \"ocs-osd-removal-0\" deleted", "oc get -n openshift-storage pods -l app=rook-ceph-osd", "rook-ceph-osd-0-5f7f4747d4-snshw 1/1 Running 0 4m47s rook-ceph-osd-1-85d99fb95f-2svc7 1/1 Running 0 1d20h rook-ceph-osd-2-6c66cdb977-jp542 1/1 Running 0 1d20h", "oc get -n openshift-storage pvc", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE db-noobaa-db-0 Bound pvc-b44ebb5e-3c67-4000-998e-304752deb5a7 50Gi RWO ocs-storagecluster-ceph-rbd 6d ocs-deviceset-0-data-0-gwb5l Bound pvc-bea680cd-7278-463d-a4f6-3eb5d3d0defe 512Gi RWO standard 94s ocs-deviceset-1-data-0-w9pjm Bound pvc-01aded83-6ef1-42d1-a32e-6ca0964b96d4 512Gi RWO standard 6d ocs-deviceset-2-data-0-7bxcq Bound pvc-5d07cd6c-23cb-468c-89c1-72d07040e308 512Gi RWO standard 6d", "oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/_<OSD-pod-name>_", "oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm", "NODE compute-1", "oc debug node/<node name> chroot /host", "lsblk" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/replacing_storage_devices
Chapter 2. Configure User Access to manage integrations
Chapter 2. Configure User Access to manage integrations To configure cloud and Red Hat integrations, you must be a member of a group with the Cloud Administrator role. This group must be configured in User Access by an Organization Administrator. In the Red Hat Hybrid Cloud Console, an Organization Administrator performs the following high-level steps: Create a User Access group for cloud administrators. Add the Cloud Administrator role to the group. Add members (users with account access) to the group. Organization Administrator The Organization Administrator configures the User Access group for cloud administrators, then adds the Cloud Administrator role and users to the group. Cloud Administrator The Cloud Administrator configures how services interact with cloud and Red Hat integrations. The Cloud Administrator can add, remove, and edit integrations available in the Hybrid Cloud Console. Additional resources To learn more about User Access on the Hybrid Cloud Console, see the User Access Configuration Guide for Role-based Access Control (RBAC). 2.1. Creating and configuring a Cloud Administrator group in the Hybrid Cloud Console An Organization Administrator of a Red Hat account creates a group with the Cloud Administrator role and adds members to the group. The members of this group can manage cloud and Red Hat integrations on the Hybrid Cloud Console. Prerequisites You are logged in to the Hybrid Cloud Console as a user who has Organization Administrator permission. If you are not an Organization Administrator, you must be a member of a group that has the User Access administrator role assigned to it. Procedure Click Settings > Identity & Access Management. Under Identity & Access Management, click User Access > Groups. Click Create group. Enter a group name, for example, Cloud Administrators, and a description, and then click Next. Find Cloud Administrator in the list of roles, select the checkbox next to it, and then click Next. Add members to the group: Search for individual users or filter by username, email, or status. Select the checkbox for the users you want to add to the group, then click Next. Review the details and click Submit to finish creating the group. Verification Verify that your new group is listed on the Groups page. 2.2. Editing or removing a User Access group You can make changes to an existing User Access group in the Red Hat Hybrid Cloud Console, and you can delete groups that are no longer needed. Prerequisites You are logged in to the Red Hat Hybrid Cloud Console and meet one of the following criteria: You are a user with Organization Administrator permissions. You are a member of a group that has the User Access administrator role assigned to it. Procedure Navigate to Red Hat Hybrid Cloud Console > Settings > Identity & Access Management > User Access > Groups. Click the options icon (...) on the far right of the group name row, and then click Edit or Delete. Make and save changes or delete the group.
null
https://docs.redhat.com/en/documentation/red_hat_hybrid_cloud_console/1-latest/html/configuring_cloud_integrations_for_red_hat_services/assembly-config-user-access-integrations_crc-cloud-integrations
Chapter 1. Overview of images
Chapter 1. Overview of images 1.1. Understanding containers, images, and image streams Containers, images, and image streams are important concepts to understand when you set out to create and manage containerized software. An image holds a set of software that is ready to run, while a container is a running instance of a container image. An image stream provides a way of storing different versions of the same basic image. Those different versions are represented by different tags on the same image name. 1.2. Images Containers in OpenShift Container Platform are based on OCI- or Docker-formatted container images . An image is a binary that includes all of the requirements for running a single container, as well as metadata describing its needs and capabilities. You can think of it as a packaging technology. Containers only have access to resources defined in the image unless you give the container additional access when creating it. By deploying the same image in multiple containers across multiple hosts and load balancing between them, OpenShift Container Platform can provide redundancy and horizontal scaling for a service packaged into an image. You can use the podman or docker CLI directly to build images, but OpenShift Container Platform also supplies builder images that assist with creating new images by adding your code or configuration to existing images. Because applications develop over time, a single image name can actually refer to many different versions of the same image. Each different image is referred to uniquely by its hash, a long hexadecimal number such as fd44297e2ddb050ec4f... , which is usually shortened to 12 characters, such as fd44297e2ddb . You can create , manage , and use container images. 1.3. Image registry An image registry is a content server that can store and serve container images. For example: registry.redhat.io A registry contains a collection of one or more image repositories, which contain one or more tagged images. Red Hat provides a registry at registry.redhat.io for subscribers. OpenShift Container Platform can also supply its own OpenShift image registry for managing custom container images. 1.4. Image repository An image repository is a collection of related container images and tags identifying them. For example, the OpenShift Container Platform Jenkins images are in the repository: docker.io/openshift/jenkins-2-centos7 1.5. Image tags An image tag is a label applied to a container image in a repository that distinguishes a specific image from other images in an image stream. Typically, the tag represents a version number of some sort. For example, here :v3.11.59-2 is the tag: registry.access.redhat.com/openshift3/jenkins-2-rhel7:v3.11.59-2 You can add additional tags to an image. For example, an image might be assigned the tags :v3.11.59-2 and :latest . OpenShift Container Platform provides the oc tag command, which is similar to the docker tag command, but operates on image streams instead of directly on images. 1.6. Image IDs An image ID is a SHA (Secure Hash Algorithm) code that can be used to pull an image. A SHA image ID cannot change. A specific SHA identifier always references the exact same container image content. For example: docker.io/openshift/jenkins-2-centos7@sha256:ab312bda324 1.7. Containers The basic units of OpenShift Container Platform applications are called containers. 
Linux container technologies are lightweight mechanisms for isolating running processes so that they are limited to interacting with only their designated resources. The word container is defined as a specific running or paused instance of a container image. Many application instances can be running in containers on a single host without visibility into each other's processes, files, network, and so on. Typically, each container provides a single service, often called a micro-service, such as a web server or a database, though containers can be used for arbitrary workloads. The Linux kernel has been incorporating capabilities for container technologies for years. The Docker project developed a convenient management interface for Linux containers on a host. More recently, the Open Container Initiative has developed open standards for container formats and container runtimes. OpenShift Container Platform and Kubernetes add the ability to orchestrate OCI- and Docker-formatted containers across multi-host installations. Though you do not directly interact with container runtimes when using OpenShift Container Platform, understanding their capabilities and terminology is important for understanding their role in OpenShift Container Platform and how your applications function inside of containers. Tools such as podman can be used to replace docker command-line tools for running and managing containers directly. Using podman, you can experiment with containers separately from OpenShift Container Platform. 1.8. Why use imagestreams An image stream and its associated tags provide an abstraction for referencing container images from within OpenShift Container Platform. The image stream and its tags allow you to see what images are available and ensure that you are using the specific image you need even if the image in the repository changes. Image streams do not contain actual image data, but present a single virtual view of related images, similar to an image repository. You can configure builds and deployments to watch an image stream for notifications when new images are added and react by performing a build or deployment, respectively. For example, if a deployment is using a certain image and a new version of that image is created, a deployment could be automatically performed to pick up the new version of the image. However, if the image stream tag used by the deployment or build is not updated, then even if the container image in the container image registry is updated, the build or deployment continues using the previous, presumably known good image. The source images can be stored in any of the following: OpenShift Container Platform's integrated registry. An external registry, for example registry.redhat.io or quay.io. Other image streams in the OpenShift Container Platform cluster. When you define an object that references an image stream tag, such as a build or deployment configuration, you point to an image stream tag and not the repository. When you build or deploy your application, OpenShift Container Platform queries the repository using the image stream tag to locate the associated ID of the image and uses that exact image. The image stream metadata is stored in the etcd instance along with other cluster information. Using image streams has several significant benefits: You can tag, roll back a tag, and quickly deal with images, without having to re-push using the command line. You can trigger builds and deployments when a new image is pushed to the registry. 
Also, OpenShift Container Platform has generic triggers for other resources, such as Kubernetes objects. You can mark a tag for periodic re-import. If the source image has changed, that change is picked up and reflected in the image stream, which triggers the build or deployment flow, depending upon the build or deployment configuration. You can share images using fine-grained access control and quickly distribute images across your teams. If the source image changes, the image stream tag still points to a known-good version of the image, ensuring that your application does not break unexpectedly. You can configure security around who can view and use the images through permissions on the image stream objects. Users that lack permission to read or list images on the cluster level can still retrieve the images tagged in a project using image streams. You can manage image streams, use image streams with Kubernetes resources, and trigger updates on image stream updates. 1.9. Image stream tags An image stream tag is a named pointer to an image in an image stream. An image stream tag is similar to a container image tag. 1.10. Image stream images An image stream image allows you to retrieve a specific container image from a particular image stream where it is tagged. An image stream image is an API resource object that pulls together some metadata about a particular image SHA identifier. 1.11. Image stream triggers An image stream trigger causes a specific action when an image stream tag changes. For example, importing can cause the value of the tag to change, which causes a trigger to fire when there are deployments, builds, or other resources listening for those. 1.12. How you can use the Cluster Samples Operator During the initial startup, the Operator creates the default samples resource to initiate the creation of the image streams and templates. You can use the Cluster Samples Operator to manage the sample image streams and templates stored in the openshift namespace. As a cluster administrator, you can use the Cluster Samples Operator to: Configure the Operator. Use the Operator with an alternate registry. 1.13. About templates A template is a definition of an object to be replicated. You can use templates to build and deploy configurations. 1.14. How you can use Ruby on Rails As a developer, you can use Ruby on Rails to: Write your application: Set up a database. Create a welcome page. Configure your application for OpenShift Container Platform. Store your application in Git. Deploy your application in OpenShift Container Platform: Create the database service. Create the frontend service. Create a route for your application.
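As a hedged sketch of the concepts above (the image stream name jenkins and the use of the current project are assumptions; the image references are the examples from this chapter): pull an image by its mutable tag, then create an image stream tag that tracks an external image with oc tag, optionally marking it for periodic re-import.
# Pull by tag, then point an image stream tag at an external image.
podman pull registry.access.redhat.com/openshift3/jenkins-2-rhel7:v3.11.59-2
oc tag docker.io/openshift/jenkins-2-centos7:latest jenkins:latest
oc tag --scheduled=true docker.io/openshift/jenkins-2-centos7:latest jenkins:latest   # same tag, marked for periodic re-import
oc describe is/jenkins   # shows each tag and the exact SHA image ID it currently resolves to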
[ "registry.redhat.io", "docker.io/openshift/jenkins-2-centos7", "registry.access.redhat.com/openshift3/jenkins-2-rhel7:v3.11.59-2", "docker.io/openshift/jenkins-2-centos7@sha256:ab312bda324" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/images/overview-of-images
File System Guide
File System Guide Red Hat Ceph Storage 7 Configuring and Mounting Ceph File Systems Red Hat Ceph Storage Documentation Team
[ "cephadm shell", "ceph fs volume create FILESYSTEM_NAME --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 HOST_NAME_3 \"", "ceph fs volume create test --placement=\"2 host01 host02\"", "ceph osd pool create DATA_POOL [ PG_NUM ] ceph osd pool create METADATA_POOL [ PG_NUM ]", "ceph osd pool create cephfs_data 64 ceph osd pool create cephfs_metadata 64", "ceph fs new FILESYSTEM_NAME METADATA_POOL DATA_POOL", "ceph fs new test cephfs_metadata cephfs_data", "ceph orch apply mds FILESYSTEM_NAME --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 HOST_NAME_3 \"", "ceph orch apply mds test --placement=\"2 host01 host02\"", "ceph orch ls", "ceph fs ls ceph fs status", "ceph orch ps --daemon_type= DAEMON_NAME", "ceph orch ps --daemon_type=mds", "touch mds.yaml", "service_type: mds service_id: FILESYSTEM_NAME placement: hosts: - HOST_NAME_1 - HOST_NAME_2 - HOST_NAME_3", "service_type: mds service_id: fs_name placement: hosts: - host01 - host02", "cephadm shell --mount mds.yaml:/var/lib/ceph/mds/mds.yaml", "cd /var/lib/ceph/mds/", "cephadm shell", "cd /var/lib/ceph/mds/", "ceph orch apply -i FILE_NAME .yaml", "ceph orch apply -i mds.yaml", "ceph fs new CEPHFS_NAME METADATA_POOL DATA_POOL", "ceph fs new test metadata_pool data_pool", "ceph orch ls", "ceph orch ps --daemon_type= DAEMON_NAME", "ceph orch ps --daemon_type=mds", "cephadm shell", "ceph config set mon mon_allow_pool_delete true", "ceph fs volume rm FILESYSTEM_NAME --yes-i-really-mean-it", "ceph fs volume rm cephfs-new --yes-i-really-mean-it", "ceph orch ls", "ceph orch rm SERVICE_NAME", "ceph orch rm mds.test", "ceph orch ps", "ceph orch ps", "ceph fs dump dumped fsmap epoch 399 Filesystem 'cephfs01' (27) e399 max_mds 1 in 0 up {0=20384} failed damaged stopped [mds.a{0:20384} state up:active seq 239 addr [v2:127.0.0.1:6854/966242805,v1:127.0.0.1:6855/966242805]] Standby daemons: [mds.b{-1:10420} state up:standby seq 2 addr [v2:127.0.0.1:6856/2745199145,v1:127.0.0.1:6857/2745199145]]", "ceph config set STANDBY_DAEMON mds_join_fs FILE_SYSTEM_NAME", "ceph config set mds.b mds_join_fs cephfs01", "ceph fs dump dumped fsmap epoch 405 e405 Filesystem 'cephfs01' (27) max_mds 1 in 0 up {0=10420} failed damaged stopped [mds.b{0:10420} state up:active seq 274 join_fscid=27 addr [v2:127.0.0.1:6856/2745199145,v1:127.0.0.1:6857/2745199145]] 1 Standby daemons: [mds.a{-1:10720} state up:standby seq 2 addr [v2:127.0.0.1:6854/1340357658,v1:127.0.0.1:6855/1340357658]]", "ceph fs set NAME max_mds NUMBER", "ceph fs set cephfs max_mds 2", "ceph fs status NAME", "ceph fs status cephfs cephfs - 0 clients ====== +------+--------+-------+---------------+-------+-------+--------+--------+ | RANK | STATE | MDS | ACTIVITY | DNS | INOS | DIRS | CAPS | +------+--------+-------+---------------+-------+-------+--------+--------+ | 0 | active | node1 | Reqs: 0 /s | 10 | 12 | 12 | 0 | | 1 | active | node2 | Reqs: 0 /s | 10 | 12 | 12 | 0 | +------+--------+-------+---------------+-------+-------+--------+--------+ +-----------------+----------+-------+-------+ | POOL | TYPE | USED | AVAIL | +-----------------+----------+-------+-------+ | cephfs_metadata | metadata | 4638 | 26.7G | | cephfs_data | data | 0 | 26.7G | +-----------------+----------+-------+-------+ +-------------+ | STANDBY MDS | +-------------+ | node3 | +-------------+", "ceph fs set FS_NAME standby_count_wanted NUMBER", "ceph fs set cephfs standby_count_wanted 2", "ceph fs set FS_NAME allow_standby_replay 1", "ceph fs set cephfs allow_standby_replay 1", "setfattr -n ceph.dir.pin.distributed -v 1 
DIRECTORY_PATH", "setfattr -n ceph.dir.pin.distributed -v 1 dir1/", "setfattr -n ceph.dir.pin.random -v PERCENTAGE_IN_DECIMAL DIRECTORY_PATH", "setfattr -n ceph.dir.pin.random -v 0.01 dir1/", "getfattr -n ceph.dir.pin.random DIRECTORY_PATH getfattr -n ceph.dir.pin.distributed DIRECTORY_PATH", "getfattr -n ceph.dir.pin.distributed dir1/ file: dir1/ ceph.dir.pin.distributed=\"1\" getfattr -n ceph.dir.pin.random dir1/ file: dir1/ ceph.dir.pin.random=\"0.01\"", "ceph tell mds.a get subtrees | jq '.[] | [.dir.path, .auth_first, .export_pin]'", "setfattr -n ceph.dir.pin.distributed -v 0 DIRECTORY_PATH", "setfattr -n ceph.dir.pin.distributed -v 0 dir1/", "getfattr -n ceph.dir.pin.distributed DIRECTORY_PATH", "getfattr -n ceph.dir.pin.distributed dir1/", "setfattr -n ceph.dir.pin -v -1 DIRECTORY_PATH", "setfattr -n ceph.dir.pin -v -1 dir1/", "mkdir -p a/b 1 setfattr -n ceph.dir.pin -v 1 a/ 2 setfattr -n ceph.dir.pin -v 0 a/b 3", "setfattr -n ceph.dir.pin -v RANK PATH_TO_DIRECTORY", "setfattr -n ceph.dir.pin -v 2 cephfs/home", "ceph fs status NAME", "ceph fs status cephfs cephfs - 0 clients +------+--------+-------+---------------+-------+-------+--------+--------+ | RANK | STATE | MDS | ACTIVITY | DNS | INOS | DIRS | CAPS | +------+--------+-------+---------------+-------+-------+--------+--------+ | 0 | active | node1 | Reqs: 0 /s | 10 | 12 | 12 | 0 | | 1 | active | node2 | Reqs: 0 /s | 10 | 12 | 12 | 0 | +------+--------+-------+---------------+-------+-------+--------+--------+ +-----------------+----------+-------+-------+ | POOL | TYPE | USED | AVAIL | +-----------------+----------+-------+-------+ | cephfs_metadata | metadata | 4638 | 26.7G | | cephfs_data | data | 0 | 26.7G | +-----------------+----------+-------+-------+ +-------------+ | Standby MDS | +-------------+ | node3 | +-------------+", "ceph fs set NAME max_mds NUMBER", "ceph fs set cephfs max_mds 1", "ceph fs status NAME", "ceph fs status cephfs cephfs - 0 clients +------+--------+-------+---------------+-------+-------+--------+--------+ | RANK | STATE | MDS | ACTIVITY | DNS | INOS | DIRS | CAPS | +------+--------+-------+---------------+-------+-------+--------+--------+ | 0 | active | node1 | Reqs: 0 /s | 10 | 12 | 12 | 0 | +------+--------+-------+---------------+-------+-------+--------|--------+ +-----------------+----------+-------+-------+ | POOl | TYPE | USED | AVAIL | +-----------------+----------+-------+-------+ | cephfs_metadata | metadata | 4638 | 26.7G | | cephfs_data | data | 0 | 26.7G | +-----------------+----------+-------+-------+ +-------------+ | Standby MDS | +-------------+ | node3 | | node2 | +-------------+", "ceph orch ps | grep mds", "ceph tell MDS_SERVICE_NAME counter dump", "ceph tell mds.cephfs.ceph2-hk-n-0mfqao-node4.isztbk counter dump [ { \"key\": \"mds_client_metrics\", \"value\": [ { \"labels\": { \"fs_name\": \"cephfs\", \"id\": \"24379\" }, \"counters\": { \"num_clients\": 4 } } ] }, { \"key\": \"mds_client_metrics-cephfs\", \"value\": [ { \"labels\": { \"client\": \"client.24413\", \"rank\": \"0\" }, \"counters\": { \"cap_hits\": 56, \"cap_miss\": 9, \"avg_read_latency\": 0E-9, \"avg_write_latency\": 0E-9, \"avg_metadata_latency\": 0E-9, \"dentry_lease_hits\": 2, \"dentry_lease_miss\": 12, \"opened_files\": 0, \"opened_inodes\": 9, \"pinned_icaps\": 4, \"total_inodes\": 9, \"total_read_ops\": 0, \"total_read_size\": 0, \"total_write_ops\": 0, \"total_write_size\": 0 } }, { \"labels\": { \"client\": \"client.24502\", \"rank\": \"0\" }, \"counters\": { \"cap_hits\": 921403, \"cap_miss\": 
102382, \"avg_read_latency\": 0E-9, \"avg_write_latency\": 0E-9, \"avg_metadata_latency\": 0E-9, \"dentry_lease_hits\": 17117, \"dentry_lease_miss\": 204710, \"opened_files\": 0, \"opened_inodes\": 9, \"pinned_icaps\": 7, \"total_inodes\": 9, \"total_read_ops\": 0, \"total_read_size\": 0, \"total_write_ops\": 1, \"total_write_size\": 132 } }, { \"labels\": { \"client\": \"client.24508\", \"rank\": \"0\" }, \"counters\": { \"cap_hits\": 928694, \"cap_miss\": 103183, \"avg_read_latency\": 0E-9, \"avg_write_latency\": 0E-9, \"avg_metadata_latency\": 0E-9, \"dentry_lease_hits\": 17217, \"dentry_lease_miss\": 206348, \"opened_files\": 0, \"opened_inodes\": 9, \"pinned_icaps\": 7, \"total_inodes\": 9, \"total_read_ops\": 0, \"total_read_size\": 0, \"total_write_ops\": 1, \"total_write_size\": 132 } }, { \"labels\": { \"client\": \"client.24520\", \"rank\": \"0\" }, \"counters\": { \"cap_hits\": 56, \"cap_miss\": 9, \"avg_read_latency\": 0E-9, \"avg_write_latency\": 0E-9, \"avg_metadata_latency\": 0E-9, \"dentry_lease_hits\": 2, \"dentry_lease_miss\": 12, \"opened_files\": 0, \"opened_inodes\": 9, \"pinned_icaps\": 4, \"total_inodes\": 9, \"total_read_ops\": 0, \"total_read_size\": 0, \"total_write_ops\": 0, \"total_write_size\": 0 } } ] } ]", "client.0 key: AQAz7EVWygILFRAAdIcuJ10opU/JKyfFmxhuaw== caps: [mds] allow rwp caps: [mon] allow r caps: [osd] allow rw tag cephfs data=cephfs_a client.1 key: AQAz7EVWygILFRAAdIcuJ11opU/JKyfFmxhuaw== caps: [mds] allow rw caps: [mon] allow r caps: [osd] allow rw tag cephfs data=cephfs_a", "client.0 key: AQAz7EVWygILFRAAdIcuJ10opU/JKyfFmxhuaw== caps: [mds] allow rw, allow rws path=/temp caps: [mon] allow r caps: [osd] allow rw tag cephfs data=cephfs_a", "client.0 key: AQAz7EVWygILFRAAdIcuJ10opU/JKyfFmxhuaw== caps: [mds] allow r network 10.0.0.0/8, allow rw path=/bar network 10.0.0.0/8 caps: [mon] allow r network 10.0.0.0/8 caps: [osd] allow rw tag cephfs data=cephfs_a network 10.0.0.0/8", "subscription-manager repos --enable=rhceph-6-tools-for-rhel-8-x86_64-rpms", "subscription-manager repos --enable=rhceph-6-tools-for-rhel-9-x86_64-rpms", "dnf install ceph-fuse", "scp root@ MONITOR_NODE_NAME :/etc/ceph/ KEYRING_FILE /etc/ceph/", "scp [email protected]:/etc/ceph/ceph.client.1.keyring /etc/ceph/", "scp root@ MONITOR_NODE_NAME :/etc/ceph/ceph.conf /etc/ceph/ceph.conf", "scp [email protected]:/etc/ceph/ceph.conf /etc/ceph/ceph.conf", "chmod 644 /etc/ceph/ceph.conf", "ceph fs volume create FILE_SYSTEM_NAME", "ceph fs volume create cephfs01", "ceph fs authorize FILE_SYSTEM_NAME CLIENT_NAME DIRECTORY PERMISSIONS", "ceph fs authorize cephfs01 client.1 / rw [client.1] key = BQAmthpf81M+JhAAiHDYQkMiCq3x+J0n9e8REK== ceph auth get client.1 exported keyring for client.1 [client.1] key = BQAmthpf81M+JhAAiHDYQkMiCq3x+J0n9e8REK== caps mds = \"allow rw fsname=cephfs01\" caps mon = \"allow r fsname=cephfs01\" caps osd = \"allow rw tag cephfs data=cephfs01\"", "ceph fs authorize cephfs01 client.1 / rw root_squash /volumes rw [client.1] key = BQAmthpf81M+JhAAiHDYQkMiCq3x+J0n9e8REK== ceph auth get client.1 [client.1] key = BQAmthpf81M+JhAAiHDYQkMiCq3x+J0n9e8REK== caps mds = \"allow rw fsname=cephfs01 root_squash, allow rw fsname=cephfs01 path=/volumes\" caps mon = \"allow r fsname=cephfs01\" caps osd = \"allow rw tag cephfs data=cephfs01\"", "ceph auth get CLIENT_NAME > OUTPUT_FILE_NAME scp OUTPUT_FILE_NAME TARGET_NODE_NAME :/etc/ceph", "ceph auth get client.1 > ceph.client.1.keyring exported keyring for client.1 scp ceph.client.1.keyring client:/etc/ceph root@client's password: 
ceph.client.1.keyring 100% 178 333.0KB/s 00:00", "mkdir PATH_TO_NEW_DIRECTORY_NAME", "mkdir /mnt/mycephfs", "ceph-fuse PATH_TO_NEW_DIRECTORY_NAME -n CEPH_USER_NAME --client-fs=_FILE_SYSTEM_NAME", "ceph-fuse /mnt/mycephfs/ -n client.1 --client-fs=cephfs01 ceph-fuse[555001]: starting ceph client 2022-05-09T07:33:27.158+0000 7f11feb81200 -1 init, newargv = 0x55fc4269d5d0 newargc=15 ceph-fuse[555001]: starting fuse", "ceph osd pool create DATA_POOL_NAME erasure", "ceph osd pool create cephfs-data-ec01 erasure pool 'cephfs-data-ec01' created", "ceph osd lspools", "ceph osd pool set DATA_POOL_NAME allow_ec_overwrites true", "ceph osd pool set cephfs-data-ec01 allow_ec_overwrites true set pool 15 allow_ec_overwrites to true", "ceph fs status FILE_SYSTEM_NAME", "ceph fs status cephfs-ec cephfs-ec - 14 clients ========= RANK STATE MDS ACTIVITY DNS INOS DIRS CAPS 0 active cephfs-ec.example.ooymyq Reqs: 0 /s 8231 8233 891 921 POOL TYPE USED AVAIL cephfs-metadata-ec metadata 787M 8274G cephfs-data-ec data 2360G 12.1T STANDBY MDS cephfs-ec.example.irsrql cephfs-ec.example.cauuaj", "ceph fs add_data_pool FILE_SYSTEM_NAME DATA_POOL_NAME", "ceph fs add_data_pool cephfs-ec cephfs-data-ec01", "ceph fs status FILE_SYSTEM_NAME", "ceph fs status cephfs-ec cephfs-ec - 14 clients ========= RANK STATE MDS ACTIVITY DNS INOS DIRS CAPS 0 active cephfs-ec.example.ooymyq Reqs: 0 /s 8231 8233 891 921 POOL TYPE USED AVAIL cephfs-metadata-ec metadata 787M 8274G cephfs-data-ec data 2360G 12.1T cephfs-data-ec01 data 0 12.1T STANDBY MDS cephfs-ec.example.irsrql cephfs-ec.example.cauuaj", "mkdir PATH_TO_DIRECTORY setfattr -n ceph.dir.layout.pool -v DATA_POOL_NAME PATH_TO_DIRECTORY", "mkdir /mnt/cephfs/newdir setfattr -n ceph.dir.layout.pool -v cephfs-data-ec01 /mnt/cephfs/newdir", "cephadm shell", "ceph fs authorize FILE_SYSTEM_NAME client. CLIENT_NAME / DIRECTORY CAPABILITY [/ DIRECTORY CAPABILITY ] PERMISSIONS", "ceph fs authorize cephfs_a client.1 / r /temp rw client.1 key = AQBSdFhcGZFUDRAAcKhG9Cl2HPiDMMRv4DC43A==", "ceph fs authorize cephfs_a client.1 /temp rw", "ceph auth get client. ID", "ceph auth get client.1 client.1 key = AQBSdFhcGZFUDRAAcKhG9Cl2HPiDMMRv4DC43A== caps mds = \"allow r, allow rw path=/temp\" caps mon = \"allow r\" caps osd = \"allow rw tag cephfs data=cephfs_a\"", "ceph auth get client. ID -o ceph.client. ID .keyring", "ceph auth get client.1 -o ceph.client.1.keyring exported keyring for client.1", "scp /ceph.client. ID .keyring root@ CLIENT_NODE_NAME :/etc/ceph/ceph.client. ID .keyring", "scp /ceph.client.1.keyring root@client01:/etc/ceph/ceph.client.1.keyring", "chmod 644 ceph.client. ID .keyring", "chmod 644 /etc/ceph/ceph.client.1.keyring", "subscription-manager repos --enable=rhceph-6-tools-for-rhel-9-x86_64-rpms", "dnf install ceph-common", "cephadm shell", "scp /ceph.client. ID .keyring root@ CLIENT_NODE_NAME :/etc/ceph/ceph.client. 
ID .keyring", "scp /ceph.client.1.keyring root@client01:/etc/ceph/ceph.client.1.keyring", "scp /etc/ceph/ceph.conf root@ CLIENT_NODE_NAME :/etc/ceph/ceph.conf", "scp /etc/ceph/ceph.conf root@client01:/etc/ceph/ceph.conf", "chmod 644 /etc/ceph/ceph.conf", "mkdir -p MOUNT_POINT", "mkdir -p /mnt/cephfs", "mount -t ceph MONITOR-1_NAME :6789, MONITOR-2_NAME :6789, MONITOR-3_NAME :6789:/ MOUNT_POINT -o name= CLIENT_ID ,fs= FILE_SYSTEM_NAME", "mount -t ceph mon1:6789,mon2:6789,mon3:6789:/ /mnt/cephfs -o name=1,fs=cephfs01", "mount -t ceph mon1:6789,mon2:6789,mon3:6789:/ /mnt/cephfs -o nowsync,name=1,fs=cephfs01", "stat -f MOUNT_POINT", "stat -f /mnt/cephfs", "mkdir -p MOUNT_POINT", "mkdir -p /mnt/cephfs", "#DEVICE PATH TYPE OPTIONS MON_0_HOST : PORT , MOUNT_POINT ceph name= CLIENT_ID , MON_1_HOST : PORT , ceph.client_mountpoint=/ VOL / SUB_VOL_GROUP / SUB_VOL / UID_SUB_VOL , fs= FILE_SYSTEM_NAME , MON_2_HOST : PORT :/q[_VOL_]/ SUB_VOL / UID_SUB_VOL , [ ADDITIONAL_OPTIONS ]", "#DEVICE PATH TYPE OPTIONS DUMP FSCK mon1:6789, /mnt/cephfs ceph name=1, 0 0 mon2:6789, ceph.client_mountpoint=/my_vol/my_sub_vol_group/my_sub_vol/0, mon3:6789:/ fs=cephfs01, _netdev,noatime", "subscription-manager repos --enable=6-tools-for-rhel-8-x86_64-rpms", "subscription-manager repos --enable=6-tools-for-rhel-9-x86_64-rpms", "dnf install ceph-fuse", "cephadm shell", "scp /ceph.client. ID .keyring root@ CLIENT_NODE_NAME :/etc/ceph/ceph.client. ID .keyring", "scp /ceph.client.1.keyring root@client01:/etc/ceph/ceph.client.1.keyring", "scp /etc/ceph/ceph.conf root@ CLIENT_NODE_NAME :/etc/ceph/ceph.conf", "scp /etc/ceph/ceph.conf root@client01:/etc/ceph/ceph.conf", "chmod 644 /etc/ceph/ceph.conf", "mkdir PATH_TO_MOUNT_POINT", "mkdir /mnt/mycephfs", "ceph-fuse -n client. CLIENT_ID --client_fs FILE_SYSTEM_NAME MOUNT_POINT", "ceph-fuse -n client.1 --client_fs cephfs01 /mnt/mycephfs", "ceph-fuse -n client.1 --keyring=/etc/ceph/client.1.keyring /mnt/mycephfs", "ceph-fuse -n client. 
CLIENT_ID MOUNT_POINT -r PATH", "ceph-fuse -n client.1 /mnt/cephfs -r /home/cephfs", "ceph-fuse -n client.1 /mnt/cephfs --client_reconnect_stale=true", "stat -f MOUNT_POINT", "stat -f /mnt/cephfs", "mkdir PATH_TO_MOUNT_POINT", "mkdir /mnt/mycephfs", "#DEVICE PATH TYPE OPTIONS DUMP FSCK HOST_NAME : PORT , MOUNT_POINT fuse.ceph ceph.id= CLIENT_ID , 0 0 HOST_NAME : PORT , ceph.client_mountpoint=/ VOL / SUB_VOL_GROUP / SUB_VOL / UID_SUB_VOL , HOST_NAME : PORT :/ ceph.client_fs= FILE_SYSTEM_NAME ,ceph.name= USERNAME ,ceph.keyring=/etc/ceph/ KEYRING_FILE , [ ADDITIONAL_OPTIONS ]", "#DEVICE PATH TYPE OPTIONS DUMP FSCK mon1:6789, /mnt/mycephfs fuse.ceph ceph.id=1, 0 0 mon2:6789, ceph.client_mountpoint=/my_vol/my_sub_vol_group/my_sub_vol/0, mon3:6789:/ ceph.client_fs=cephfs01,ceph.name=client.1,ceph.keyring=/etc/ceph/client1.keyring, _netdev,defaults", "ceph fs volume create VOLUME_NAME", "ceph fs volume create cephfs", "ceph fs volume ls", "ceph fs volume info VOLUME_NAME", "ceph fs volume info cephfs { \"mon_addrs\": [ \"192.168.1.7:40977\", ], \"pending_subvolume_deletions\": 0, \"pools\": { \"data\": [ { \"avail\": 106288709632, \"name\": \"cephfs.cephfs.data\", \"used\": 4096 } ], \"metadata\": [ { \"avail\": 106288709632, \"name\": \"cephfs.cephfs.meta\", \"used\": 155648 } ] }, \"used_size\": 0 }", "ceph config set mon mon_allow_pool_delete true", "ceph fs volume rm VOLUME_NAME [--yes-i-really-mean-it]", "ceph fs volume rm cephfs --yes-i-really-mean-it", "ceph fs subvolumegroup create VOLUME_NAME GROUP_NAME [--pool_layout DATA_POOL_NAME --uid UID --gid GID --mode OCTAL_MODE ]", "ceph fs subvolumegroup create cephfs subgroup0", "ceph fs subvolumegroup create VOLUME_NAME GROUP_NAME [--size SIZE_IN_BYTES ] [--pool_layout DATA_POOL_NAME ] [--uid UID ] [--gid GID ] [--mode OCTAL_MODE ]", "ceph fs subvolumegroup create cephfs subvolgroup_2 10737418240", "ceph fs subvolumegroup resize VOLUME_NAME GROUP_NAME new_size [--no_shrink]", "ceph fs subvolumegroup resize cephfs subvolgroup_2 20737418240 [ { \"bytes_used\": 10768679044 }, { \"bytes_quota\": 20737418240 }, { \"bytes_pcent\": \"51.93\" } ]", "ceph fs subvolumegroup info VOLUME_NAME GROUP_NAME", "ceph fs subvolumegroup info cephfs subvolgroup_2 { \"atime\": \"2022-10-05 18:00:39\", \"bytes_pcent\": \"51.85\", \"bytes_quota\": 20768679043, \"bytes_used\": 10768679044, \"created_at\": \"2022-10-05 18:00:39\", \"ctime\": \"2022-10-05 18:21:26\", \"data_pool\": \"cephfs.cephfs.data\", \"gid\": 0, \"mode\": 16877, \"mon_addrs\": [ \"60.221.178.236:1221\", \"205.64.75.112:1221\", \"20.209.241.242:1221\" ], \"mtime\": \"2022-10-05 18:01:25\", \"uid\": 0 }", "ceph fs subvolumegroup ls VOLUME_NAME", "ceph fs subvolumegroup ls cephfs", "ceph fs subvolumegroup getpath VOLUME_NAME GROUP_NAME", "ceph fs subvolumegroup getpath cephfs subgroup0", "ceph fs subvolumegroup snapshot ls VOLUME_NAME GROUP_NAME", "ceph fs subvolumegroup snapshot ls cephfs subgroup0", "ceph fs subvolumegroup snapshot rm VOLUME_NAME GROUP_NAME SNAP_NAME [--force]", "ceph fs subvolumegroup snapshot rm cephfs subgroup0 snap0 --force", "ceph fs subvolumegroup rm VOLUME_NAME GROUP_NAME [--force]", "ceph fs subvolumegroup rm cephfs subgroup0 --force", "ceph fs subvolume create VOLUME_NAME SUBVOLUME_NAME [--size SIZE_IN_BYTES --group_name SUBVOLUME_GROUP_NAME --pool_layout DATA_POOL_NAME --uid _UID --gid GID --mode OCTAL_MODE ] [--namespace-isolated]", "ceph fs subvolume create cephfs sub0 --group_name subgroup0 --namespace-isolated", "ceph fs subvolume ls VOLUME_NAME [--group_name 
SUBVOLUME_GROUP_NAME ]", "ceph fs subvolume ls cephfs --group_name subgroup0", "ceph fs subvolume resize VOLUME_NAME SUBVOLUME_NAME NEW_SIZE [--group_name SUBVOLUME_GROUP_NAME ] [--no_shrink]", "ceph fs subvolume resize cephfs sub0 1024000000 --group_name subgroup0 --no_shrink", "ceph fs subvolume getpath VOLUME_NAME SUBVOLUME_NAME [--group_name _SUBVOLUME_GROUP_NAME ]", "ceph fs subvolume getpath cephfs sub0 --group_name subgroup0", "ceph fs subvolume info VOLUME_NAME SUBVOLUME_NAME [--group_name SUBVOLUME_GROUP_NAME ]", "ceph fs subvolume info cephfs sub0 --group_name subgroup0", "ceph fs subvolume info cephfs sub0 { \"atime\": \"2023-07-14 08:52:46\", \"bytes_pcent\": \"0.00\", \"bytes_quota\": 1024000000, \"bytes_used\": 0, \"created_at\": \"2023-07-14 08:52:46\", \"ctime\": \"2023-07-14 08:53:54\", \"data_pool\": \"cephfs.cephfs.data\", \"features\": [ \"snapshot-clone\", \"snapshot-autoprotect\", \"snapshot-retention\" ], \"flavor\": \"2\", \"gid\": 0, \"mode\": 16877, \"mon_addrs\": [ \"10.0.208.172:6789\", \"10.0.211.197:6789\", \"10.0.209.212:6789\" ], \"mtime\": \"2023-07-14 08:52:46\", \"path\": \"/volumes/_nogroup/sub0/834c5cbc-f5db-4481-80a3-aca92ff0e7f3\", \"pool_namespace\": \"\", \"state\": \"complete\", \"type\": \"subvolume\", \"uid\": 0 }", "ceph auth get CLIENT_NAME", "ceph auth get client.0 [client.0] key = AQAz7EVWygILFRAAdIcuJ12opU/JKyfFmxhuaw== caps mds = \"allow rw, allow rws path=/bar\" 1 caps mon = \"allow r\" caps osd = \"allow rw tag cephfs data=cephfs_a\" 2", "ceph fs subvolume snapshot create VOLUME_NAME SUBVOLUME_NAME SNAP_NAME [--group_name GROUP_NAME ]", "ceph fs subvolume snapshot create cephfs sub0 snap0 --group_name subgroup0", "CLIENT_NAME key = AQAz7EVWygILFRAAdIcuJ12opU/JKyfFmxhuaw== caps mds = allow rw, allow rws path= DIRECTORY_PATH caps mon = allow r caps osd = allow rw tag cephfs data= DIRECTORY_NAME", "[client.0] key = AQAz7EVWygILFRAAdIcuJ12opU/JKyfFmxhuaw== caps mds = \"allow rw, allow rws path=/bar\" caps mon = \"allow r\" caps osd = \"allow rw tag cephfs data=cephfs_a\"", "ceph fs volume create VOLUME_NAME", "ceph fs volume create cephfs", "ceph fs subvolumegroup create VOLUME_NAME GROUP_NAME [--pool_layout DATA_POOL_NAME --uid UID --gid GID --mode OCTAL_MODE ]", "ceph fs subvolumegroup create cephfs subgroup0", "ceph fs subvolume create VOLUME_NAME SUBVOLUME_NAME [--size SIZE_IN_BYTES --group_name SUBVOLUME_GROUP_NAME --pool_layout DATA_POOL_NAME --uid _UID --gid GID --mode OCTAL_MODE ]", "ceph fs subvolume create cephfs sub0 --group_name subgroup0", "ceph fs subvolume snapshot create VOLUME_NAME _SUBVOLUME_NAME SNAP_NAME [--group_name SUBVOLUME_GROUP_NAME ]", "ceph fs subvolume snapshot create cephfs sub0 snap0 --group_name subgroup0", "ceph fs subvolume snapshot clone VOLUME_NAME SUBVOLUME_NAME SNAP_NAME TARGET_CLONE_NAME", "ceph fs subvolume snapshot clone cephfs sub0 snap0 clone0", "ceph fs subvolume snapshot clone VOLUME_NAME SUBVOLUME_NAME SNAP_NAME TARGET_CLONE_NAME --group_name SUBVOLUME_GROUP_NAME", "ceph fs subvolume snapshot clone cephfs sub0 snap0 clone0 --group_name subgroup0", "ceph fs subvolume snapshot clone VOLUME_NAME SUBVOLUME_NAME SNAP_NAME TARGET_CLONE_NAME --target_group_name SUBVOLUME_GROUP_NAME", "ceph fs subvolume snapshot clone cephfs sub0 snap0 clone0 --target_group_name subgroup1", "ceph fs clone status VOLUME_NAME CLONE_NAME [--group_name TARGET_GROUP_NAME ]", "ceph fs clone status cephfs clone0 --group_name subgroup1 { \"status\": { \"state\": \"complete\" } }", "ceph fs subvolume snapshot ls VOLUME_NAME 
SUBVOLUME_NAME [--group_name SUBVOLUME_GROUP_NAME ]", "ceph fs subvolume snapshot ls cephfs sub0 --group_name subgroup0", "ceph fs subvolume snapshot info VOLUME_NAME SUBVOLUME_NAME SNAP_NAME [--group_name SUBVOLUME_GROUP_NAME ]", "ceph fs subvolume snapshot info cephfs sub0 snap0 --group_name subgroup0", "{ \"created_at\": \"2022-05-09 06:18:47.330682\", \"data_pool\": \"cephfs_data\", \"has_pending_clones\": \"no\", \"size\": 0 }", "ceph fs subvolume rm VOLUME_NAME SUBVOLUME_NAME [--group_name SUBVOLUME_GROUP_NAME ] [--force] [--retain-snapshots]", "ceph fs subvolume rm cephfs sub0 --group_name subgroup0 --retain-snapshots", "ceph fs subvolume snapshot clone VOLUME_NAME DELETED_SUBVOLUME RETAINED_SNAPSHOT NEW_SUBVOLUME --group_name SUBVOLUME_GROUP_NAME --target_group_name SUBVOLUME_TARGET_GROUP_NAME", "ceph fs subvolume snapshot clone cephfs sub0 snap0 sub1 --group_name subgroup0 --target_group_name subgroup0", "ceph fs subvolume snapshot rm VOLUME_NAME SUBVOLUME_NAME SNAP_NAME [--group_name GROUP_NAME --force]", "ceph fs subvolume snapshot rm cephfs sub0 snap0 --group_name subgroup0 --force", "ceph fs subvolume metadata set VOLUME_NAME SUBVOLUME_NAME KEY_NAME VALUE [--group_name SUBVOLUME_GROUP_NAME ]", "ceph fs subvolume metadata set cephfs sub0 test_meta cluster --group_name subgroup0", "ceph fs subvolume metadata set cephfs sub0 \"test meta\" cluster --group_name subgroup0", "ceph fs subvolume metadata set cephfs sub0 \"test_meta\" cluster2 --group_name subgroup0", "ceph fs subvolume metadata get VOLUME_NAME SUBVOLUME_NAME KEY_NAME [--group_name SUBVOLUME_GROUP_NAME ]", "ceph fs subvolume metadata get cephfs sub0 test_meta --group_name subgroup0 cluster", "ceph fs subvolume metadata ls VOLUME_NAME SUBVOLUME_NAME [--group_name SUBVOLUME_GROUP_NAME ]", "ceph fs subvolume metadata ls cephfs sub0 { \"test_meta\": \"cluster\" }", "ceph fs subvolume metadata rm VOLUME_NAME SUBVOLUME_NAME KEY_NAME [--group_name SUBVOLUME_GROUP_NAME ]", "ceph fs subvolume metadata rm cephfs sub0 test_meta --group_name subgroup0", "ceph fs subvolume metadata ls cephfs sub0 {}", "subscription-manager repos --enable=rhceph-7-tools-for-rhel-9-x86_64-rpms", "dnf install cephfs-top", "ceph mgr module enable stats", "ceph auth get-or-create client.fstop mon 'allow r' mds 'allow r' osd 'allow r' mgr 'allow r' > /etc/ceph/ceph.client.fstop.keyring", "cephfs-top cephfs-top - Wed Nov 30 15:26:05 2022 All Filesystem Info Total Client(s): 4 - 3 FUSE, 1 kclient, 0 libcephfs COMMANDS: m - select a filesystem | s - sort menu | l - limit number of clients | r - reset to default | q - quit client_id mount_root chit(%) dlease(%) ofiles oicaps oinodes rtio(MB) raio(MB) rsp(MB/s) wtio(MB) waio(MB) wsp(MB/s) rlatavg(ms) rlatsd(ms) wlatavg(ms) wlatsd(ms) mlatavg(ms) mlatsd(ms) mount_point@host/addr Filesystem: cephfs1 - 2 client(s) 4500 / 100.0 100.0 0 751 0 0.0 0.0 0.0 578.13 0.03 0.0 N/A N/A N/A N/A N/A N/A N/A@example/192.168.1.4 4501 / 100.0 0.0 0 1 0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.41 0.0 /mnt/cephfs2@example/192.168.1.4 Filesystem: cephfs2 - 2 client(s) 4512 / 100.0 0.0 0 1 0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.4 0.0 /mnt/cephfs3@example/192.168.1.4 4518 / 100.0 0.0 0 1 0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.52 0.0 /mnt/cephfs4@example/192.168.1.4", "m Filesystems Press \"q\" to go back to home (all filesystem info) screen cephfs01 cephfs02 q cephfs-top - Thu Oct 20 07:29:35 2022 Total Client(s): 3 - 2 FUSE, 1 kclient, 0 libcephfs", "cephfs-top --selftest selftest ok", "ceph mgr module enable 
mds_autoscaler", "umount MOUNT_POINT", "umount /mnt/cephfs", "fusermount -u MOUNT_POINT", "fusermount -u /mnt/cephfs", "ceph fs authorize FILE_SYSTEM_NAME client. CLIENT_NAME / DIRECTORY CAPABILITY [/ DIRECTORY CAPABILITY ]", "[user@client ~]USD ceph fs authorize cephfs_a client.1 /temp rwp client.1 key: AQBSdFhcGZFUDRAAcKhG9Cl2HPiDMMRv4DC43A== caps: [mds] allow r, allow rwp path=/temp caps: [mon] allow r caps: [osd] allow rw tag cephfs data=cephfs_a", "setfattr -n ceph.dir.pin -v RANK DIRECTORY", "[user@client ~]USD setfattr -n ceph.dir.pin -v 2 /temp", "setfattr -n ceph.dir.pin -v -1 DIRECTORY", "[user@client ~]USD setfattr -n ceph.dir.pin -v -1 /home/ceph-user", "ceph osd pool create POOL_NAME", "ceph osd pool create cephfs_data_ssd pool 'cephfs_data_ssd' created", "ceph fs add_data_pool FS_NAME POOL_NAME", "ceph fs add_data_pool cephfs cephfs_data_ssd added data pool 6 to fsmap", "ceph fs ls name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data cephfs_data_ssd]", "ceph fs rm_data_pool FS_NAME POOL_NAME", "ceph fs rm_data_pool cephfs cephfs_data_ssd removed data pool 6 from fsmap", "ceph fs ls name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs.cephfs.data]", "ceph fs set FS_NAME down true", "ceph fs set cephfs down true", "ceph fs set FS_NAME down false", "ceph fs set cephfs down false", "ceph fs fail FS_NAME", "ceph fs fail cephfs", "ceph fs set FS_NAME joinable true", "ceph fs set cephfs joinable true cephfs marked joinable; MDS may join as newly active.", "ceph fs set FS_NAME down true", "ceph fs set cephfs down true cephfs marked down.", "ceph fs status", "ceph fs status cephfs - 0 clients ====== +-------------------+----------+-------+-------+ | POOL | TYPE | USED | AVAIL | +-----------------+------------+-------+-------+ |cephfs.cephfs.meta | metadata | 31.5M | 52.6G| |cephfs.cephfs.data | data | 0 | 52.6G| +-----------------+----------+-------+---------+ STANDBY MDS cephfs.ceph-host01 cephfs.ceph-host02 cephfs.ceph-host03", "ceph fs rm FS_NAME --yes-i-really-mean-it", "ceph fs rm cephfs --yes-i-really-mean-it", "ceph fs ls", "ceph mds fail MDS_NAME", "ceph mds fail example01", "fs required_client_features FILE_SYSTEM_NAME add FEATURE_NAME fs required_client_features FILE_SYSTEM_NAME rm FEATURE_NAME", "ceph tell DAEMON_NAME client ls", "ceph tell mds.0 client ls [ { \"id\": 4305, \"num_leases\": 0, \"num_caps\": 3, \"state\": \"open\", \"replay_requests\": 0, \"completed_requests\": 0, \"reconnecting\": false, \"inst\": \"client.4305 172.21.9.34:0/422650892\", \"client_metadata\": { \"ceph_sha1\": \"79f0367338897c8c6d9805eb8c9ad24af0dcd9c7\", \"ceph_version\": \"ceph version 16.2.8-65.el8cp (79f0367338897c8c6d9805eb8c9ad24af0dcd9c7)\", \"entity_id\": \"0\", \"hostname\": \"senta04\", \"mount_point\": \"/tmp/tmpcMpF1b/mnt.0\", \"pid\": \"29377\", \"root\": \"/\" } } ]", "ceph tell DAEMON_NAME client evict id= ID_NUMBER", "ceph tell mds.0 client evict id=4305", "ceph osd blocklist ls listed 1 entries 127.0.0.1:0/3710147553 2022-05-09 11:32:24.716146", "ceph osd blocklist rm CLIENT_NAME_OR_IP_ADDR", "ceph osd blocklist rm 127.0.0.1:0/3710147553 un-blocklisting 127.0.0.1:0/3710147553", "recover_session=clean", "client_reconnect_stale=true", "getfattr -n ceph.quota.max_bytes DIRECTORY", "getfattr -n ceph.quota.max_bytes /mnt/cephfs/ getfattr: Removing leading '/' from absolute path names file: mnt/cephfs/ ceph.quota.max_bytes=\"100000000\"", "getfattr -n ceph.quota.max_files DIRECTORY", "getfattr -n ceph.quota.max_files /mnt/cephfs/ getfattr: Removing 
leading '/' from absolute path names file: mnt/cephfs/ ceph.quota.max_files=\"10000\"", "setfattr -n ceph.quota.max_bytes -v LIMIT_VALUE DIRECTORY", "setfattr -n ceph.quota.max_bytes -v 2T /cephfs/", "setfattr -n ceph.quota.max_files -v LIMIT_VALUE DIRECTORY", "setfattr -n ceph.quota.max_files -v 10000 /cephfs/", "setfattr -n ceph.quota.max_bytes -v 0 DIRECTORY", "setfattr -n ceph.quota.max_bytes -v 0 /mnt/cephfs/", "setfattr -n ceph.quota.max_files -v 0 DIRECTORY", "setfattr -n ceph.quota.max_files -v 0 /mnt/cephfs/", "setfattr -n ceph. TYPE .layout. FIELD -v VALUE PATH", "setfattr -n ceph.file.layout.stripe_unit -v 1048576 test", "getfattr -n ceph. TYPE .layout PATH", "getfattr -n ceph.dir.layout /home/test ceph.dir.layout=\"stripe_unit=4194304 stripe_count=2 object_size=4194304 pool=cephfs_data\"", "getfattr -n ceph. TYPE .layout. FIELD _PATH", "getfattr -n ceph.file.layout.pool test ceph.file.layout.pool=\"cephfs_data\"", "setfattr -x ceph.dir.layout DIRECTORY_PATH", "[user@client ~]USD setfattr -x ceph.dir.layout /home/cephfs", "setfattr -x ceph.dir.layout.pool_namespace DIRECTORY_PATH", "[user@client ~]USD setfattr -x ceph.dir.layout.pool_namespace /home/cephfs", "cephadm shell", "ceph fs set FILE_SYSTEM_NAME allow_new_snaps true", "ceph fs set cephfs01 allow_new_snaps true", "mkdir NEW_DIRECTORY_PATH", "mkdir /.snap/new-snaps", "rmdir NEW_DIRECTORY_PATH", "rmdir /.snap/new-snaps", "cephadm shell", "ceph mgr module enable snap_schedule", "cephadm shell", "ceph fs snap-schedule add FILE_SYSTEM_VOLUME_PATH REPEAT_INTERVAL [ START_TIME ] --fs CEPH_FILE_SYSTEM_NAME", "ceph fs snap-schedule add /cephfs_kernelf739cwtus2/pmo9axbwsi 1h 2022-06-27T21:50:00 --fs mycephfs", "ceph fs snap-schedule retention add FILE_SYSTEM_VOLUME_PATH [ COUNT_TIME_PERIOD_PAIR ] TIME_PERIOD COUNT", "ceph fs snap-schedule retention add /cephfs h 14 1 ceph fs snap-schedule retention add /cephfs d 4 2 ceph fs snap-schedule retention add /cephfs 14h4w 3", "ceph fs snap-schedule list FILE_SYSTEM_VOLUME_PATH [--format=plain|json] [--recursive=true]", "ceph fs snap-schedule list /cephfs --recursive=true", "ceph fs snap-schedule status FILE_SYSTEM_VOLUME_PATH [--format=plain|json]", "ceph fs snap-schedule status /cephfs --format=json", "ceph fs subvolume getpath VOLUME_NAME SUBVOLUME_NAME SUBVOLUME_GROUP_NAME", "ceph fs subvolume getpath cephfs subvol_1 subvolgroup_1", "ceph fs snap-schedule add SUBVOLUME_DIR_PATH SNAP_SCHEDULE [ START_TIME ] --fs CEPH_FILE_SYSTEM_NAME --subvol SUBVOLUME_NAME", "ceph fs snap-schedule add /cephfs_kernelf739cwtus2/pmo9axbwsi 1h 2022-06-27T21:50:00 --fs cephfs --subvol subvol_1 Schedule set for path /..", "ceph fs snap-schedule add /.. SNAP_SCHEDULE [ START_TIME] --fs CEPH_FILE_SYSTEM_NAME --subvol _SUBVOLUME_NAME", "ceph fs snap-schedule add - 2M --subvol sv_non_def_1", "ceph fs snap-schedule add /.. SNAP_SCHEDULE [ START_TIME] --fs CEPH_FILE_SYSTEM_NAME --subvol _SUBVOLUME_NAME --group NON_DEFAULT_SUBVOLGROUP_NAME", "ceph fs snap-schedule add - 2M --fs cephfs --subvol sv_non_def_1 --group svg1", "ceph fs snap-schedule retention add SUBVOLUME_DIR_PATH [ COUNT_TIME_PERIOD_PAIR ] TIME_PERIOD COUNT", "ceph fs snap-schedule retention add /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. h 14 1 ceph fs snap-schedule retention add /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. d 4 2 ceph fs snap-schedule retention add /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. 
14h4w 3 Retention added to path /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/..", "ceph fs snap-schedule retention add / [ COUNT_TIME_PERIOD_PAIR ] TIME_PERIOD_COUNT --fs CEPH_FILE_SYSTEM_NAME --subvol SUBVOLUME_NAME", "ceph fs snap-schedule retention add / 5h --fs cephfs --subvol sv_sched Retention added to path /volumes/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..", "ceph fs snap-schedule retention add / [ COUNT_TIME_PERIOD_PAIR ] TIME_PERIOD_COUNT --fs CEPH_FILE_SYSTEM_NAME --subvol SUBVOLUME_NAME --group NON_DEFAULT_SUBVOLGROUP_NAME", "ceph fs snap-schedule retention add / 5h --fs cephfs --subvol sv_sched --group subvolgroup_cg Retention added to path /volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a54j0dda7f16/..", "ceph fs snap-schedule list SUBVOLUME_VOLUME_PATH [--format=plain|json] [--recursive=true]", "ceph fs snap-schedule list / --recursive=true /volumes/_nogroup/subv1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. 4h", "ceph fs snap-schedule status SUBVOLUME_DIR_PATH [--format=plain|json]", "ceph fs snap-schedule status /volumes/_nogroup/subv1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. --format=json {\"fs\": \"cephfs\", \"subvol\": \"subvol_1\", \"path\": \"/volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/..\", \"rel_path\": \"/..\", \"schedule\": \"4h\", \"retention\": {\"h\": 14}, \"start\": \"2022-05-16T14:00:00\", \"created\": \"2023-03-20T08:47:18\", \"first\": null, \"last\": null, \"last_pruned\": null, \"created_count\": 0, \"pruned_count\": 0, \"active\": true}", "ceph fs snap-schedule status --fs CEPH_FILE_SYSTEM_NAME --subvol SUBVOLUME_NAME", "ceph fs snap-schedule status --fs cephfs --subvol sv_sched {\"fs\": \"cephfs\", \"subvol\": \"sv_sched\", \"group\": \"subvolgroup_cg\", \"path\": \"/volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..\", \"rel_path\": \"/volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..\", \"schedule\": \"1h\", \"retention\": {\"h\": 5}, \"start\": \"2024-05-21T00:00:00\", \"created\": \"2024-05-21T09:18:58\", \"first\": null, \"last\": null, \"last_pruned\": null, \"created_count\": 0, \"pruned_count\": 0, \"active\": true}", "ceph fs snap-schedule status --fs _CEPH_FILE_SYSTEM_NAME_ --subvol _SUBVOLUME_NAME_ --group _NON-DEFAULT_SUBVOLGROUP_NAME_", "ceph fs snap-schedule status --fs cephfs --subvol sv_sched --group subvolgroup_cg {\"fs\": \"cephfs\", \"subvol\": \"sv_sched\", \"group\": \"subvolgroup_cg\", \"path\": \"/volumes/subvolgroup_cg/sv_sched/e564329a-kj87-4763-gh0y-b56c8sev7t23/..\", \"rel_path\": \"/volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..\", \"schedule\": \"1h\", \"retention\": {\"h\": 5}, \"start\": \"2024-05-21T00:00:00\", \"created\": \"2024-05-21T09:18:58\", \"first\": null, \"last\": null, \"last_pruned\": null, \"created_count\": 0, \"pruned_count\": 0, \"active\": true}", "ceph fs snap-schedule activate FILE_SYSTEM_VOLUME_PATH [ REPEAT_INTERVAL ]", "ceph fs snap-schedule activate /cephfs", "ceph fs snap-schedule activate SUBVOL_DIR_PATH [ REPEAT_INTERVAL ]", "ceph fs snap-schedule activate /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/..", "ceph fs snap-schedule activate /.. REPEAT_INTERVAL --fs CEPH_FILE_SYSTEM_NAME --subvol SUBVOLUME_NAME", "ceph fs snap-schedule activate / --fs cephfs --subvol sv_sched Schedule activated for path /volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..", "ceph fs snap-schedule activate /.. 
[ REPEAT_INTERVAL ] --fs CEPH_FILE_SYSTEM_NAME --subvol SUBVOLUME_NAME --group NON-DEFAULT_GROUP_NAME", "ceph fs snap-schedule activate / --fs cephfs --subvol sv_sched --group subvolgroup_cg Schedule activated for path /volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..", "ceph fs snap-schedule deactivate FILE_SYSTEM_VOLUME_PATH [ REPEAT_INTERVAL ]", "ceph fs snap-schedule deactivate /cephfs 1d", "ceph fs snap-schedule deactivate SUBVOL_DIR_PATH [ REPEAT_INTERVAL ]", "ceph fs snap-schedule deactivate /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. 1d", "ceph fs snap-schedule deactivate / REPEAT_INTERVAL --fs CEPH_FILE_SYSTEM_NAME --subvol SUBVOLUME_NAME", "ceph fs snap-schedule deactivate / --fs cephfs --subvol sv_sched Schedule deactivated for path /volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..", "ceph fs snap-schedule deactivate / REPEAT_INTERVAL --fs CEPH_FILE_SYSTEM_NAME --subvol SUBVOLUME_NAME --group NON-DEFAULT_GROUP_NAME", "ceph fs snap-schedule deactivate / --fs cephfs --subvol sv_sched --group subvolgroup_cg Schedule deactivated for path /volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..", "ceph fs snap-schedule remove FILE_SYSTEM_VOLUME_PATH [ REPEAT_INTERVAL ] [ START_TIME ]", "ceph fs snap-schedule remove /cephfs 4h 2022-05-16T14:00:00", "ceph fs snap-schedule remove FILE_SYSTEM_VOLUME_PATH", "ceph fs snap-schedule remove /cephfs", "ceph fs snap-schedule remove SUBVOL_DIR_PATH [ REPEAT_INTERVAL ] [ START_TIME ]", "ceph fs snap-schedule remove /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. 4h 2022-05-16T14:00:00", "ceph fs snap-schedule remove / --fs CEPH_FILESYSTEM_NAME --subvol SUBVOLUME_NAME", "ceph fs snap-schedule remove / --fs cephfs --subvol sv_sched Schedule removed for path /volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..", "ceph fs snap-schedule remove / --fs CEPH_FILESYSTEM_NAME --subvol SUBVOLUME_NAME --group NON-DEFAULT_GROUP_NAME", "ceph fs snap-schedule remove / --fs cephfs --subvol sv_sched --group subvolgroup_cg Schedule removed for path /volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..", "ceph fs snap-schedule retention remove FILE_SYSTEM_VOLUME_PATH [ COUNT_TIME_PERIOD_PAIR ] TIME_PERIOD COUNT", "ceph fs snap-schedule retention remove /cephfs h 4 1 ceph fs snap-schedule retention remove /cephfs 14d4w 2", "ceph fs snap-schedule retention remove SUBVOL_DIR_PATH [ COUNT_TIME_PERIOD_PAIR ] TIME_PERIOD COUNT", "ceph fs snap-schedule retention remove /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. h 4 1 ceph fs snap-schedule retention remove /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. 
14d4w 2", "ceph fs snap-schedule retention remove / TIME_PERIOD_PAIR TIME_PERIOD COUNT --fs CEPH_FILESYSTEM_NAME --subvol SUBVOLUME_NAME", "ceph fs snap-schedule retention remove / 5h --fs cephfs --subvol sv_sched --group subvolgroup_cg Retention removed from path /volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..", "ceph fs snap-schedule retention remove / TIME_PERIOD_PAIR TIME_PERIOD COUNT --fs CEPH_FILESYSTEM_NAME --subvol SUBVOLUME_NAME --group NON-DEFAULT_GROUP_NAME", "ceph fs snap-schedule retention remove / 5h --fs cephfs --subvol sv_sched --group subvolgroup_cg Retention removed from path /volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..", "cephadm shell", "ceph orch apply cephfs-mirror [\" NODE_NAME \"]", "ceph orch apply cephfs-mirror \"node1.example.com\" Scheduled cephfs-mirror update", "ceph orch apply cephfs-mirror --placement=\" PLACEMENT_SPECIFICATION \"", "ceph orch apply cephfs-mirror --placement=\"3 host1 host2 host3\" Scheduled cephfs-mirror update", "Error EINVAL: name component must include only a-z, 0-9, and -", "ceph fs authorize FILE_SYSTEM_NAME CLIENT_NAME / rwps", "ceph fs authorize cephfs client.mirror_remote / rwps [client.mirror_remote] key = AQCjZ5Jg739AAxAAxduIKoTZbiFJ0lgose8luQ==", "ceph mgr module enable mirroring", "ceph fs snapshot mirror enable FILE_SYSTEM_NAME", "ceph fs snapshot mirror enable cephfs", "ceph fs snapshot mirror disable FILE_SYSTEM_NAME", "ceph fs snapshot mirror disable cephfs", "ceph mgr module enable mirroring", "ceph fs snapshot mirror peer_bootstrap create FILE_SYSTEM_NAME CLIENT_NAME SITE_NAME", "ceph fs snapshot mirror peer_bootstrap create cephfs client.mirror_remote remote-site {\"token\": \"eyJmc2lkIjogIjBkZjE3MjE3LWRmY2QtNDAzMC05MDc5LTM2Nzk4NTVkNDJlZiIsICJmaWxlc3lzdGVtIjogImJhY2t1cF9mcyIsICJ1c2VyIjogImNsaWVudC5taXJyb3JfcGVlcl9ib290c3RyYXAiLCAic2l0ZV9uYW1lIjogInNpdGUtcmVtb3RlIiwgImtleSI6ICJBUUFhcDBCZ0xtRmpOeEFBVnNyZXozai9YYUV0T2UrbUJEZlJDZz09IiwgIm1vbl9ob3N0IjogIlt2MjoxOTIuMTY4LjAuNTo0MDkxOCx2MToxOTIuMTY4LjAuNTo0MDkxOV0ifQ==\"}", "ceph fs snapshot mirror peer_bootstrap import FILE_SYSTEM_NAME TOKEN", "ceph fs snapshot mirror peer_bootstrap import cephfs eyJmc2lkIjogIjBkZjE3MjE3LWRmY2QtNDAzMC05MDc5LTM2Nzk4NTVkNDJlZiIsICJmaWxlc3lzdGVtIjogImJhY2t1cF9mcyIsICJ1c2VyIjogImNsaWVudC5taXJyb3JfcGVlcl9ib290c3RyYXAiLCAic2l0ZV9uYW1lIjogInNpdGUtcmVtb3RlIiwgImtleSI6ICJBUUFhcDBCZ0xtRmpOeEFBVnNyZXozai9YYUV0T2UrbUJEZlJDZz09IiwgIm1vbl9ob3N0IjogIlt2MjoxOTIuMTY4LjAuNTo0MDkxOCx2MToxOTIuMTY4LjAuNTo0MDkxOV0ifQ==", "ceph fs snapshot mirror peer_list FILE_SYSTEM_NAME", "ceph fs snapshot mirror peer_list cephfs {\"e5ecb883-097d-492d-b026-a585d1d7da79\": {\"client_name\": \"client.mirror_remote\", \"site_name\": \"remote-site\", \"fs_name\": \"cephfs\", \"mon_host\": \"[v2:10.0.211.54:3300/0,v1:10.0.211.54:6789/0] [v2:10.0.210.56:3300/0,v1:10.0.210.56:6789/0] [v2:10.0.210.65:3300/0,v1:10.0.210.65:6789/0]\"}}", "ceph fs snapshot mirror peer_remove FILE_SYSTEM_NAME PEER_UUID", "ceph fs snapshot mirror peer_remove cephfs e5ecb883-097d-492d-b026-a585d1d7da79", "ceph fs snapshot mirror add FILE_SYSTEM_NAME PATH", "ceph fs snapshot mirror add cephfs /volumes/_nogroup/subvol_1", "ceph fs snapshot mirror remove FILE_SYSTEM_NAME PATH", "ceph fs snapshot mirror remove cephfs /home/user1", "cephadm shell", "ceph fs snapshot mirror daemon status", "ceph fs snapshot mirror daemon status [ { \"daemon_id\": 15594, \"filesystems\": [ { \"filesystem_id\": 1, \"name\": \"cephfs\", \"directory_count\": 1, \"peers\": [ { \"uuid\": 
\"e5ecb883-097d-492d-b026-a585d1d7da79\", \"remote\": { \"client_name\": \"client.mirror_remote\", \"cluster_name\": \"remote-site\", \"fs_name\": \"cephfs\" }, \"stats\": { \"failure_count\": 1, \"recovery_count\": 0 } } ] } ] } ]", "ceph --admin-daemon PATH_TO_THE_ASOK_FILE help", "ceph --admin-daemon /var/run/ceph/1011435c-9e30-4db6-b720-5bf482006e0e/ceph-client.cephfs-mirror.node1.bndvox.asok help { \"fs mirror peer status cephfs@11 1011435c-9e30-4db6-b720-5bf482006e0e\": \"get peer mirror status\", \"fs mirror status cephfs@11\": \"get filesystem mirror status\", }", "ceph --admin-daemon PATH_TO_THE_ASOK_FILE fs mirror status FILE_SYSTEM_NAME @_FILE_SYSTEM_ID", "ceph --admin-daemon /var/run/ceph/1011435c-9e30-4db6-b720-5bf482006e0e/ceph-client.cephfs-mirror.node1.bndvox.asok fs mirror status cephfs@11 { \"rados_inst\": \"192.168.0.5:0/1476644347\", \"peers\": { \"1011435c-9e30-4db6-b720-5bf482006e0e\": { 1 \"remote\": { \"client_name\": \"client.mirror_remote\", \"cluster_name\": \"remote-site\", \"fs_name\": \"cephfs\" } } }, \"snap_dirs\": { \"dir_count\": 1 } }", "ceph --admin-daemon PATH_TO_ADMIN_SOCKET fs mirror status FILE_SYSTEM_NAME @ FILE_SYSTEM_ID PEER_UUID", "ceph --admin-daemon /var/run/ceph/cephfs-mirror.asok fs mirror peer status cephfs@11 1011435c-9e30-4db6-b720-5bf482006e0e { \"/home/user1\": { \"state\": \"idle\", 1 \"last_synced_snap\": { \"id\": 120, \"name\": \"snap1\", \"sync_duration\": 0.079997898999999997, \"sync_time_stamp\": \"274900.558797s\" }, \"snaps_synced\": 2, 2 \"snaps_deleted\": 0, 3 \"snaps_renamed\": 0 } }", "ceph fs snapshot mirror dirmap FILE_SYSTEM_NAME PATH", "ceph fs snapshot mirror dirmap cephfs /volumes/_nogroup/subvol_1 { \"instance_id\": \"25184\", 1 \"last_shuffled\": 1661162007.012663, \"state\": \"mapped\" }", "ceph fs snapshot mirror dirmap cephfs /volumes/_nogroup/subvol_1 { \"reason\": \"no mirror daemons running\", \"state\": \"stalled\" 1 }", "ceph --admin-daemon ASOK_FILE_NAME counter dump", "ceph --admin-daemon ceph-client.cephfs-mirror.ceph1-hk-n-0mfqao-node7.pnbrlu.2.93909288073464.asok counter dump [ { \"key\": \"cephfs_mirror\", \"value\": [ { \"labels\": {}, \"counters\": { \"mirrored_filesystems\": 1, \"mirror_enable_failures\": 0 } } ] }, { \"key\": \"cephfs_mirror_mirrored_filesystems\", \"value\": [ { \"labels\": { \"filesystem\": \"cephfs\" }, \"counters\": { \"mirroring_peers\": 1, \"directory_count\": 1 } } ] }, { \"key\": \"cephfs_mirror_peers\", \"value\": [ { \"labels\": { \"peer_cluster_filesystem\": \"cephfs\", \"peer_cluster_name\": \"remote_site\", \"source_filesystem\": \"cephfs\", \"source_fscid\": \"1\" }, \"counters\": { \"snaps_synced\": 1, \"snaps_deleted\": 0, \"snaps_renamed\": 0, \"sync_failures\": 0, \"avg_sync_time\": { \"avgcount\": 1, \"sum\": 4.216959457, \"avgtime\": 4.216959457 }, \"sync_bytes\": 132 } } ] } ]" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html-single/file_system_guide/index
Chapter 17. Kernel Process Tapset
Chapter 17. Kernel Process Tapset This family of probe points is used to probe process-related activities, such as process creation, exec, and exit events.
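As a quick, illustrative sketch (not part of this reference, and assuming the systemtap package and matching kernel debuginfo are installed), the kprocess.exit probe point can be exercised with a one-line script that reports every process exit using the standard pid() and execname() tapset functions:
stap -e 'probe kprocess.exit { printf("process %d (%s) exited\n", pid(), execname()) }'
Press Ctrl+C to stop the script.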
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/kprocess-dot-stp
Chapter 1. Introduction to Ceph block devices
Chapter 1. Introduction to Ceph block devices A block is a fixed-length sequence of bytes, for example, a 512-byte block of data. Many blocks combined into a single file can be used as a storage device that you can read from and write to. Block-based storage interfaces are the most common way to store data on rotating media such as: Hard drives CD/DVD discs Floppy disks Traditional 9-track tapes The ubiquity of block device interfaces makes a virtual block device an ideal candidate for interacting with a mass data storage system like Red Hat Ceph Storage. Ceph block devices are thin-provisioned, resizable, and store data striped over multiple Object Storage Devices (OSDs) in a Ceph storage cluster. Ceph block devices are also known as Reliable Autonomic Distributed Object Store (RADOS) Block Devices (RBDs). Ceph block devices leverage RADOS capabilities such as: Snapshots Replication Data consistency Ceph block devices interact with OSDs by using the librbd library. Ceph block devices deliver high performance with infinite scalability to Kernel Virtual Machines (KVMs), such as Quick Emulator (QEMU), and cloud-based computing systems, like OpenStack, that rely on the libvirt and QEMU utilities to integrate with Ceph block devices. You can use the same storage cluster to operate the Ceph Object Gateway and Ceph block devices simultaneously. Important Using Ceph block devices requires access to a running Ceph storage cluster. For details on installing a Red Hat Ceph Storage cluster, see the Red Hat Ceph Storage Installation Guide.
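The following shell sketch illustrates the basic block device workflow described above; it assumes a running cluster with an already initialized pool, and the pool and image names (rbdpool, image1) are placeholders:
# Create a thin-provisioned 1 GiB image; space is consumed only as data is written
rbd create rbdpool/image1 --size 1024
# Show the image details, including size and striping parameters
rbd info rbdpool/image1
# Grow the image to 2 GiB; Ceph block devices are resizable
rbd resize rbdpool/image1 --size 2048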
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/block_device_guide/introduction-to-ceph-block-devices_block
18.14. Advanced Access Control: Using Macro ACIs
18.14. Advanced Access Control: Using Macro ACIs Macro ACIs improve the flexibility of access control. For example, you can add a subtree and automatically get the same tailored access controls as for other subtrees without the need to add any ACI. As a side effect, the number of ACIs is smaller; however, macro ACI processing is more expensive than processing a regular ACI. Macros are placeholders that are used to represent a DN, or a portion of a DN, in an ACI. You can use a macro to represent a DN in the target portion of the ACI, in the bind rule portion, or both. In practice, when Directory Server gets an incoming LDAP operation, the ACI macros are matched against the resource targeted by the LDAP operation. If there is a match, the macro is replaced by the value of the DN of the targeted resource. Directory Server then evaluates the ACI normally. 18.14.1. Macro ACI Example Figure 18.1, "Example Directory Tree for Macro ACIs" shows a directory tree which uses macro ACIs to effectively reduce the overall number of ACIs. This illustration uses a repeating pattern of subdomains with the same tree structure (ou=groups, ou=people). This pattern is also repeated across the tree because the Example Corp. directory tree stores the suffixes dc=hostedCompany2,dc=example,dc=com and dc=hostedCompany3,dc=example,dc=com. The ACIs that apply in the directory tree also have a repeating pattern. For example, the following ACI is located on the dc=hostedCompany1,dc=example,dc=com node: This ACI grants read and search rights to the DomainAdmins group for any entry in the dc=hostedCompany1,dc=example,dc=com tree. Figure 18.1. Example Directory Tree for Macro ACIs The following ACI is located on the dc=hostedCompany1,dc=example,dc=com node: The following ACI is located on the dc=subdomain1,dc=hostedCompany1,dc=example,dc=com node: The following ACI is located on the dc=hostedCompany2,dc=example,dc=com node: The following ACI is located on the dc=subdomain1,dc=hostedCompany2,dc=example,dc=com node: In the four ACIs shown above, the only differentiator is the DN specified in the groupdn keyword. By using a macro for the DN, it is possible to replace these ACIs with a single ACI at the root of the tree, on the dc=example,dc=com node. This ACI reads as follows: The target keyword, which was not previously used, is utilized in the new ACI. In this example, the number of ACIs is reduced from four to one. The real benefit is a factor of how many repeating patterns you have down and across your directory tree. 18.14.2. Macro ACI Syntax Macro ACIs include the following types of expressions to replace a DN or part of a DN: ($dn) [$dn] ($attr.attrName), where attrName represents an attribute contained in the target entry In this section, the ACI keywords used to provide bind credentials, such as userdn, roledn, groupdn, and userattr, are collectively called the subject, as opposed to the target, of the ACI. Macro ACIs can be used in the target part or the subject part of an ACI. Table 18.5, "Macros in ACI Keywords" shows in what parts of the ACI you can use DN macros: Table 18.5. Macros in ACI Keywords Macro ACI Keyword ($dn) target, targetfilter, userdn, roledn, groupdn, userattr [$dn] targetfilter, userdn, roledn, groupdn, userattr ($attr.attrName) userdn, roledn, groupdn, userattr The following restrictions apply: If you use ($dn) in targetfilter, userdn, roledn, groupdn, userattr, you must define a target that contains ($dn).
If you use [$dn] in targetfilter, userdn, roledn, groupdn, userattr, you must define a target that contains ($dn). Note When using any macro, you always need a target definition that contains the ($dn) macro. You can combine the ($dn) macro and the ($attr.attrName) macro. 18.14.2.1. Macro Matching for ($dn) The ($dn) macro is replaced by the matching part of the resource targeted in an LDAP request. For example, you have an LDAP request targeted at the cn=all,ou=groups,dc=subdomain1,dc=hostedCompany1,dc=example,dc=com entry and an ACI that defines the target as follows: The ($dn) macro matches dc=subdomain1,dc=hostedCompany1. When the subject of the ACI also uses ($dn), the substring that matches the target is used to expand the subject. For example: In this case, if the string matching ($dn) in the target is dc=subdomain1,dc=hostedCompany1, then the same string is used in the subject. The ACI is then expanded as follows: Once the macro has been expanded, Directory Server evaluates the ACI following the normal process to determine whether access is granted. 18.14.2.2. Macro Matching for [$dn] The matching mechanism for [$dn] is slightly different from that for ($dn). The DN of the targeted resource is examined several times, each time dropping the left-most RDN component, until a match is found. For example, you have an LDAP request targeted at the cn=all,ou=groups,dc=subdomain1,dc=hostedCompany1,dc=example,dc=com subtree and the following ACI: The steps for expanding this ACI are as follows: ($dn) in the target matches dc=subdomain1,dc=hostedCompany1. [$dn] in the subject is replaced with dc=subdomain1,dc=hostedCompany1. The result is groupdn="ldap:///cn=DomainAdmins,ou=Groups,dc=subdomain1,dc=hostedCompany1,dc=example,dc=com". If the bind DN is a member of that group, the matching process stops, and the ACI is evaluated. If it does not match, the process continues. [$dn] in the subject is replaced with dc=hostedCompany1. The result is groupdn="ldap:///cn=DomainAdmins,ou=Groups,dc=hostedCompany1,dc=example,dc=com". In this case, if the bind DN is not a member of that group, the ACI is not evaluated. If it is a member, the ACI is evaluated. The advantage of the [$dn] macro is that it provides a flexible way of granting domain-level administrators access to all the subdomains in the directory tree. Therefore, it is useful for expressing a hierarchical relationship between domains. For example, consider the following ACI: It grants access to the members of cn=DomainAdmins,ou=Groups,dc=hostedCompany1,dc=example,dc=com to all of the subdomains under dc=hostedCompany1, so an administrator belonging to that group could access a subtree like ou=people,dc=subdomain1.1,dc=subdomain1. However, at the same time, members of cn=DomainAdmins,ou=Groups,dc=subdomain1.1 would be denied access to the ou=people,dc=hostedCompany1 and ou=people,dc=subdomain1,dc=hostedCompany1 nodes. 18.14.2.3. Macro Matching for ($attr.attrName) The ($attr.attrName) macro is always used in the subject part of an ACI. For example, define the following roledn: Now, assume the server receives an LDAP operation targeted at the following entry: In order to evaluate the roledn part of the ACI, the server looks at the ou attribute stored in the targeted entry and uses the value of this attribute to expand the macro. Therefore, in the example, the roledn is expanded as follows: The Directory Server then evaluates the ACI according to the normal ACI evaluation algorithm.
When an attribute is multi-valued, each value is used to expand the macro, and the first one that provides a successful match is used. For example: In this case, when the Directory Server evaluates the ACI, it performs a logical OR on the following expanded expressions:
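For reference, a macro ACI is stored in the aci attribute of the entry it protects, like any other ACI. The following sketch (hypothetical host name and bind credentials) adds the single macro ACI discussed in this example to the dc=example,dc=com suffix entry with ldapmodify; the quoted 'EOF' delimiter keeps the shell from trying to expand ($dn) and [$dn]:
ldapmodify -H ldap://server.example.com -D "cn=Directory Manager" -W -x << 'EOF'
dn: dc=example,dc=com
changetype: modify
add: aci
aci: (target="ldap:///ou=Groups,($dn),dc=example,dc=com")(targetattr="*")(targetfilter=(objectClass=nsManagedDomain))(version 3.0; acl "Domain access"; allow (read,search) groupdn="ldap:///cn=DomainAdmins,ou=Groups,[$dn],dc=example,dc=com";)
EOF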
[ "aci: (targetattr=\"*\")(targetfilter=(objectClass=nsManagedDomain)) (version 3.0; acl \"Domain access\"; allow (read,search) groupdn=\"ldap:///cn=DomainAdmins,ou=Groups,dc=hostedCompany1,dc=example,dc=com\";)", "aci: (targetattr=\"*\")(targetfilter=(objectClass=nsManagedDomain)) (version 3.0; acl \"Domain access\"; allow (read,search) groupdn=\"ldap:///cn=DomainAdmins,ou=Groups,dc=hostedCompany1,dc=example,dc=com\";)", "aci: (targetattr=\"*\")(targetfilter=(objectClass=nsManagedDomain)) (version 3.0; acl \"Domain access\"; allow (read,search) groupdn=\"ldap:///cn=DomainAdmins,ou=Groups,dc=subdomain1,dc=hostedCompany1,dc=example,dc=com\";)", "aci: (targetattr=\"*\")(targetfilter=(objectClass=nsManagedDomain)) (version 3.0; acl \"Domain access\"; allow (read,search) groupdn=\"ldap:///cn=DomainAdmins,ou=Groups,dc=hostedCompany2,dc=example,dc=com\";)", "aci: (targetattr=\"*\")(targetfilter=(objectClass=nsManagedDomain)) (version 3.0; acl \"Domain access\"; allow (read,search) groupdn=\"ldap:///cn=DomainAdmins,ou=Groups,dc=subdomain1,dc=hostedCompany2,dc=example,dc=com\";)", "aci: (target=\"ldap:///ou=Groups,(USDdn),dc=example,dc=com\") (targetattr=\"*\")(targetfilter=(objectClass=nsManagedDomain)) (version 3.0; acl \"Domain access\"; allow (read,search) groupdn=\"ldap:///cn=DomainAdmins,ou=Groups,[USDdn],dc=example,dc=com\";)", "(target=\"ldap:///ou=Groups,(USDdn),dc=example,dc=com\")", "aci: (target=\"ldap:///ou=*,(USDdn),dc=example,dc=com\") (targetattr = \"*\") (version 3.0; acl \"Domain access\"; allow (read,search) groupdn=\"ldap:///cn=DomainAdmins,ou=Groups,(USDdn),dc=example,dc=com\";)", "aci: (target=\"ldap:///ou=Groups,dc=subdomain1,dc=hostedCompany1, dc=example,dc=com\") (targetattr = \"*\") (version 3.0; acl \"Domain access\"; allow (read,search) groupdn=\"ldap:///cn=DomainAdmins,ou=Groups, dc=subdomain1,dc=hostedCompany1,dc=example,dc=com\";)", "aci: (target=\"ldap:///ou=Groups,(USDdn),dc=example,dc=com\") (targetattr = \"*\") (version 3.0; acl \"Domain access\"; allow (read,search) groupdn=\"ldap:///cn=DomainAdmins,ou=Groups,[USDdn],dc=example,dc=com\";)", "aci: (target=\"ldap:///ou=*, (USDdn),dc=example,dc=com\") (targetattr=\"*\")(targetfilter=(objectClass=nsManagedDomain)) (version 3.0; acl \"Domain access\"; allow (read,search) groupdn=\"ldap:///cn=DomainAdmins,ou=Groups,[USDdn],dc=example,dc=com\";)", "roledn = \"ldap:///cn=DomainAdmins,(USDattr.ou)\"", "dn: cn=Jane Doe,ou=People,dc=HostedCompany1,dc=example,dc=com cn: Jane Doe sn: Doe ou: Engineering,dc=HostedCompany1,dc=example,dc=com", "roledn = \"ldap:///cn=DomainAdmins,ou=Engineering,dc=HostedCompany1,dc=example,dc=com\"", "dn: cn=Jane Doe,ou=People,dc=HostedCompany1,dc=example,dc=com cn: Jane Doe sn: Doe ou: Engineering,dc=HostedCompany1,dc=example,dc=com ou: People,dc=HostedCompany1,dc=example,dc=com", "roledn = \"ldap:///cn=DomainAdmins,ou=Engineering,dc=HostedCompany1,dc=example,dc=com\" roledn = \"ldap:///cn=DomainAdmins,ou=People,dc=HostedCompany1,dc=example,dc=com\"" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/Managing_Access_Control-Advanced_Access_Control_Using_Macro_ACIs
Chapter 4. Installing a cluster on vSphere using the Assisted Installer
Chapter 4. Installing a cluster on vSphere using the Assisted Installer You can install OpenShift Container Platform on on-premise hardware or on-premise VMs by using the Assisted Installer. Installing OpenShift Container Platform by using the Assisted Installer supports x86_64 , AArch64 , ppc64le , and s390x CPU architectures. The Assisted Installer is a user-friendly installation solution offered on the Red Hat Hybrid Cloud Console. 4.1. Additional resources Installing OpenShift Container Platform with the Assisted Installer
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_vmware_vsphere/installing-vsphere-assisted-installer
18.10. Keyboard Configuration
18.10. Keyboard Configuration To add multiple keyboard layouts to your system, select Keyboard from the Installation Summary screen. Upon saving, the keyboard layouts are immediately available in the installation program and you can switch between them by using the keyboard icon located at all times in the upper right corner of the screen. Initially, only the language you selected in the welcome screen is listed as the keyboard layout in the left pane. You can either replace the initial layout or add more layouts. However, if your language does not use ASCII characters, you might need to add a keyboard layout that does, to be able to properly set a password for an encrypted disk partition or the root user, among other things. Figure 18.6. Keyboard Configuration To add an additional layout, click the + button, select it from the list, and click Add. To delete a layout, select it and click the - button. Use the arrow buttons to arrange the layouts in order of preference. For a visual preview of the keyboard layout, select it and click the keyboard button. To test a layout, use the mouse to click inside the text box on the right. Type some text to confirm that your selection functions correctly. To test additional layouts, you can click the language selector at the top of the screen to switch them. However, it is recommended to set up a keyboard combination for switching layouts. Click the Options button at the right to open the Layout Switching Options dialog and choose a combination from the list by selecting its check box. The combination will then be displayed above the Options button. This combination applies both during the installation and on the installed system, so you must configure a combination here in order to use one after installation. You can also select more than one combination to switch between layouts. Important If you use a layout that cannot accept Latin characters, such as Russian, Red Hat recommends additionally adding the English (United States) layout and configuring a keyboard combination to switch between the two layouts. If you only select a layout without Latin characters, you might be unable to enter a valid root password and user credentials later in the installation process. This can prevent you from completing the installation. Once you have made your selection, click Done to return to the Installation Summary screen. Note To change your keyboard configuration after you have completed the installation, visit the Keyboard section of the Settings dialog window.
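On the installed system, the keymap can also be inspected and changed from the command line with localectl; this is a generic post-installation alternative, not part of the graphical procedure described above, and the us keymap is only an example:
localectl status
localectl list-keymaps | grep '^us'
localectl set-keymap us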
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/sect-keyboard-configuration-s390
A.4. Investigating Smart Card Authentication Failures
A.4. Investigating Smart Card Authentication Failures Open the /etc/sssd/sssd.conf file, and set the debug_level option to 2 . Review the sssd_pam.log and sssd_EXAMPLE.COM.log files. If you see a timeout error message in the files, see Section B.4.4, "Smart Card Authentication Fails with Timeout Error Messages" .
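Note The debug_level option is set per section of /etc/sssd/sssd.conf; to raise the verbosity of the two log files mentioned above, add it to the [pam] section and to the domain section. The following is a minimal sketch, assuming the domain section is named EXAMPLE.COM (replace it with your own domain section name and keep the rest of your existing configuration unchanged):

# Only the debug_level lines are relevant here; EXAMPLE.COM is a placeholder.
[domain/EXAMPLE.COM]
debug_level = 2

[pam]
debug_level = 2

After editing the file, restart the sssd service, for example with systemctl restart sssd , so that the new debug level takes effect.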
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/trouble-gen-sc
Managing hosts
Managing hosts Red Hat Satellite 6.16 Register hosts to Satellite, configure host groups and collections, set up remote execution, manage packages on hosts, monitor hosts, and more Red Hat Satellite Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/managing_hosts/index
Chapter 16. KIE Server capabilities and extensions
Chapter 16. KIE Server capabilities and extensions The capabilities in KIE Server are determined by plug-in extensions that you can enable, disable, or further extend to meet your business needs. KIE Server supports the following default capabilities and extensions: Table 16.1. KIE Server capabilities and extensions Capability name Extension name Description KieServer KieServer Provides the core capabilities of KIE Server, such as creating and disposing KIE containers on your server instance BRM Drools Provides the Business Rule Management (BRM) capabilities, such as inserting facts and executing business rules BRP OptaPlanner Provides the Business Resource Planning (BRP) capabilities, such as implementing solvers DMN DMN Provides the Decision Model and Notation (DMN) capabilities, such as managing DMN data types and executing DMN models Swagger Swagger Provides the Swagger web-interface capabilities for interacting with the KIE Server REST API To view the supported extensions of a running KIE Server instance, send a GET request to the following REST API endpoint and review the XML or JSON server response: Base URL for GET request for KIE Server information Example JSON response with KIE Server information { "type": "SUCCESS", "msg": "Kie Server info", "result": { "kie-server-info": { "id": "test-kie-server", "version": "7.67.0.20190818-050814", "name": "test-kie-server", "location": "http://localhost:8080/kie-server/services/rest/server", "capabilities": [ "KieServer", "BRM", "BRP", "DMN", "Swagger" ], "messages": [ { "severity": "INFO", "timestamp": { "java.util.Date": 1566169865791 }, "content": [ "Server KieServerInfo{serverId='test-kie-server', version='7.67.0.20190818-050814', name='test-kie-server', location='http:/localhost:8080/kie-server/services/rest/server', capabilities=[KieServer, BRM, BRP, DMN, Swagger]', messages=null', mode=DEVELOPMENT}started successfully at Sun Aug 18 23:11:05 UTC 2019" ] } ], "mode": "DEVELOPMENT" } } } To enable or disable KIE Server extensions, configure the related *.server.ext.disabled KIE Server system property. For example, to disable the BRM capability, set the system property org.drools.server.ext.disabled=true . For all KIE Server system properties, see Chapter 15, KIE Server system properties . By default, KIE Server extensions are exposed through REST or JMS data transports and use predefined client APIs. You can extend existing KIE Server capabilities with additional REST endpoints, extend supported transport methods beyond REST or JMS, or extend functionality in the KIE Server client. This flexibility in KIE Server functionality enables you to adapt your KIE Server instances to your business needs, instead of adapting your business needs to the default KIE Server capabilities. Important If you extend KIE Server functionality, Red Hat does not support the custom code that you use as part of your custom implementations and extensions. 16.1. Extending an existing KIE Server capability with a custom REST API endpoint The KIE Server REST API enables you to interact with your KIE containers and business assets (such as business rules, processes, and solvers) in Red Hat Decision Manager without using the Business Central user interface. The available REST endpoints are determined by the capabilities enabled in your KIE Server system properties (for example, org.drools.server.ext.disabled=false for the BRM capability). 
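Because the endpoints you can call depend on which capabilities are enabled, it can be useful to confirm the enabled capabilities before extending them. The following is a minimal sketch that performs the same check as the GET request shown earlier, but through the KIE Server Java client API; the server URL and the credentials are placeholders and must be adjusted to your environment.

import java.util.List;

import org.kie.server.api.marshalling.MarshallingFormat;
import org.kie.server.api.model.KieServerInfo;
import org.kie.server.api.model.ServiceResponse;
import org.kie.server.client.KieServicesClient;
import org.kie.server.client.KieServicesConfiguration;
import org.kie.server.client.KieServicesFactory;

public class CapabilityCheck {

    public static void main(String[] args) {
        // Placeholder URL and credentials -- adjust to your environment.
        KieServicesConfiguration configuration = KieServicesFactory.newRestConfiguration(
                "http://localhost:8080/kie-server/services/rest/server", "kieserver", "kieserver1!");
        configuration.setMarshallingFormat(MarshallingFormat.JSON);

        KieServicesClient client = KieServicesFactory.newKieServicesClient(configuration);

        // getServerInfo() returns the same server information as the GET request above.
        ServiceResponse<KieServerInfo> response = client.getServerInfo();
        List<String> capabilities = response.getResult().getCapabilities();
        System.out.println("Enabled capabilities: " + capabilities);
    }
}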
You can extend an existing KIE Server capability with a custom REST API endpoint to further adapt the KIE Server REST API to your business needs. As an example, this procedure extends the Drools KIE Server extension (for the BRM capability) with the following custom REST API endpoint: Example custom REST API endpoint This example custom endpoint accepts a list of facts to be inserted into the working memory of the decision engine, automatically executes all rules, and retrieves all objects from the KIE session in the specified KIE container. Procedure Create an empty Maven project and define the following packaging type and dependencies in the pom.xml file for the project: Example pom.xml file in the sample project <packaging>jar</packaging> <properties> <version.org.kie>7.67.0.Final-redhat-00024</version.org.kie> </properties> <dependencies> <dependency> <groupId>org.kie</groupId> <artifactId>kie-api</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie</groupId> <artifactId>kie-internal</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-api</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-services-common</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-services-drools</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-rest-common</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.drools</groupId> <artifactId>drools-core</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.drools</groupId> <artifactId>drools-compiler</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.slf4j</groupId> <artifactId>slf4j-api</artifactId> <version>1.7.25</version> </dependency> </dependencies> Implement the org.kie.server.services.api.KieServerApplicationComponentsService interface in a Java class in your project, as shown in the following example: Sample implementation of the KieServerApplicationComponentsService interface public class CusomtDroolsKieServerApplicationComponentsService implements KieServerApplicationComponentsService { 1 private static final String OWNER_EXTENSION = "Drools"; 2 public Collection<Object> getAppComponents(String extension, SupportedTransports type, Object... services) { 3 // Do not accept calls from extensions other than the owner extension: if ( !OWNER_EXTENSION.equals(extension) ) { return Collections.emptyList(); } RulesExecutionService rulesExecutionService = null; 4 KieServerRegistry context = null; for( Object object : services ) { if( RulesExecutionService.class.isAssignableFrom(object.getClass()) ) { rulesExecutionService = (RulesExecutionService) object; continue; } else if( KieServerRegistry.class.isAssignableFrom(object.getClass()) ) { context = (KieServerRegistry) object; continue; } } List<Object> components = new ArrayList<Object>(1); if( SupportedTransports.REST.equals(type) ) { components.add(new CustomResource(rulesExecutionService, context)); 5 } return components; } } 1 Delivers REST endpoints to the KIE Server infrastructure that is deployed when the application starts. 
2 Specifies the extension that you are extending, such as the Drools extension in this example. 3 Returns all resources that the REST container must deploy. Each extension that is enabled in your KIE Server instance calls the getAppComponents method, so the if ( !OWNER_EXTENSION.equals(extension) ) call returns an empty collection for any extensions other than the specified OWNER_EXTENSION extension. 4 Lists the services from the specified extension that you want to use, such as the RulesExecutionService and KieServerRegistry services from the Drools extension in this example. 5 Specifies the transport type for the extension, either REST or JMS ( REST in this example), and the CustomResource class that returns the resource as part of the components list. Implement the CustomResource class that KIE Server can use to provide the additional functionality for the new REST resource, as shown in the following example: Sample implementation of the CustomResource class // Custom base endpoint: @Path("server/containers/instances/{containerId}/ksession") public class CustomResource { private static final Logger logger = LoggerFactory.getLogger(CustomResource.class); private KieCommands commandsFactory = KieServices.Factory.get().getCommands(); private RulesExecutionService rulesExecutionService; private KieServerRegistry registry; public CustomResource() { } public CustomResource(RulesExecutionService rulesExecutionService, KieServerRegistry registry) { this.rulesExecutionService = rulesExecutionService; this.registry = registry; } // Supported HTTP method, path parameters, and data formats: @POST @Path("/{ksessionId}") @Consumes({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON}) @Produces({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON}) public Response insertFireReturn(@Context HttpHeaders headers, @PathParam("containerId") String id, @PathParam("ksessionId") String ksessionId, String cmdPayload) { Variant v = getVariant(headers); String contentType = getContentType(headers); // Marshalling behavior and supported actions: MarshallingFormat format = MarshallingFormat.fromType(contentType); if (format == null) { format = MarshallingFormat.valueOf(contentType); } try { KieContainerInstance kci = registry.getContainer(id); Marshaller marshaller = kci.getMarshaller(format); List<?> listOfFacts = marshaller.unmarshall(cmdPayload, List.class); List<Command<?>> commands = new ArrayList<Command<?>>(); BatchExecutionCommand executionCommand = commandsFactory.newBatchExecution(commands, ksessionId); for (Object fact : listOfFacts) { commands.add(commandsFactory.newInsert(fact, fact.toString())); } commands.add(commandsFactory.newFireAllRules()); commands.add(commandsFactory.newGetObjects()); ExecutionResults results = rulesExecutionService.call(kci, executionCommand); String result = marshaller.marshall(results); logger.debug("Returning OK response with content '{}'", result); return createResponse(result, v, Response.Status.OK); } catch (Exception e) { // If marshalling fails, return the `call-container` response to maintain backward compatibility: String response = "Execution failed with error : " + e.getMessage(); logger.debug("Returning Failure response with content '{}'", response); return createResponse(response, v, Response.Status.INTERNAL_SERVER_ERROR); } } } In this example, the CustomResource class for the custom endpoint specifies the following data and behavior: Uses the base endpoint server/containers/instances/{containerId}/ksession Uses POST HTTP method Expects the following data 
to be given in REST requests: The containerId as a path argument The ksessionId as a path argument List of facts as a message payload Supports all KIE Server data formats: XML (JAXB, XStream) JSON Unmarshals the payload into a List<?> collection and, for each item in the list, creates an InsertCommand instance followed by FireAllRules and GetObject commands. Adds all commands to the BatchExecutionCommand instance that calls to the decision engine. To make the new endpoint discoverable for KIE Server, create a META-INF/services/org.kie.server.services.api.KieServerApplicationComponentsService file in your Maven project and add the fully qualified class name of the KieServerApplicationComponentsService implementation class within the file. For this example, the file contains the single line org.kie.server.ext.drools.rest.CusomtDroolsKieServerApplicationComponentsService . Build your project and copy the resulting JAR file into the ~/kie-server.war/WEB-INF/lib directory of your project. For example, on Red Hat JBoss EAP, the path to this directory is EAP_HOME /standalone/deployments/kie-server.war/WEB-INF/lib . Start KIE Server and deploy the built project to the running KIE Server. You can deploy the project using either the Business Central interface or the KIE Server REST API (a PUT request to http://SERVER:PORT/kie-server/services/rest/server/containers/{containerId} ). After your project is deployed on a running KIE Server, you can start interacting with your new REST endpoint. For this example, you can use the following information to invoke the new endpoint: Example request URL: http://localhost:8080/kie-server/services/rest/server/containers/instances/demo/ksession/defaultKieSession HTTP method: POST HTTP headers: Content-Type: application/json Accept: application/json Example message payload: [ { "org.jbpm.test.Person": { "name": "john", "age": 25 } }, { "org.jbpm.test.Person": { "name": "mary", "age": 22 } } ] Example server response: 200 (success) Example server log output: 16.2. Extending KIE Server to use a custom data transport By default, KIE Server extensions are exposed through REST or JMS data transports. You can extend KIE Server to support a custom data transport to adapt KIE Server transport protocols to your business needs. As an example, this procedure adds a custom data transport to KIE Server that uses the Drools extension and that is based on Apache MINA, an open-source Java network-application framework. The example custom MINA transport exchanges string-based data that relies on existing marshalling operations and supports only JSON format. 
Procedure Create an empty Maven project and define the following packaging type and dependencies in the pom.xml file for the project: Example pom.xml file in the sample project <packaging>jar</packaging> <properties> <version.org.kie>7.67.0.Final-redhat-00024</version.org.kie> </properties> <dependencies> <dependency> <groupId>org.kie</groupId> <artifactId>kie-api</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie</groupId> <artifactId>kie-internal</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-api</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-services-common</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-services-drools</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.drools</groupId> <artifactId>drools-core</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.drools</groupId> <artifactId>drools-compiler</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.slf4j</groupId> <artifactId>slf4j-api</artifactId> <version>1.7.25</version> </dependency> <dependency> <groupId>org.apache.mina</groupId> <artifactId>mina-core</artifactId> <version>2.1.3</version> </dependency> </dependencies> Implement the org.kie.server.services.api.KieServerExtension interface in a Java class in your project, as shown in the following example: Sample implementation of the KieServerExtension interface public class MinaDroolsKieServerExtension implements KieServerExtension { private static final Logger logger = LoggerFactory.getLogger(MinaDroolsKieServerExtension.class); public static final String EXTENSION_NAME = "Drools-Mina"; private static final Boolean disabled = Boolean.parseBoolean(System.getProperty("org.kie.server.drools-mina.ext.disabled", "false")); private static final String MINA_HOST = System.getProperty("org.kie.server.drools-mina.ext.port", "localhost"); private static final int MINA_PORT = Integer.parseInt(System.getProperty("org.kie.server.drools-mina.ext.port", "9123")); // Taken from dependency on the `Drools` extension: private KieContainerCommandService batchCommandService; // Specific to MINA: private IoAcceptor acceptor; public boolean isActive() { return disabled == false; } public void init(KieServerImpl kieServer, KieServerRegistry registry) { KieServerExtension droolsExtension = registry.getServerExtension("Drools"); if (droolsExtension == null) { logger.warn("No Drools extension available, quitting..."); return; } List<Object> droolsServices = droolsExtension.getServices(); for( Object object : droolsServices ) { // If the given service is null (not configured), continue to the service: if (object == null) { continue; } if( KieContainerCommandService.class.isAssignableFrom(object.getClass()) ) { batchCommandService = (KieContainerCommandService) object; continue; } } if (batchCommandService != null) { acceptor = new NioSocketAcceptor(); acceptor.getFilterChain().addLast( "codec", new ProtocolCodecFilter( new TextLineCodecFactory( Charset.forName( "UTF-8" )))); acceptor.setHandler( new TextBasedIoHandlerAdapter(batchCommandService) ); acceptor.getSessionConfig().setReadBufferSize( 2048 ); acceptor.getSessionConfig().setIdleTime( IdleStatus.BOTH_IDLE, 10 ); 
try { acceptor.bind( new InetSocketAddress(MINA_HOST, MINA_PORT) ); logger.info("{} -- Mina server started at {} and port {}", toString(), MINA_HOST, MINA_PORT); } catch (IOException e) { logger.error("Unable to start Mina acceptor due to {}", e.getMessage(), e); } } } public void destroy(KieServerImpl kieServer, KieServerRegistry registry) { if (acceptor != null) { acceptor.dispose(); acceptor = null; } logger.info("{} -- Mina server stopped", toString()); } public void createContainer(String id, KieContainerInstance kieContainerInstance, Map<String, Object> parameters) { // Empty, already handled by the `Drools` extension } public void disposeContainer(String id, KieContainerInstance kieContainerInstance, Map<String, Object> parameters) { // Empty, already handled by the `Drools` extension } public List<Object> getAppComponents(SupportedTransports type) { // Nothing for supported transports (REST or JMS) return Collections.emptyList(); } public <T> T getAppComponents(Class<T> serviceType) { return null; } public String getImplementedCapability() { return "BRM-Mina"; } public List<Object> getServices() { return Collections.emptyList(); } public String getExtensionName() { return EXTENSION_NAME; } public Integer getStartOrder() { return 20; } @Override public String toString() { return EXTENSION_NAME + " KIE Server extension"; } } The KieServerExtension interface is the main extension interface that KIE Server can use to provide the additional functionality for the new MINA transport. The interface consists of the following components: Overview of the KieServerExtension interface public interface KieServerExtension { boolean isActive(); void init(KieServerImpl kieServer, KieServerRegistry registry); void destroy(KieServerImpl kieServer, KieServerRegistry registry); void createContainer(String id, KieContainerInstance kieContainerInstance, Map<String, Object> parameters); void disposeContainer(String id, KieContainerInstance kieContainerInstance, Map<String, Object> parameters); List<Object> getAppComponents(SupportedTransports type); <T> T getAppComponents(Class<T> serviceType); String getImplementedCapability(); 1 List<Object> getServices(); String getExtensionName(); 2 Integer getStartOrder(); 3 } 1 Specifies the capability that is covered by this extension. The capability must be unique within KIE Server. 2 Defines a human-readable name for the extension. 3 Determines when the specified extension should be started. For extensions that have dependencies on other extensions, this setting must not conflict with the parent setting. For example, in this case, this custom extension depends on the Drools extension, which has StartOrder set to 0 , so this custom add-on extension must be greater than 0 (set to 20 in the sample implementation). In the MinaDroolsKieServerExtension sample implementation of this interface, the init method is the main element for collecting services from the Drools extension and for bootstrapping the MINA server. All other methods in the KieServerExtension interface can remain with the standard implementation to fulfill interface requirements. The TextBasedIoHandlerAdapter class is the handler on the MINA server that reacts to incoming requests. 
Implement the TextBasedIoHandlerAdapter handler for the MINA server, as shown in the following example: Sample implementation of the TextBasedIoHandlerAdapter handler public class TextBasedIoHandlerAdapter extends IoHandlerAdapter { private static final Logger logger = LoggerFactory.getLogger(TextBasedIoHandlerAdapter.class); private KieContainerCommandService batchCommandService; public TextBasedIoHandlerAdapter(KieContainerCommandService batchCommandService) { this.batchCommandService = batchCommandService; } @Override public void messageReceived( IoSession session, Object message ) throws Exception { String completeMessage = message.toString(); logger.debug("Received message '{}'", completeMessage); if( completeMessage.trim().equalsIgnoreCase("quit") || completeMessage.trim().equalsIgnoreCase("exit") ) { session.close(false); return; } String[] elements = completeMessage.split("\\|"); logger.debug("Container id {}", elements[0]); try { ServiceResponse<String> result = batchCommandService.callContainer(elements[0], elements[1], MarshallingFormat.JSON, null); if (result.getType().equals(ServiceResponse.ResponseType.SUCCESS)) { session.write(result.getResult()); logger.debug("Successful message written with content '{}'", result.getResult()); } else { session.write(result.getMsg()); logger.debug("Failure message written with content '{}'", result.getMsg()); } } catch (Exception e) { } } } In this example, the handler class receives text messages and executes them in the Drools service. Consider the following handler requirements and behavior when you use the TextBasedIoHandlerAdapter handler implementation: Anything that you submit to the handler must be a single line because each incoming transport request is a single line. You must pass a KIE container ID in this single line so that the handler expects the format containerID|payload . You can set a response in the way that it is produced by the marshaller. The response can be multiple lines. The handler supports a stream mode that enables you to send commands without disconnecting from a KIE Server session. To end a KIE Server session in stream mode, send either an exit or quit command to the server. To make the new data transport discoverable for KIE Server, create a META-INF/services/org.kie.server.services.api.KieServerExtension file in your Maven project and add the fully qualified class name of the KieServerExtension implementation class within the file. For this example, the file contains the single line org.kie.server.ext.mina.MinaDroolsKieServerExtension . Build your project and copy the resulting JAR file and the mina-core-2.0.9.jar file (which the extension depends on in this example) into the ~/kie-server.war/WEB-INF/lib directory of your project. For example, on Red Hat JBoss EAP, the path to this directory is EAP_HOME /standalone/deployments/kie-server.war/WEB-INF/lib . Start the KIE Server and deploy the built project to the running KIE Server. You can deploy the project using either the Business Central interface or the KIE Server REST API (a PUT request to http://SERVER:PORT/kie-server/services/rest/server/containers/{containerId} ). 
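As an alternative illustration of that deployment step, a KIE container can also be created programmatically with the generic KIE Server Java client instead of the PUT request. This is a sketch only, not part of the documented procedure: the container ID demo and the GAV coordinates are placeholders that must match a kJAR that is actually available to your KIE Server, and the URL and credentials must be adjusted to your environment.

import org.kie.server.api.marshalling.MarshallingFormat;
import org.kie.server.api.model.KieContainerResource;
import org.kie.server.api.model.ReleaseId;
import org.kie.server.api.model.ServiceResponse;
import org.kie.server.client.KieServicesClient;
import org.kie.server.client.KieServicesConfiguration;
import org.kie.server.client.KieServicesFactory;

public class DeployContainer {

    public static void main(String[] args) {
        KieServicesConfiguration configuration = KieServicesFactory.newRestConfiguration(
                "http://localhost:8080/kie-server/services/rest/server", "kieserver", "kieserver1!");
        configuration.setMarshallingFormat(MarshallingFormat.JSON);
        KieServicesClient client = KieServicesFactory.newKieServicesClient(configuration);

        // Placeholder GAV -- use the coordinates of your own kJAR.
        ReleaseId releaseId = new ReleaseId("org.example", "sample-kjar", "1.0.0");
        ServiceResponse<KieContainerResource> response =
                client.createContainer("demo", new KieContainerResource("demo", releaseId));

        if (response.getType().equals(ServiceResponse.ResponseType.SUCCESS)) {
            System.out.println("Container deployed: " + response.getResult().getContainerId());
        } else {
            System.out.println("Deployment failed: " + response.getMsg());
        }
    }
}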
After your project is deployed on a running KIE Server, you can view the status of the new data transport in your KIE Server log and start using your new data transport: New data transport in the server log For this example, you can use Telnet to interact with the new MINA-based data transport in KIE Server: Starting Telnet and connecting to KIE Server on port 9123 in a command terminal telnet 127.0.0.1 9123 Example interactions with KIE Server in a command terminal Trying 127.0.0.1... Connected to localhost. Escape character is '^]'. # Request body: demo|{"lookup":"defaultKieSession","commands":[{"insert":{"object":{"org.jbpm.test.Person":{"name":"john","age":25}}}},{"fire-all-rules":""}]} # Server response: { "results" : [ { "key" : "", "value" : 1 } ], "facts" : [ ] } demo|{"lookup":"defaultKieSession","commands":[{"insert":{"object":{"org.jbpm.test.Person":{"name":"mary","age":22}}}},{"fire-all-rules":""}]} { "results" : [ { "key" : "", "value" : 1 } ], "facts" : [ ] } demo|{"lookup":"defaultKieSession","commands":[{"insert":{"object":{"org.jbpm.test.Person":{"name":"james","age":25}}}},{"fire-all-rules":""}]} { "results" : [ { "key" : "", "value" : 1 } ], "facts" : [ ] } exit Connection closed by foreign host. Example server log output 16.3. Extending the KIE Server client with a custom client API KIE Server uses predefined client APIs that you can interact with to use KIE Server services. You can extend the KIE Server client with a custom client API to adapt KIE Server services to your business needs. As an example, this procedure adds a custom client API to KIE Server to accommodate a custom data transport (configured previously for this scenario) that is based on Apache MINA, an open-source Java network-application framework. Procedure Create an empty Maven project and define the following packaging type and dependencies in the pom.xml file for the project: Example pom.xml file in the sample project <packaging>jar</packaging> <properties> <version.org.kie>7.67.0.Final-redhat-00024</version.org.kie> </properties> <dependencies> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-api</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-client</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.drools</groupId> <artifactId>drools-compiler</artifactId> <version>USD{version.org.kie}</version> </dependency> </dependencies> Implement the relevant ServicesClient interface in a Java class in your project, as shown in the following example: Sample RulesMinaServicesClient interface public interface RulesMinaServicesClient extends RuleServicesClient { } A specific interface is required because you must register client implementations based on the interface, and you can have only one implementation for a given interface. For this example, the custom MINA-based data transport uses the Drools extension, so this example RulesMinaServicesClient interface extends the existing RuleServicesClient client API from the Drools extension. 
Implement the RulesMinaServicesClient interface that KIE Server can use to provide the additional client functionality for the new MINA transport, as shown in the following example: Sample implementation of the RulesMinaServicesClient interface public class RulesMinaServicesClientImpl implements RulesMinaServicesClient { private String host; private Integer port; private Marshaller marshaller; public RulesMinaServicesClientImpl(KieServicesConfiguration configuration, ClassLoader classloader) { String[] serverDetails = configuration.getServerUrl().split(":"); this.host = serverDetails[0]; this.port = Integer.parseInt(serverDetails[1]); this.marshaller = MarshallerFactory.getMarshaller(configuration.getExtraJaxbClasses(), MarshallingFormat.JSON, classloader); } public ServiceResponse<String> executeCommands(String id, String payload) { try { String response = sendReceive(id, payload); if (response.startsWith("{")) { return new ServiceResponse<String>(ResponseType.SUCCESS, null, response); } else { return new ServiceResponse<String>(ResponseType.FAILURE, response); } } catch (Exception e) { throw new KieServicesException("Unable to send request to KIE Server", e); } } public ServiceResponse<String> executeCommands(String id, Command<?> cmd) { try { String response = sendReceive(id, marshaller.marshall(cmd)); if (response.startsWith("{")) { return new ServiceResponse<String>(ResponseType.SUCCESS, null, response); } else { return new ServiceResponse<String>(ResponseType.FAILURE, response); } } catch (Exception e) { throw new KieServicesException("Unable to send request to KIE Server", e); } } protected String sendReceive(String containerId, String content) throws Exception { // Flatten the content to be single line: content = content.replaceAll("\\n", ""); Socket minaSocket = null; PrintWriter out = null; BufferedReader in = null; StringBuffer data = new StringBuffer(); try { minaSocket = new Socket(host, port); out = new PrintWriter(minaSocket.getOutputStream(), true); in = new BufferedReader(new InputStreamReader(minaSocket.getInputStream())); // Prepare and send data: out.println(containerId + "|" + content); // Wait for the first line: data.append(in.readLine()); // Continue as long as data is available: while (in.ready()) { data.append(in.readLine()); } return data.toString(); } finally { out.close(); in.close(); minaSocket.close(); } } } This example implementation specifies the following data and behavior: Uses socket-based communication for simplicity Relies on default configurations from the KIE Server client and uses ServerUrl for providing the host and port of the MINA server Specifies JSON as the marshalling format Requires received messages to be JSON objects that start with an open bracket { Uses direct socket communication with a blocking API while waiting for the first line of the response and then reads all lines that are available Does not use stream mode and therefore disconnects the KIE Server session after invoking a command Implement the org.kie.server.client.helper.KieServicesClientBuilder interface in a Java class in your project, as shown in the following example: Sample implementation of the KieServicesClientBuilder interface public class MinaClientBuilderImpl implements KieServicesClientBuilder { 1 public String getImplementedCapability() { 2 return "BRM-Mina"; } public Map<Class<?>, Object> build(KieServicesConfiguration configuration, ClassLoader classLoader) { 3 Map<Class<?>, Object> services = new HashMap<Class<?>, Object>(); 
services.put(RulesMinaServicesClient.class, new RulesMinaServicesClientImpl(configuration, classLoader)); return services; } } 1 Enables you to provide additional client APIs to the generic KIE Server client infrastructure 2 Defines the KIE Server capability (extension) that the client uses 3 Provides a map of the client implementations, where the key is the interface and the value is the fully initialized implementation To make the new client API discoverable for the KIE Server client, create a META-INF/services/org.kie.server.client.helper.KieServicesClientBuilder file in your Maven project and add the fully qualified class name of the KieServicesClientBuilder implementation class within the file. For this example, the file contains the single line org.kie.server.ext.mina.client.MinaClientBuilderImpl . Build your project and copy the resulting JAR file into the ~/kie-server.war/WEB-INF/lib directory of your project. For example, on Red Hat JBoss EAP, the path to this directory is EAP_HOME /standalone/deployments/kie-server.war/WEB-INF/lib . Start KIE Server and deploy the built project to the running KIE Server. You can deploy the project using either the Business Central interface or the KIE Server REST API (a PUT request to http://SERVER:PORT/kie-server/services/rest/server/containers/{containerId} ). After your project is deployed on a running KIE Server, you can start interacting with your new KIE Server client. You use your new client in the same way as the standard KIE Server client, by creating the client configuration and client instance, retrieving the service client by type, and invoking client methods. For this example, you can create a RulesMinaServiceClient client instance and invoke operations on KIE Server through the MINA transport: Sample implementation to create the RulesMinaServiceClient client protected RulesMinaServicesClient buildClient() { KieServicesConfiguration configuration = KieServicesFactory.newRestConfiguration("localhost:9123", null, null); List<String> capabilities = new ArrayList<String>(); // Explicitly add capabilities (the MINA client does not respond to `get-server-info` requests): capabilities.add("BRM-Mina"); configuration.setCapabilities(capabilities); configuration.setMarshallingFormat(MarshallingFormat.JSON); configuration.addJaxbClasses(extraClasses); KieServicesClient kieServicesClient = KieServicesFactory.newKieServicesClient(configuration); RulesMinaServicesClient rulesClient = kieServicesClient.getServicesClient(RulesMinaServicesClient.class); return rulesClient; } Sample configuration to invoke operations on KIE Server through the MINA transport RulesMinaServicesClient rulesClient = buildClient(); List<Command<?>> commands = new ArrayList<Command<?>>(); BatchExecutionCommand executionCommand = commandsFactory.newBatchExecution(commands, "defaultKieSession"); Person person = new Person(); person.setName("mary"); commands.add(commandsFactory.newInsert(person, "person")); commands.add(commandsFactory.newFireAllRules("fired")); ServiceResponse<String> response = rulesClient.executeCommands(containerId, executionCommand); Assert.assertNotNull(response); Assert.assertEquals(ResponseType.SUCCESS, response.getType()); String data = response.getResult(); Marshaller marshaller = MarshallerFactory.getMarshaller(extraClasses, MarshallingFormat.JSON, this.getClass().getClassLoader()); ExecutionResultImpl results = marshaller.unmarshall(data, ExecutionResultImpl.class); Assert.assertNotNull(results); Object personResult = results.getValue("person"); 
Assert.assertTrue(personResult instanceof Person); Assert.assertEquals("mary", ((Person) personResult).getName()); Assert.assertEquals("JBoss Community", ((Person) personResult).getAddress()); Assert.assertEquals(true, ((Person) personResult).isRegistered());
[ "http://SERVER:PORT/kie-server/services/rest/server", "{ \"type\": \"SUCCESS\", \"msg\": \"Kie Server info\", \"result\": { \"kie-server-info\": { \"id\": \"test-kie-server\", \"version\": \"7.67.0.20190818-050814\", \"name\": \"test-kie-server\", \"location\": \"http://localhost:8080/kie-server/services/rest/server\", \"capabilities\": [ \"KieServer\", \"BRM\", \"BRP\", \"DMN\", \"Swagger\" ], \"messages\": [ { \"severity\": \"INFO\", \"timestamp\": { \"java.util.Date\": 1566169865791 }, \"content\": [ \"Server KieServerInfo{serverId='test-kie-server', version='7.67.0.20190818-050814', name='test-kie-server', location='http:/localhost:8080/kie-server/services/rest/server', capabilities=[KieServer, BRM, BRP, DMN, Swagger]', messages=null', mode=DEVELOPMENT}started successfully at Sun Aug 18 23:11:05 UTC 2019\" ] } ], \"mode\": \"DEVELOPMENT\" } } }", "/server/containers/instances/{containerId}/ksession/{ksessionId}", "<packaging>jar</packaging> <properties> <version.org.kie>7.67.0.Final-redhat-00024</version.org.kie> </properties> <dependencies> <dependency> <groupId>org.kie</groupId> <artifactId>kie-api</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie</groupId> <artifactId>kie-internal</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-api</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-services-common</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-services-drools</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-rest-common</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.drools</groupId> <artifactId>drools-core</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.drools</groupId> <artifactId>drools-compiler</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.slf4j</groupId> <artifactId>slf4j-api</artifactId> <version>1.7.25</version> </dependency> </dependencies>", "public class CusomtDroolsKieServerApplicationComponentsService implements KieServerApplicationComponentsService { 1 private static final String OWNER_EXTENSION = \"Drools\"; 2 public Collection<Object> getAppComponents(String extension, SupportedTransports type, Object... 
services) { 3 // Do not accept calls from extensions other than the owner extension: if ( !OWNER_EXTENSION.equals(extension) ) { return Collections.emptyList(); } RulesExecutionService rulesExecutionService = null; 4 KieServerRegistry context = null; for( Object object : services ) { if( RulesExecutionService.class.isAssignableFrom(object.getClass()) ) { rulesExecutionService = (RulesExecutionService) object; continue; } else if( KieServerRegistry.class.isAssignableFrom(object.getClass()) ) { context = (KieServerRegistry) object; continue; } } List<Object> components = new ArrayList<Object>(1); if( SupportedTransports.REST.equals(type) ) { components.add(new CustomResource(rulesExecutionService, context)); 5 } return components; } }", "// Custom base endpoint: @Path(\"server/containers/instances/{containerId}/ksession\") public class CustomResource { private static final Logger logger = LoggerFactory.getLogger(CustomResource.class); private KieCommands commandsFactory = KieServices.Factory.get().getCommands(); private RulesExecutionService rulesExecutionService; private KieServerRegistry registry; public CustomResource() { } public CustomResource(RulesExecutionService rulesExecutionService, KieServerRegistry registry) { this.rulesExecutionService = rulesExecutionService; this.registry = registry; } // Supported HTTP method, path parameters, and data formats: @POST @Path(\"/{ksessionId}\") @Consumes({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON}) @Produces({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON}) public Response insertFireReturn(@Context HttpHeaders headers, @PathParam(\"containerId\") String id, @PathParam(\"ksessionId\") String ksessionId, String cmdPayload) { Variant v = getVariant(headers); String contentType = getContentType(headers); // Marshalling behavior and supported actions: MarshallingFormat format = MarshallingFormat.fromType(contentType); if (format == null) { format = MarshallingFormat.valueOf(contentType); } try { KieContainerInstance kci = registry.getContainer(id); Marshaller marshaller = kci.getMarshaller(format); List<?> listOfFacts = marshaller.unmarshall(cmdPayload, List.class); List<Command<?>> commands = new ArrayList<Command<?>>(); BatchExecutionCommand executionCommand = commandsFactory.newBatchExecution(commands, ksessionId); for (Object fact : listOfFacts) { commands.add(commandsFactory.newInsert(fact, fact.toString())); } commands.add(commandsFactory.newFireAllRules()); commands.add(commandsFactory.newGetObjects()); ExecutionResults results = rulesExecutionService.call(kci, executionCommand); String result = marshaller.marshall(results); logger.debug(\"Returning OK response with content '{}'\", result); return createResponse(result, v, Response.Status.OK); } catch (Exception e) { // If marshalling fails, return the `call-container` response to maintain backward compatibility: String response = \"Execution failed with error : \" + e.getMessage(); logger.debug(\"Returning Failure response with content '{}'\", response); return createResponse(response, v, Response.Status.INTERNAL_SERVER_ERROR); } } }", "[ { \"org.jbpm.test.Person\": { \"name\": \"john\", \"age\": 25 } }, { \"org.jbpm.test.Person\": { \"name\": \"mary\", \"age\": 22 } } ]", "13:37:20,347 INFO [stdout] (default task-24) Hello mary 13:37:20,348 INFO [stdout] (default task-24) Hello john", "<packaging>jar</packaging> <properties> <version.org.kie>7.67.0.Final-redhat-00024</version.org.kie> </properties> <dependencies> <dependency> <groupId>org.kie</groupId> 
<artifactId>kie-api</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie</groupId> <artifactId>kie-internal</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-api</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-services-common</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-services-drools</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.drools</groupId> <artifactId>drools-core</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.drools</groupId> <artifactId>drools-compiler</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.slf4j</groupId> <artifactId>slf4j-api</artifactId> <version>1.7.25</version> </dependency> <dependency> <groupId>org.apache.mina</groupId> <artifactId>mina-core</artifactId> <version>2.1.3</version> </dependency> </dependencies>", "public class MinaDroolsKieServerExtension implements KieServerExtension { private static final Logger logger = LoggerFactory.getLogger(MinaDroolsKieServerExtension.class); public static final String EXTENSION_NAME = \"Drools-Mina\"; private static final Boolean disabled = Boolean.parseBoolean(System.getProperty(\"org.kie.server.drools-mina.ext.disabled\", \"false\")); private static final String MINA_HOST = System.getProperty(\"org.kie.server.drools-mina.ext.port\", \"localhost\"); private static final int MINA_PORT = Integer.parseInt(System.getProperty(\"org.kie.server.drools-mina.ext.port\", \"9123\")); // Taken from dependency on the `Drools` extension: private KieContainerCommandService batchCommandService; // Specific to MINA: private IoAcceptor acceptor; public boolean isActive() { return disabled == false; } public void init(KieServerImpl kieServer, KieServerRegistry registry) { KieServerExtension droolsExtension = registry.getServerExtension(\"Drools\"); if (droolsExtension == null) { logger.warn(\"No Drools extension available, quitting...\"); return; } List<Object> droolsServices = droolsExtension.getServices(); for( Object object : droolsServices ) { // If the given service is null (not configured), continue to the next service: if (object == null) { continue; } if( KieContainerCommandService.class.isAssignableFrom(object.getClass()) ) { batchCommandService = (KieContainerCommandService) object; continue; } } if (batchCommandService != null) { acceptor = new NioSocketAcceptor(); acceptor.getFilterChain().addLast( \"codec\", new ProtocolCodecFilter( new TextLineCodecFactory( Charset.forName( \"UTF-8\" )))); acceptor.setHandler( new TextBasedIoHandlerAdapter(batchCommandService) ); acceptor.getSessionConfig().setReadBufferSize( 2048 ); acceptor.getSessionConfig().setIdleTime( IdleStatus.BOTH_IDLE, 10 ); try { acceptor.bind( new InetSocketAddress(MINA_HOST, MINA_PORT) ); logger.info(\"{} -- Mina server started at {} and port {}\", toString(), MINA_HOST, MINA_PORT); } catch (IOException e) { logger.error(\"Unable to start Mina acceptor due to {}\", e.getMessage(), e); } } } public void destroy(KieServerImpl kieServer, KieServerRegistry registry) { if (acceptor != null) { acceptor.dispose(); acceptor = null; } logger.info(\"{} -- Mina server stopped\", toString()); } public void createContainer(String id, 
KieContainerInstance kieContainerInstance, Map<String, Object> parameters) { // Empty, already handled by the `Drools` extension } public void disposeContainer(String id, KieContainerInstance kieContainerInstance, Map<String, Object> parameters) { // Empty, already handled by the `Drools` extension } public List<Object> getAppComponents(SupportedTransports type) { // Nothing for supported transports (REST or JMS) return Collections.emptyList(); } public <T> T getAppComponents(Class<T> serviceType) { return null; } public String getImplementedCapability() { return \"BRM-Mina\"; } public List<Object> getServices() { return Collections.emptyList(); } public String getExtensionName() { return EXTENSION_NAME; } public Integer getStartOrder() { return 20; } @Override public String toString() { return EXTENSION_NAME + \" KIE Server extension\"; } }", "public interface KieServerExtension { boolean isActive(); void init(KieServerImpl kieServer, KieServerRegistry registry); void destroy(KieServerImpl kieServer, KieServerRegistry registry); void createContainer(String id, KieContainerInstance kieContainerInstance, Map<String, Object> parameters); void disposeContainer(String id, KieContainerInstance kieContainerInstance, Map<String, Object> parameters); List<Object> getAppComponents(SupportedTransports type); <T> T getAppComponents(Class<T> serviceType); String getImplementedCapability(); 1 List<Object> getServices(); String getExtensionName(); 2 Integer getStartOrder(); 3 }", "public class TextBasedIoHandlerAdapter extends IoHandlerAdapter { private static final Logger logger = LoggerFactory.getLogger(TextBasedIoHandlerAdapter.class); private KieContainerCommandService batchCommandService; public TextBasedIoHandlerAdapter(KieContainerCommandService batchCommandService) { this.batchCommandService = batchCommandService; } @Override public void messageReceived( IoSession session, Object message ) throws Exception { String completeMessage = message.toString(); logger.debug(\"Received message '{}'\", completeMessage); if( completeMessage.trim().equalsIgnoreCase(\"quit\") || completeMessage.trim().equalsIgnoreCase(\"exit\") ) { session.close(false); return; } String[] elements = completeMessage.split(\"\\\\|\"); logger.debug(\"Container id {}\", elements[0]); try { ServiceResponse<String> result = batchCommandService.callContainer(elements[0], elements[1], MarshallingFormat.JSON, null); if (result.getType().equals(ServiceResponse.ResponseType.SUCCESS)) { session.write(result.getResult()); logger.debug(\"Successful message written with content '{}'\", result.getResult()); } else { session.write(result.getMsg()); logger.debug(\"Failure message written with content '{}'\", result.getMsg()); } } catch (Exception e) { } } }", "Drools-Mina KIE Server extension -- Mina server started at localhost and port 9123 Drools-Mina KIE Server extension has been successfully registered as server extension", "telnet 127.0.0.1 9123", "Trying 127.0.0.1 Connected to localhost. Escape character is '^]'. 
Request body: demo|{\"lookup\":\"defaultKieSession\",\"commands\":[{\"insert\":{\"object\":{\"org.jbpm.test.Person\":{\"name\":\"john\",\"age\":25}}}},{\"fire-all-rules\":\"\"}]} Server response: { \"results\" : [ { \"key\" : \"\", \"value\" : 1 } ], \"facts\" : [ ] } demo|{\"lookup\":\"defaultKieSession\",\"commands\":[{\"insert\":{\"object\":{\"org.jbpm.test.Person\":{\"name\":\"mary\",\"age\":22}}}},{\"fire-all-rules\":\"\"}]} { \"results\" : [ { \"key\" : \"\", \"value\" : 1 } ], \"facts\" : [ ] } demo|{\"lookup\":\"defaultKieSession\",\"commands\":[{\"insert\":{\"object\":{\"org.jbpm.test.Person\":{\"name\":\"james\",\"age\":25}}}},{\"fire-all-rules\":\"\"}]} { \"results\" : [ { \"key\" : \"\", \"value\" : 1 } ], \"facts\" : [ ] } exit Connection closed by foreign host.", "16:33:40,206 INFO [stdout] (NioProcessor-2) Hello john 16:34:03,877 INFO [stdout] (NioProcessor-2) Hello mary 16:34:19,800 INFO [stdout] (NioProcessor-2) Hello james", "<packaging>jar</packaging> <properties> <version.org.kie>7.67.0.Final-redhat-00024</version.org.kie> </properties> <dependencies> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-api</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-client</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.drools</groupId> <artifactId>drools-compiler</artifactId> <version>USD{version.org.kie}</version> </dependency> </dependencies>", "public interface RulesMinaServicesClient extends RuleServicesClient { }", "public class RulesMinaServicesClientImpl implements RulesMinaServicesClient { private String host; private Integer port; private Marshaller marshaller; public RulesMinaServicesClientImpl(KieServicesConfiguration configuration, ClassLoader classloader) { String[] serverDetails = configuration.getServerUrl().split(\":\"); this.host = serverDetails[0]; this.port = Integer.parseInt(serverDetails[1]); this.marshaller = MarshallerFactory.getMarshaller(configuration.getExtraJaxbClasses(), MarshallingFormat.JSON, classloader); } public ServiceResponse<String> executeCommands(String id, String payload) { try { String response = sendReceive(id, payload); if (response.startsWith(\"{\")) { return new ServiceResponse<String>(ResponseType.SUCCESS, null, response); } else { return new ServiceResponse<String>(ResponseType.FAILURE, response); } } catch (Exception e) { throw new KieServicesException(\"Unable to send request to KIE Server\", e); } } public ServiceResponse<String> executeCommands(String id, Command<?> cmd) { try { String response = sendReceive(id, marshaller.marshall(cmd)); if (response.startsWith(\"{\")) { return new ServiceResponse<String>(ResponseType.SUCCESS, null, response); } else { return new ServiceResponse<String>(ResponseType.FAILURE, response); } } catch (Exception e) { throw new KieServicesException(\"Unable to send request to KIE Server\", e); } } protected String sendReceive(String containerId, String content) throws Exception { // Flatten the content to be single line: content = content.replaceAll(\"\\\\n\", \"\"); Socket minaSocket = null; PrintWriter out = null; BufferedReader in = null; StringBuffer data = new StringBuffer(); try { minaSocket = new Socket(host, port); out = new PrintWriter(minaSocket.getOutputStream(), true); in = new BufferedReader(new InputStreamReader(minaSocket.getInputStream())); // Prepare and send data: out.println(containerId + \"|\" + content); // Wait for the first line: 
data.append(in.readLine()); // Continue as long as data is available: while (in.ready()) { data.append(in.readLine()); } return data.toString(); } finally { out.close(); in.close(); minaSocket.close(); } } }", "public class MinaClientBuilderImpl implements KieServicesClientBuilder { 1 public String getImplementedCapability() { 2 return \"BRM-Mina\"; } public Map<Class<?>, Object> build(KieServicesConfiguration configuration, ClassLoader classLoader) { 3 Map<Class<?>, Object> services = new HashMap<Class<?>, Object>(); services.put(RulesMinaServicesClient.class, new RulesMinaServicesClientImpl(configuration, classLoader)); return services; } }", "protected RulesMinaServicesClient buildClient() { KieServicesConfiguration configuration = KieServicesFactory.newRestConfiguration(\"localhost:9123\", null, null); List<String> capabilities = new ArrayList<String>(); // Explicitly add capabilities (the MINA client does not respond to `get-server-info` requests): capabilities.add(\"BRM-Mina\"); configuration.setCapabilities(capabilities); configuration.setMarshallingFormat(MarshallingFormat.JSON); configuration.addJaxbClasses(extraClasses); KieServicesClient kieServicesClient = KieServicesFactory.newKieServicesClient(configuration); RulesMinaServicesClient rulesClient = kieServicesClient.getServicesClient(RulesMinaServicesClient.class); return rulesClient; }", "RulesMinaServicesClient rulesClient = buildClient(); List<Command<?>> commands = new ArrayList<Command<?>>(); BatchExecutionCommand executionCommand = commandsFactory.newBatchExecution(commands, \"defaultKieSession\"); Person person = new Person(); person.setName(\"mary\"); commands.add(commandsFactory.newInsert(person, \"person\")); commands.add(commandsFactory.newFireAllRules(\"fired\")); ServiceResponse<String> response = rulesClient.executeCommands(containerId, executionCommand); Assert.assertNotNull(response); Assert.assertEquals(ResponseType.SUCCESS, response.getType()); String data = response.getResult(); Marshaller marshaller = MarshallerFactory.getMarshaller(extraClasses, MarshallingFormat.JSON, this.getClass().getClassLoader()); ExecutionResultImpl results = marshaller.unmarshall(data, ExecutionResultImpl.class); Assert.assertNotNull(results); Object personResult = results.getValue(\"person\"); Assert.assertTrue(personResult instanceof Person); Assert.assertEquals(\"mary\", ((Person) personResult).getName()); Assert.assertEquals(\"JBoss Community\", ((Person) personResult).getAddress()); Assert.assertEquals(true, ((Person) personResult).isRegistered());" ]
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/managing_red_hat_decision_manager_and_kie_server_settings/kie-server-extensions-con_execution-server
Chapter 6. Configure storage for OpenShift Container Platform services
Chapter 6. Configure storage for OpenShift Container Platform services You can use OpenShift Data Foundation to provide storage for OpenShift Container Platform services such as image registry, monitoring, and logging. The process for configuring storage for these services depends on the infrastructure used in your OpenShift Data Foundation deployment. Warning Always ensure that you have plenty of storage capacity for these services. If the storage for these critical services runs out of space, the cluster becomes inoperable and very difficult to recover. Red Hat recommends configuring shorter curation and retention intervals for these services. See Configuring the Curator schedule and the Modifying retention time for Prometheus metrics data sub section of Configuring persistent storage in the OpenShift Container Platform documentation for details. If you do run out of storage space for these services, contact Red Hat Customer Support. 6.1. Configuring Image Registry to use OpenShift Data Foundation OpenShift Container Platform provides a built in Container Image Registry which runs as a standard workload on the cluster. A registry is typically used as a publication target for images built on the cluster as well as a source of images for workloads running on the cluster. Warning This process does not migrate data from an existing image registry to the new image registry. If you already have container images in your existing registry, back up your registry before you complete this process, and re-register your images when this process is complete. Prerequisites You have administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. In OpenShift Web Console, click Operators Installed Operators to view installed operators. Image Registry Operator is installed and running in the openshift-image-registry namespace. In OpenShift Web Console, click Administration Cluster Settings Cluster Operators to view cluster operators. A storage class with provisioner openshift-storage.cephfs.csi.ceph.com is available. In OpenShift Web Console, click Storage StorageClasses to view available storage classes. Procedure Create a Persistent Volume Claim for the Image Registry to use. In the OpenShift Web Console, click Storage Persistent Volume Claims . Set the Project to openshift-image-registry . Click Create Persistent Volume Claim . From the list of available storage classes retrieved above, specify the Storage Class with the provisioner openshift-storage.cephfs.csi.ceph.com . Specify the Persistent Volume Claim Name , for example, ocs4registry . Specify an Access Mode of Shared Access (RWX) . Specify a Size of at least 100 GB. Click Create . Wait until the status of the new Persistent Volume Claim is listed as Bound . Configure the cluster's Image Registry to use the new Persistent Volume Claim. Click Administration Custom Resource Definitions . Click the Config custom resource definition associated with the imageregistry.operator.openshift.io group. Click the Instances tab. Beside the cluster instance, click the Action Menu (...) Edit Config . Add the new Persistent Volume Claim as persistent storage for the Image Registry. Add the following under spec: , replacing the existing storage: section if necessary. For example: Click Save . Verify that the new configuration is being used. Click Workloads Pods . Set the Project to openshift-image-registry . 
Verify that the new image-registry-* pod appears with a status of Running , and that the previous image-registry-* pod terminates. Click the new image-registry-* pod to view pod details. Scroll down to Volumes and verify that the registry-storage volume has a Type that matches your new Persistent Volume Claim, for example, ocs4registry . 6.2. Configuring monitoring to use OpenShift Data Foundation OpenShift Data Foundation provides a monitoring stack that comprises Prometheus and Alert Manager. Follow the instructions in this section to configure OpenShift Data Foundation as storage for the monitoring stack. Important Monitoring will not function if it runs out of storage space. Always ensure that you have plenty of storage capacity for monitoring. Red Hat recommends configuring a short retention interval for this service. See the Modifying retention time for Prometheus metrics data section of the Monitoring guide in the OpenShift Container Platform documentation for details. Prerequisites You have administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. In the OpenShift Web Console, click Operators Installed Operators to view installed operators. Monitoring Operator is installed and running in the openshift-monitoring namespace. In the OpenShift Web Console, click Administration Cluster Settings Cluster Operators to view cluster operators. A storage class with provisioner openshift-storage.rbd.csi.ceph.com is available. In the OpenShift Web Console, click Storage StorageClasses to view available storage classes. Procedure In the OpenShift Web Console, go to Workloads Config Maps . Set the Project dropdown to openshift-monitoring . Click Create Config Map . Define a new cluster-monitoring-config Config Map using the following example. Replace the content in angle brackets ( < , > ) with your own values, for example, retention: 24h or storage: 40Gi . Replace the storageClassName with the storageclass that uses the provisioner openshift-storage.rbd.csi.ceph.com . In the example given below the name of the storageclass is ocs-storagecluster-ceph-rbd . Example cluster-monitoring-config Config Map Click Create to save and create the Config Map. Verification steps Verify that the Persistent Volume Claims are bound to the pods. Go to Storage Persistent Volume Claims . Set the Project dropdown to openshift-monitoring . Verify that 5 Persistent Volume Claims are visible with a state of Bound , attached to three alertmanager-main-* pods, and two prometheus-k8s-* pods. Figure 6.1. Monitoring storage created and bound Verify that the new alertmanager-main-* pods appear with a state of Running . Go to Workloads Pods . Click the new alertmanager-main-* pods to view the pod details. Scroll down to Volumes and verify that the ocs-alertmanager-claim volume has a Type that matches one of your new Persistent Volume Claims, for example, ocs-alertmanager-claim-alertmanager-main-0 . Figure 6.2. Persistent Volume Claims attached to alertmanager-main-* pod Verify that the new prometheus-k8s-* pods appear with a state of Running . Click the new prometheus-k8s-* pods to view the pod details. Scroll down to Volumes and verify that the ocs-prometheus-claim volume has a Type that matches one of your new Persistent Volume Claims, for example, ocs-prometheus-claim-prometheus-k8s-0 . Figure 6.3. Persistent Volume Claims attached to prometheus-k8s-* pod 6.3.
Cluster logging for OpenShift Data Foundation You can deploy cluster logging to aggregate logs for a range of OpenShift Container Platform services. For information about how to deploy cluster logging, see Deploying cluster logging . Upon initial OpenShift Container Platform deployment, OpenShift Data Foundation is not configured by default and the OpenShift Container Platform cluster will solely rely on default storage available from the nodes. You can edit the default configuration of OpenShift logging (Elasticsearch) so that it is backed by OpenShift Data Foundation. Important Always ensure that you have plenty of storage capacity for these services. If you run out of storage space for these critical services, the logging application becomes inoperable and very difficult to recover. Red Hat recommends configuring shorter curation and retention intervals for these services. See Cluster logging curator in the OpenShift Container Platform documentation for details. If you run out of storage space for these services, contact Red Hat Customer Support. 6.3.1. Configuring persistent storage You can configure a persistent storage class and size for the Elasticsearch cluster using the storage class name and size parameters. The Cluster Logging Operator creates a Persistent Volume Claim for each data node in the Elasticsearch cluster based on these parameters. For example: This example specifies that each data node in the cluster will be bound to a Persistent Volume Claim that requests 200GiB of ocs-storagecluster-ceph-rbd storage. Each primary shard is backed by a single replica. A copy of each shard is replicated across the nodes and is always available; because of the single redundancy policy, the copy can be recovered as long as at least two data nodes exist. For information about Elasticsearch replication policies, see Elasticsearch replication policy in About deploying and configuring cluster logging . Note Omission of the storage block will result in a deployment backed by default storage. For example: For more information, see Configuring cluster logging . 6.3.2. Configuring cluster logging to use OpenShift Data Foundation Follow the instructions in this section to configure OpenShift Data Foundation as storage for OpenShift cluster logging. Note You can obtain all the logs when you configure logging for the first time in OpenShift Data Foundation. However, after you uninstall and reinstall logging, the old logs are removed and only the new logs are processed. Prerequisites You have administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. Cluster logging Operator is installed and running in the openshift-logging namespace. Procedure Click Administration Custom Resource Definitions from the left pane of the OpenShift Web Console. On the Custom Resource Definitions page, click ClusterLogging . On the Custom Resource Definition Overview page, select View Instances from the Actions menu or click the Instances tab. On the Cluster Logging page, click Create Cluster Logging . You might have to refresh the page to load the data. In the YAML, replace the storageClassName with the storageclass that uses the provisioner openshift-storage.rbd.csi.ceph.com .
In the example given below the name of the storageclass is ocs-storagecluster-ceph-rbd : If you have tainted the OpenShift Data Foundation nodes, you must add toleration to enable scheduling of the daemonset pods for logging. Click Save . Verification steps Verify that the Persistent Volume Claims are bound to the elasticsearch pods. Go to Storage Persistent Volume Claims . Set the Project dropdown to openshift-logging . Verify that Persistent Volume Claims are visible with a state of Bound , attached to elasticsearch- * pods. Figure 6.4. Cluster logging created and bound Verify that the new cluster logging is being used. Click Workload Pods . Set the Project to openshift-logging . Verify that the new elasticsearch- * pods appear with a state of Running . Click the new elasticsearch- * pod to view pod details. Scroll down to Volumes and verify that the elasticsearch volume has a Type that matches your new Persistent Volume Claim, for example, elasticsearch-elasticsearch-cdm-9r624biv-3 . Click the Persistent Volume Claim name and verify the storage class name in the PersistentVolumeClaim Overview page. Note Make sure to use a shorter curator time to avoid PV full scenario on PVs attached to Elasticsearch pods. You can configure Curator to delete Elasticsearch data based on retention settings. It is recommended that you set the following default index data retention of 5 days as a default. For more details, see Curation of Elasticsearch Data . Note To uninstall the cluster logging backed by Persistent Volume Claim, use the procedure removing the cluster logging operator from OpenShift Data Foundation in the uninstall chapter of the respective deployment guide.
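The verification steps in this chapter use the OpenShift Web Console. If you prefer the command line, a quick check along the following lines confirms that the registry, monitoring, and logging claims are bound and are using the expected storage classes. This is a minimal sketch: the claim name ocs4registry follows the example used in section 6.1 and may differ in your cluster.

# List the claims created for each service and confirm that they are Bound.
oc get pvc -n openshift-image-registry
oc get pvc -n openshift-monitoring
oc get pvc -n openshift-logging

# Inspect a single claim, for example the registry claim from section 6.1,
# and print its storage class and phase.
oc get pvc ocs4registry -n openshift-image-registry -o jsonpath='{.spec.storageClassName}{" "}{.status.phase}{"\n"}'

# Confirm that the OpenShift Data Foundation storage classes are available.
oc get storageclass | grep openshift-storage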
[ "storage: pvc: claim: <new-pvc-name>", "storage: pvc: claim: ocs4registry", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: retention: <time to retain monitoring files, e.g. 24h> volumeClaimTemplate: metadata: name: ocs-prometheus-claim spec: storageClassName: ocs-storagecluster-ceph-rbd resources: requests: storage: <size of claim, e.g. 40Gi> alertmanagerMain: volumeClaimTemplate: metadata: name: ocs-alertmanager-claim spec: storageClassName: ocs-storagecluster-ceph-rbd resources: requests: storage: <size of claim, e.g. 40Gi>", "spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: storageClassName: \"ocs-storagecluster-ceph-rbd\" size: \"200G\"", "spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: {}", "apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: \"openshift-logging\" spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: storageClassName: ocs-storagecluster-ceph-rbd size: 200G # Change as per your requirement redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" kibana: replicas: 1 curation: type: \"curator\" curator: schedule: \"30 3 * * *\" collection: logs: type: \"fluentd\" fluentd: {}", "spec: [...] collection: logs: fluentd: tolerations: - effect: NoSchedule key: node.ocs.openshift.io/storage value: 'true' type: fluentd", "config.yaml: | openshift-storage: delete: days: 5" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/configure_storage_for_openshift_container_platform_services
7.113. libhbaapi
7.113. libhbaapi 7.113.1. RHEA-2013:0416 - libhbaapi enhancement update Updated libhbaapi packages that add one enhancement are now available for Red Hat Enterprise Linux 6. The libhbaapi library is the Host Bus Adapter (HBA) API library for Fibre Channel and Storage Area Network (SAN) resources. It contains a unified API that programmers can use to access, query, observe, and modify SAN and Fibre Channel services. Enhancement BZ#862386 This update converts libhbaapi code to a merged upstream repository at Open-FCoE.org. Consequently, the libhbaapi packages are no longer compiled from different sources, thus making maintenance and further development easier. Users of libhbaapi are not required to upgrade to these updated packages as the change introduced by them is purely formal and does not affect functionality.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/libhbaapi
Chapter 3. CredentialsRequest [cloudcredential.openshift.io/v1]
Chapter 3. CredentialsRequest [cloudcredential.openshift.io/v1] Description CredentialsRequest is the Schema for the credentialsrequests API Type object Required spec 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object CredentialsRequestSpec defines the desired state of CredentialsRequest status object CredentialsRequestStatus defines the observed state of CredentialsRequest 3.1.1. .spec Description CredentialsRequestSpec defines the desired state of CredentialsRequest Type object Required secretRef Property Type Description cloudTokenPath string cloudTokenPath is the path where the Kubernetes ServiceAccount token (JSON Web Token) is mounted on the deployment for the workload requesting a credentials secret. The presence of this field in combination with fields such as spec.providerSpec.stsIAMRoleARN indicate that CCO should broker creation of a credentials secret containing fields necessary for token based authentication methods such as with the AWS Secure Token Service (STS). cloudTokenPath may also be used to specify the azure_federated_token_file path used in Azure configuration secrets generated by ccoctl. Defaults to "/var/run/secrets/openshift/serviceaccount/token". providerSpec `` ProviderSpec contains the cloud provider specific credentials specification. secretRef object SecretRef points to the secret where the credentials should be stored once generated. serviceAccountNames array (string) ServiceAccountNames contains a list of ServiceAccounts that will use permissions associated with this CredentialsRequest. This is not used by CCO, but the information is needed for being able to properly set up access control in the cloud provider when the ServiceAccounts are used as part of the cloud credentials flow. 3.1.2. .spec.secretRef Description SecretRef points to the secret where the credentials should be stored once generated. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future. kind string Kind of the referent. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 3.1.3. .status Description CredentialsRequestStatus defines the observed state of CredentialsRequest Type object Required lastSyncGeneration provisioned Property Type Description conditions array Conditions includes detailed status for the CredentialsRequest conditions[] object CredentialsRequestCondition contains details for any of the conditions on a CredentialsRequest object lastSyncCloudCredsSecretResourceVersion string LastSyncCloudCredsSecretResourceVersion is the resource version of the cloud credentials secret resource when the credentials request resource was last synced. Used to determine if the cloud credentials have been updated since the last sync. lastSyncGeneration integer LastSyncGeneration is the generation of the credentials request resource that was last synced. Used to determine if the object has changed and requires a sync. lastSyncTimestamp string LastSyncTimestamp is the time that the credentials were last synced. providerStatus `` ProviderStatus contains cloud provider specific status. provisioned boolean Provisioned is true once the credentials have been initially provisioned. 3.1.4. .status.conditions Description Conditions includes detailed status for the CredentialsRequest Type array 3.1.5. .status.conditions[] Description CredentialsRequestCondition contains details for any of the conditions on a CredentialsRequest object Type object Required status type Property Type Description lastProbeTime string LastProbeTime is the last time we probed the condition lastTransitionTime string LastTransitionTime is the last time the condition transitioned from one status to another. message string Message is a human-readable message indicating details about the last transition reason string Reason is a unique, one-word, CamelCase reason for the condition's last transition status string Status is the status of the condition type string Type is the specific type of the condition 3.2. 
API endpoints The following API endpoints are available: /apis/cloudcredential.openshift.io/v1/credentialsrequests GET : list objects of kind CredentialsRequest /apis/cloudcredential.openshift.io/v1/namespaces/{namespace}/credentialsrequests DELETE : delete collection of CredentialsRequest GET : list objects of kind CredentialsRequest POST : create a CredentialsRequest /apis/cloudcredential.openshift.io/v1/namespaces/{namespace}/credentialsrequests/{name} DELETE : delete a CredentialsRequest GET : read the specified CredentialsRequest PATCH : partially update the specified CredentialsRequest PUT : replace the specified CredentialsRequest /apis/cloudcredential.openshift.io/v1/namespaces/{namespace}/credentialsrequests/{name}/status GET : read status of the specified CredentialsRequest PATCH : partially update status of the specified CredentialsRequest PUT : replace status of the specified CredentialsRequest 3.2.1. /apis/cloudcredential.openshift.io/v1/credentialsrequests HTTP method GET Description list objects of kind CredentialsRequest Table 3.1. HTTP responses HTTP code Reponse body 200 - OK CredentialsRequestList schema 401 - Unauthorized Empty 3.2.2. /apis/cloudcredential.openshift.io/v1/namespaces/{namespace}/credentialsrequests HTTP method DELETE Description delete collection of CredentialsRequest Table 3.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind CredentialsRequest Table 3.3. HTTP responses HTTP code Reponse body 200 - OK CredentialsRequestList schema 401 - Unauthorized Empty HTTP method POST Description create a CredentialsRequest Table 3.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.5. Body parameters Parameter Type Description body CredentialsRequest schema Table 3.6. HTTP responses HTTP code Reponse body 200 - OK CredentialsRequest schema 201 - Created CredentialsRequest schema 202 - Accepted CredentialsRequest schema 401 - Unauthorized Empty 3.2.3. /apis/cloudcredential.openshift.io/v1/namespaces/{namespace}/credentialsrequests/{name} Table 3.7. Global path parameters Parameter Type Description name string name of the CredentialsRequest HTTP method DELETE Description delete a CredentialsRequest Table 3.8. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 3.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified CredentialsRequest Table 3.10. HTTP responses HTTP code Reponse body 200 - OK CredentialsRequest schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified CredentialsRequest Table 3.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.12. HTTP responses HTTP code Reponse body 200 - OK CredentialsRequest schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified CredentialsRequest Table 3.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.14. Body parameters Parameter Type Description body CredentialsRequest schema Table 3.15. 
HTTP responses HTTP code Reponse body 200 - OK CredentialsRequest schema 201 - Created CredentialsRequest schema 401 - Unauthorized Empty 3.2.4. /apis/cloudcredential.openshift.io/v1/namespaces/{namespace}/credentialsrequests/{name}/status Table 3.16. Global path parameters Parameter Type Description name string name of the CredentialsRequest HTTP method GET Description read status of the specified CredentialsRequest Table 3.17. HTTP responses HTTP code Reponse body 200 - OK CredentialsRequest schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified CredentialsRequest Table 3.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.19. HTTP responses HTTP code Reponse body 200 - OK CredentialsRequest schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified CredentialsRequest Table 3.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.21. Body parameters Parameter Type Description body CredentialsRequest schema Table 3.22. 
HTTP responses HTTP code Response body 200 - OK CredentialsRequest schema 201 - Created CredentialsRequest schema 401 - Unauthorized Empty
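The tables above describe the CredentialsRequest schema and endpoints; a minimal manifest can make the field layout more concrete. The following example is illustrative only: the names, the namespace, and the AWS-style providerSpec contents are placeholders chosen for this sketch, not values defined by the API reference itself.

apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: example-component                      # hypothetical name
  namespace: openshift-cloud-credential-operator
spec:
  # secretRef is required: where the generated credentials Secret is written.
  secretRef:
    name: example-component-creds
    namespace: example-namespace
  # providerSpec is cloud provider specific; an AWS-style request is shown here.
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AWSProviderSpec
    statementEntries:
      - effect: Allow
        action:
          - s3:ListBucket
        resource: "*"
  # ServiceAccounts that will use the resulting credentials.
  serviceAccountNames:
    - example-component-sa

Such a manifest corresponds to the POST endpoint listed in section 3.2.2, and existing objects can be listed with oc get credentialsrequests -A .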
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/security_apis/credentialsrequest-cloudcredential-openshift-io-v1
Chapter 24. Installation configuration
Chapter 24. Installation configuration 24.1. Customizing nodes OpenShift Container Platform supports both cluster-wide and per-machine configuration via Ignition, which allows arbitrary partitioning and file content changes to the operating system. In general, if a configuration file is documented in Red Hat Enterprise Linux (RHEL), then modifying it via Ignition is supported. There are two ways to deploy machine config changes: Creating machine configs that are included in manifest files to start up a cluster during openshift-install . Creating machine configs that are passed to running OpenShift Container Platform nodes via the Machine Config Operator. Additionally, modifying the reference config, such as the Ignition config that is passed to coreos-installer when installing bare-metal nodes allows per-machine configuration. These changes are currently not visible to the Machine Config Operator. The following sections describe features that you might want to configure on your nodes in this way. 24.1.1. Creating machine configs with Butane Machine configs are used to configure control plane and worker machines by instructing machines how to create users and file systems, set up the network, install systemd units, and more. Because modifying machine configs can be difficult, you can use Butane configs to create machine configs for you, thereby making node configuration much easier. 24.1.1.1. About Butane Butane is a command-line utility that OpenShift Container Platform uses to provide convenient, short-hand syntax for writing machine configs, as well as for performing additional validation of machine configs. The format of the Butane config file that Butane accepts is defined in the OpenShift Butane config spec . 24.1.1.2. Installing Butane You can install the Butane tool ( butane ) to create OpenShift Container Platform machine configs from a command-line interface. You can install butane on Linux, Windows, or macOS by downloading the corresponding binary file. Tip Butane releases are backwards-compatible with older releases and with the Fedora CoreOS Config Transpiler (FCCT). Procedure Navigate to the Butane image download page at https://mirror.openshift.com/pub/openshift-v4/clients/butane/ . Get the butane binary: For the newest version of Butane, save the latest butane image to your current directory: USD curl https://mirror.openshift.com/pub/openshift-v4/clients/butane/latest/butane --output butane Optional: For a specific type of architecture you are installing Butane on, such as aarch64 or ppc64le, indicate the appropriate URL. For example: USD curl https://mirror.openshift.com/pub/openshift-v4/clients/butane/latest/butane-aarch64 --output butane Make the downloaded binary file executable: USD chmod +x butane Move the butane binary file to a directory on your PATH . To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification steps You can now use the Butane tool by running the butane command: USD butane <butane_file> 24.1.1.3. Creating a MachineConfig object by using Butane You can use Butane to produce a MachineConfig object so that you can configure worker or control plane nodes at installation time or via the Machine Config Operator. Prerequisites You have installed the butane utility. Procedure Create a Butane config file. 
The following example creates a file named 99-worker-custom.bu that configures the system console to show kernel debug messages and specifies custom settings for the chrony time service: variant: openshift version: 4.11.0 metadata: name: 99-worker-custom labels: machineconfiguration.openshift.io/role: worker openshift: kernel_arguments: - loglevel=7 storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony Note The 99-worker-custom.bu file is set to create a machine config for worker nodes. To deploy on control plane nodes, change the role from worker to master . To do both, you could repeat the whole procedure using different file names for the two types of deployments. Create a MachineConfig object by giving Butane the file that you created in the step: USD butane 99-worker-custom.bu -o ./99-worker-custom.yaml A MachineConfig object YAML file is created for you to finish configuring your machines. Save the Butane config in case you need to update the MachineConfig object in the future. If the cluster is not running yet, generate manifest files and add the MachineConfig object YAML file to the openshift directory. If the cluster is already running, apply the file as follows: USD oc create -f 99-worker-custom.yaml Additional resources Adding kernel modules to nodes Encrypting and mirroring disks during installation 24.1.2. Adding day-1 kernel arguments Although it is often preferable to modify kernel arguments as a day-2 activity, you might want to add kernel arguments to all master or worker nodes during initial cluster installation. Here are some reasons you might want to add kernel arguments during cluster installation so they take effect before the systems first boot up: You need to do some low-level network configuration before the systems start. You want to disable a feature, such as SELinux, so it has no impact on the systems when they first come up. Warning Disabling SELinux on RHCOS in production is not supported. Once SELinux has been disabled on a node, it must be re-provisioned before re-inclusion in a production cluster. To add kernel arguments to master or worker nodes, you can create a MachineConfig object and inject that object into the set of manifest files used by Ignition during cluster setup. For a listing of arguments you can pass to a RHEL 8 kernel at boot time, see Kernel.org kernel parameters . It is best to only add kernel arguments with this procedure if they are needed to complete the initial OpenShift Container Platform installation. Procedure Change to the directory that contains the installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> Decide if you want to add kernel arguments to worker or control plane nodes. In the openshift directory, create a file (for example, 99-openshift-machineconfig-master-kargs.yaml ) to define a MachineConfig object to add the kernel settings. This example adds a loglevel=7 kernel argument to control plane nodes: USD cat << EOF > 99-openshift-machineconfig-master-kargs.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-openshift-machineconfig-master-kargs spec: kernelArguments: - loglevel=7 EOF You can change master to worker to add kernel arguments to worker nodes instead. 
Create a separate YAML file to add to both master and worker nodes. You can now continue on to create the cluster. 24.1.3. Adding kernel modules to nodes For most common hardware, the Linux kernel includes the device driver modules needed to use that hardware when the computer starts up. For some hardware, however, modules are not available in Linux. Therefore, you must find a way to provide those modules to each host computer. This procedure describes how to do that for nodes in an OpenShift Container Platform cluster. When a kernel module is first deployed by following these instructions, the module is made available for the current kernel. If a new kernel is installed, the kmods-via-containers software will rebuild and deploy the module so a compatible version of that module is available with the new kernel. The way that this feature is able to keep the module up to date on each node is by: Adding a systemd service to each node that starts at boot time to detect if a new kernel has been installed and If a new kernel is detected, the service rebuilds the module and installs it to the kernel For information on the software needed for this procedure, see the kmods-via-containers github site. A few important issues to keep in mind: This procedure is Technology Preview. Software tools and examples are not yet available in official RPM form and can only be obtained for now from unofficial github.com sites noted in the procedure. Third-party kernel modules you might add through these procedures are not supported by Red Hat. In this procedure, the software needed to build your kernel modules is deployed in a RHEL 8 container. Keep in mind that modules are rebuilt automatically on each node when that node gets a new kernel. For that reason, each node needs access to a yum repository that contains the kernel and related packages needed to rebuild the module. That content is best provided with a valid RHEL subscription. 24.1.3.1. Building and testing the kernel module container Before deploying kernel modules to your OpenShift Container Platform cluster, you can test the process on a separate RHEL system. Gather the kernel module's source code, the KVC framework, and the kmod-via-containers software. Then build and test the module. To do that on a RHEL 8 system, do the following: Procedure Register a RHEL 8 system: # subscription-manager register Attach a subscription to the RHEL 8 system: # subscription-manager attach --auto Install software that is required to build the software and container: # yum install podman make git -y Clone the kmod-via-containers repository: Create a folder for the repository: USD mkdir kmods; cd kmods Clone the repository: USD git clone https://github.com/kmods-via-containers/kmods-via-containers Install a KVC framework instance on your RHEL 8 build host to test the module. This adds a kmods-via-container systemd service and loads it: Change to the kmod-via-containers directory: USD cd kmods-via-containers/ Install the KVC framework instance: USD sudo make install Reload the systemd manager configuration: USD sudo systemctl daemon-reload Get the kernel module source code. The source code might be used to build a third-party module that you do not have control over, but is supplied by others. You will need content similar to the content shown in the kvc-simple-kmod example that can be cloned to your system as follows: USD cd .. 
; git clone https://github.com/kmods-via-containers/kvc-simple-kmod Edit the configuration file, simple-kmod.conf in this example, and change the name of the Dockerfile to Dockerfile.rhel : Change to the kvc-simple-kmod directory: USD cd kvc-simple-kmod Review the configuration file and confirm that it references Dockerfile.rhel : USD cat simple-kmod.conf Example simple-kmod.conf KMOD_CONTAINER_BUILD_CONTEXT="https://github.com/kmods-via-containers/kvc-simple-kmod.git" KMOD_CONTAINER_BUILD_FILE=Dockerfile.rhel KMOD_SOFTWARE_VERSION=dd1a7d4 KMOD_NAMES="simple-kmod simple-procfs-kmod" Create an instance of kmods-via-containers@.service for your kernel module, simple-kmod in this example: USD sudo make install Enable the kmods-via-containers@simple-kmod.service instance: USD sudo kmods-via-containers build simple-kmod USD(uname -r) Enable and start the systemd service: USD sudo systemctl enable kmods-via-containers@simple-kmod.service --now Review the service status: USD sudo systemctl status kmods-via-containers@simple-kmod.service Example output ● kmods-via-containers@simple-kmod.service - Kmods Via Containers - simple-kmod Loaded: loaded (/etc/systemd/system/kmods-via-containers@.service; enabled; vendor preset: disabled) Active: active (exited) since Sun 2020-01-12 23:49:49 EST; 5s ago... To confirm that the kernel modules are loaded, use the lsmod command to list the modules: USD lsmod | grep simple_ Example output simple_procfs_kmod 16384 0 simple_kmod 16384 0 Optional. Use other methods to check that the simple-kmod example is working: Look for a "Hello world" message in the kernel ring buffer with dmesg : USD dmesg | grep 'Hello world' Example output [ 6420.761332] Hello world from simple_kmod. Check the value of simple-procfs-kmod in /proc : USD sudo cat /proc/simple-procfs-kmod Example output simple-procfs-kmod number = 0 Run the spkut command to get more information from the module: USD sudo spkut 44 Example output KVC: wrapper simple-kmod for 4.18.0-147.3.1.el8_1.x86_64 Running userspace wrapper using the kernel module container... + podman run -i --rm --privileged simple-kmod-dd1a7d4:4.18.0-147.3.1.el8_1.x86_64 spkut 44 simple-procfs-kmod number = 0 simple-procfs-kmod number = 44 Going forward, when the system boots this service will check if a new kernel is running. If there is a new kernel, the service builds a new version of the kernel module and then loads it. If the module is already built, it will just load it. 24.1.3.2. Provisioning a kernel module to OpenShift Container Platform Depending on whether or not you must have the kernel module in place when the OpenShift Container Platform cluster first boots, you can set up the kernel modules to be deployed in one of two ways: Provision kernel modules at cluster install time (day-1) : You can create the content as a MachineConfig object and provide it to openshift-install by including it with a set of manifest files. Provision kernel modules via Machine Config Operator (day-2) : If you can wait until the cluster is up and running to add your kernel module, you can deploy the kernel module software via the Machine Config Operator (MCO). In either case, each node needs to be able to get the kernel packages and related software packages at the time that a new kernel is detected. There are a few ways you can set up each node to be able to obtain that content. Provide RHEL entitlements to each node. Get RHEL entitlements from an existing RHEL host, from the /etc/pki/entitlement directory and copy them to the same location as the other files you provide when you build your Ignition config. Inside the Dockerfile, add pointers to a yum repository containing the kernel and other packages.
This must include new kernel packages as they are needed to match newly installed kernels. 24.1.3.2.1. Provision kernel modules via a MachineConfig object By packaging kernel module software with a MachineConfig object, you can deliver that software to worker or control plane nodes at installation time or via the Machine Config Operator. Procedure Register a RHEL 8 system: # subscription-manager register Attach a subscription to the RHEL 8 system: # subscription-manager attach --auto Install software needed to build the software: # yum install podman make git -y Create a directory to host the kernel module and tooling: USD mkdir kmods; cd kmods Get the kmods-via-containers software: Clone the kmods-via-containers repository: USD git clone https://github.com/kmods-via-containers/kmods-via-containers Clone the kvc-simple-kmod repository: USD git clone https://github.com/kmods-via-containers/kvc-simple-kmod Get your module software. In this example, kvc-simple-kmod is used. Create a fakeroot directory and populate it with files that you want to deliver via Ignition, using the repositories cloned earlier: Create the directory: USD FAKEROOT=USD(mktemp -d) Change to the kmod-via-containers directory: USD cd kmods-via-containers Install the KVC framework instance: USD make install DESTDIR=USD{FAKEROOT}/usr/local CONFDIR=USD{FAKEROOT}/etc/ Change to the kvc-simple-kmod directory: USD cd ../kvc-simple-kmod Create the instance: USD make install DESTDIR=USD{FAKEROOT}/usr/local CONFDIR=USD{FAKEROOT}/etc/ Clone the fakeroot directory, replacing any symbolic links with copies of their targets, by running the following command: USD cd .. && rm -rf kmod-tree && cp -Lpr USD{FAKEROOT} kmod-tree Create a Butane config file, 99-simple-kmod.bu , that embeds the kernel module tree and enables the systemd service. Note See "Creating machine configs with Butane" for information about Butane. variant: openshift version: 4.11.0 metadata: name: 99-simple-kmod labels: machineconfiguration.openshift.io/role: worker 1 storage: trees: - local: kmod-tree systemd: units: - name: kmods-via-containers@simple-kmod.service enabled: true 1 To deploy on control plane nodes, change worker to master . To deploy on both control plane and worker nodes, perform the remainder of these instructions once for each node type. Use Butane to generate a machine config YAML file, 99-simple-kmod.yaml , containing the files and configuration to be delivered: USD butane 99-simple-kmod.bu --files-dir . -o 99-simple-kmod.yaml If the cluster is not up yet, generate manifest files and add this file to the openshift directory. If the cluster is already running, apply the file as follows: USD oc create -f 99-simple-kmod.yaml Your nodes will start the kmods-via-containers@simple-kmod.service service and the kernel modules will be loaded. To confirm that the kernel modules are loaded, you can log in to a node (using oc debug node/<openshift-node> , then chroot /host ). To list the modules, use the lsmod command: USD lsmod | grep simple_ Example output simple_procfs_kmod 16384 0 simple_kmod 16384 0 24.1.4. Encrypting and mirroring disks during installation During an OpenShift Container Platform installation, you can enable boot disk encryption and mirroring on the cluster nodes. 24.1.4.1. About disk encryption You can enable encryption for the boot disks on the control plane and compute nodes at installation time. OpenShift Container Platform supports the Trusted Platform Module (TPM) v2 and Tang encryption modes. TPM v2: This is the preferred mode.
TPM v2 stores passphrases in a secure cryptoprocessor contained within a server. You can use this mode to prevent the boot disk data on a cluster node from being decrypted if the disk is removed from the server. Tang: Tang and Clevis are server and client components that enable network-bound disk encryption (NBDE). You can bind the boot disk data on your cluster nodes to one or more Tang servers. This prevents the data from being decrypted unless the nodes are on a secure network where the Tang servers can be accessed. Clevis is an automated decryption framework that is used to implement the decryption on the client side. Important The use of the Tang encryption mode to encrypt your disks is only supported for bare metal and vSphere installations on user-provisioned infrastructure. Note On versions of Red Hat Enterprise Linux CoreOS (RHCOS), disk encryption was configured by specifying /etc/clevis.json in the Ignition config. That file is not supported in clusters created with OpenShift Container Platform 4.7 or above, and disk encryption should be configured by using the following procedure. When the TPM v2 or Tang encryption modes are enabled, the RHCOS boot disks are encrypted using the LUKS2 format. This feature: Is available for installer-provisioned infrastructure and user-provisioned infrastructure deployments Is supported on Red Hat Enterprise Linux CoreOS (RHCOS) systems only Sets up disk encryption during the manifest installation phase so all data written to disk, from first boot forward, is encrypted Requires no user intervention for providing passphrases Uses AES-256-XTS encryption, or AES-256-CBC if FIPS mode is enabled 24.1.4.1.1. Configuring an encryption threshold In OpenShift Container Platform, you can specify a requirement for more than one Tang server. You can also configure the TPM v2 and Tang encryption modes simultaneously, so that the boot disk data can be decrypted only if the TPM secure cryptoprocessor is present and the Tang servers can be accessed over a secure network. You can use the threshold attribute in your Butane configuration to define the minimum number of TPM v2 and Tang encryption conditions that must be met for decryption to occur. The threshold is met when the stated value is reached through any combination of the declared conditions. For example, the threshold value of 2 in the following configuration can be reached by accessing the two Tang servers, or by accessing the TPM secure cryptoprocessor and one of the Tang servers: Example Butane configuration for disk encryption variant: openshift version: 4.11.0 metadata: name: worker-storage labels: machineconfiguration.openshift.io/role: worker boot_device: layout: x86_64 1 luks: tpm2: true 2 tang: 3 - url: http://tang1.example.com:7500 thumbprint: jwGN5tRFK-kF6pIX89ssF3khxxX - url: http://tang2.example.com:7500 thumbprint: VCJsvZFjBSIHSldw78rOrq7h2ZF threshold: 2 4 openshift: fips: true 1 Set this field to the instruction set architecture of the cluster nodes. Some examples include, x86_64 , aarch64 , or ppc64le . 2 Include this field if you want to use a Trusted Platform Module (TPM) to encrypt the root file system. 3 Include this section if you want to use one or more Tang servers. 4 Specify the minimum number of TPM v2 and Tang encryption conditions that must be met for decryption to occur. Important The default threshold value is 1 . If you include multiple encryption conditions in your configuration but do not specify a threshold, decryption can occur if any of the conditions are met. 
Note If you require both TPM v2 and Tang for decryption, the value of the threshold attribute must equal the total number of stated Tang servers plus one. If the threshold value is lower, it is possible for the threshold to be reached by using one of the encryption modes only. For example, if tpm2 is set to true and you specify two Tang servers, a threshold of 2 can be met by accessing the two Tang servers even if the TPM secure cryptoprocessor is not available. 24.1.4.2. About disk mirroring During OpenShift Container Platform installation on control plane and worker nodes, you can enable mirroring of the boot and other disks to two or more redundant storage devices. A node continues to function after storage device failure as long as one device remains available. Mirroring does not support replacement of a failed disk. To restore the mirror to a pristine, non-degraded state, reprovision the node. Note For user-provisioned infrastructure deployments, mirroring is available only on RHCOS systems. Support for mirroring is available on x86_64 nodes booted with BIOS or UEFI and on ppc64le nodes. 24.1.4.3. Configuring disk encryption and mirroring You can enable and configure encryption and mirroring during an OpenShift Container Platform installation. Prerequisites You have downloaded the OpenShift Container Platform installation program on your installation node. You installed Butane on your installation node. Note Butane is a command-line utility that OpenShift Container Platform uses to provide convenient, short-hand syntax for writing machine configs, as well as for performing additional validation of machine configs. For more information, see the Creating machine configs with Butane section. You have access to a Red Hat Enterprise Linux (RHEL) 8 machine that can be used to generate a thumbprint of the Tang exchange key. Procedure If you want to use TPM v2 to encrypt your cluster, check to see if TPM v2 encryption needs to be enabled in the BIOS on each node. This is required on most Dell systems. Check the manual for your computer. If you want to use Tang to encrypt your cluster, follow these preparatory steps: Set up a Tang server or access an existing one. See Network-bound disk encryption for instructions. Install the clevis package on a RHEL 8 machine, if it is not already installed: USD sudo yum install clevis On the RHEL 8 machine, run the following command to generate a thumbprint of the exchange key. Replace http://tang.example.com:7500 with the URL of your Tang server: USD clevis-encrypt-tang '{"url":"http://tang.example.com:7500"}' < /dev/null > /dev/null 1 1 In this example, tangd.socket is listening on port 7500 on the Tang server. Note The clevis-encrypt-tang command is used in this step only to generate a thumbprint of the exchange key. No data is being passed to the command for encryption at this point, so /dev/null is provided as an input instead of plain text. The encrypted output is also sent to /dev/null , because it is not required for this procedure. Example output The advertisement contains the following signing keys: PLjNyRdGw03zlRoGjQYMahSZGu9 1 1 The thumbprint of the exchange key. When the Do you wish to trust these keys? [ynYN] prompt displays, type Y . Note RHEL 8 provides Clevis version 15, which uses the SHA-1 hash algorithm to generate thumbprints. Some other distributions provide Clevis version 17 or later, which use the SHA-256 hash algorithm for thumbprints. 
You must use a Clevis version that uses SHA-1 to create the thumbprint, to prevent Clevis binding issues when you install Red Hat Enterprise Linux CoreOS (RHCOS) on your OpenShift Container Platform cluster nodes. If the nodes are configured with static IP addressing, run coreos-installer iso customize --dest-karg-append or use the coreos-installer --append-karg option when installing RHCOS nodes to set the IP address of the installed system. Append the ip= and other arguments needed for your network. Important Some methods for configuring static IPs do not affect the initramfs after the first boot and will not work with Tang encryption. These include the coreos-installer --copy-network option, the coreos-installer iso customize --network-keyfile option, and the coreos-installer pxe customize --network-keyfile option, as well as adding ip= arguments to the kernel command line of the live ISO or PXE image during installation. Incorrect static IP configuration causes the second boot of the node to fail. On your installation node, change to the directory that contains the installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 Replace <installation_directory> with the path to the directory that you want to store the installation files in. Create a Butane config that configures disk encryption, mirroring, or both. For example, to configure storage for compute nodes, create a USDHOME/clusterconfig/worker-storage.bu file. Butane config example for a boot device variant: openshift version: 4.11.0 metadata: name: worker-storage 1 labels: machineconfiguration.openshift.io/role: worker 2 boot_device: layout: x86_64 3 luks: 4 tpm2: true 5 tang: 6 - url: http://tang.example.com:7500 7 thumbprint: PLjNyRdGw03zlRoGjQYMahSZGu9 8 threshold: 1 9 mirror: 10 devices: 11 - /dev/sda - /dev/sdb openshift: fips: true 12 1 2 For control plane configurations, replace worker with master in both of these locations. 3 Set this field to the instruction set architecture of the cluster nodes. Some examples include, x86_64 , aarch64 , or ppc64le . 4 Include this section if you want to encrypt the root file system. For more details, see the About disk encryption section. 5 Include this field if you want to use a Trusted Platform Module (TPM) to encrypt the root file system. 6 Include this section if you want to use one or more Tang servers. 7 Specify the URL of a Tang server. In this example, tangd.socket is listening on port 7500 on the Tang server. 8 Specify the exchange key thumbprint, which was generated in a preceding step. 9 Specify the minimum number of TPM v2 and Tang encryption conditions that must be met for decryption to occur. The default value is 1 . For more information on this topic, see the Configuring an encryption threshold section. 10 Include this section if you want to mirror the boot disk. For more details, see About disk mirroring . 11 List all disk devices that should be included in the boot disk mirror, including the disk that RHCOS will be installed onto. 12 Include this directive to enable FIPS mode on your cluster. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . 
If you are configuring nodes to use both disk encryption and mirroring, both features must be configured in the same Butane configuration file. In addition, if you are configuring disk encryption on a node with FIPS mode enabled, you must include the fips directive in the same Butane configuration file, even if FIPS mode is also enabled in a separate manifest. Create a control plane or compute node manifest from the corresponding Butane configuration file and save it to the <installation_directory>/openshift directory. For example, to create a manifest for the compute nodes, run the following command: USD butane USDHOME/clusterconfig/worker-storage.bu -o <installation_directory>/openshift/99-worker-storage.yaml Repeat this step for each node type that requires disk encryption or mirroring. Save the Butane configuration file in case you need to update the manifests in the future. Continue with the remainder of the OpenShift Container Platform installation. Tip You can monitor the console log on the RHCOS nodes during installation for error messages relating to disk encryption or mirroring. Important If you configure additional data partitions, they will not be encrypted unless encryption is explicitly requested. Verification After installing OpenShift Container Platform, you can verify if boot disk encryption or mirroring is enabled on the cluster nodes. From the installation host, access a cluster node by using a debug pod: Start a debug pod for the node. The following example starts a debug pod for the compute-1 node: USD oc debug node/compute-1 Set /host as the root directory within the debug shell. The debug pod mounts the root file system of the node in /host within the pod. By changing the root directory to /host , you can run binaries contained in the executable paths on the node: # chroot /host Note OpenShift Container Platform cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes using SSH is not recommended. However, if the OpenShift Container Platform API is not available, or kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead. If you configured boot disk encryption, verify if it is enabled: From the debug shell, review the status of the root mapping on the node: # cryptsetup status root Example output /dev/mapper/root is active and is in use. type: LUKS2 1 cipher: aes-xts-plain64 2 keysize: 512 bits key location: keyring device: /dev/sda4 3 sector size: 512 offset: 32768 sectors size: 15683456 sectors mode: read/write 1 The encryption format. When the TPM v2 or Tang encryption modes are enabled, the RHCOS boot disks are encrypted using the LUKS2 format. 2 The encryption algorithm used to encrypt the LUKS2 volume. The aes-cbc-essiv:sha256 cipher is used if FIPS mode is enabled. 3 The device that contains the encrypted LUKS2 volume. If mirroring is enabled, the value will represent a software mirror device, for example /dev/md126 . List the Clevis plugins that are bound to the encrypted device: # clevis luks list -d /dev/sda4 1 1 Specify the device that is listed in the device field in the output of the preceding step. Example output 1: sss '{"t":1,"pins":{"tang":[{"url":"http://tang.example.com:7500"}]}}' 1 1 In the example output, the Tang plugin is used by the Shamir's Secret Sharing (SSS) Clevis plugin for the /dev/sda4 device. 
If you configured mirroring, verify if it is enabled: From the debug shell, list the software RAID devices on the node: # cat /proc/mdstat Example output Personalities : [raid1] md126 : active raid1 sdb3[1] sda3[0] 1 393152 blocks super 1.0 [2/2] [UU] md127 : active raid1 sda4[0] sdb4[1] 2 51869632 blocks super 1.2 [2/2] [UU] unused devices: <none> 1 In the example, the /dev/md126 software RAID mirror device uses the /dev/sda3 and /dev/sdb3 disk devices on the cluster node. 2 In the example, the /dev/md127 software RAID mirror device uses the /dev/sda4 and /dev/sdb4 disk devices on the cluster node. Review the details of each of the software RAID devices listed in the output of the preceding command. The following example lists the details of the /dev/md126 device: # mdadm --detail /dev/md126 Example output /dev/md126: Version : 1.0 Creation Time : Wed Jul 7 11:07:36 2021 Raid Level : raid1 1 Array Size : 393152 (383.94 MiB 402.59 MB) Used Dev Size : 393152 (383.94 MiB 402.59 MB) Raid Devices : 2 Total Devices : 2 Persistence : Superblock is persistent Update Time : Wed Jul 7 11:18:24 2021 State : clean 2 Active Devices : 2 3 Working Devices : 2 4 Failed Devices : 0 5 Spare Devices : 0 Consistency Policy : resync Name : any:md-boot 6 UUID : ccfa3801:c520e0b5:2bee2755:69043055 Events : 19 Number Major Minor RaidDevice State 0 252 3 0 active sync /dev/sda3 7 1 252 19 1 active sync /dev/sdb3 8 1 Specifies the RAID level of the device. raid1 indicates RAID 1 disk mirroring. 2 Specifies the state of the RAID device. 3 4 States the number of underlying disk devices that are active and working. 5 States the number of underlying disk devices that are in a failed state. 6 The name of the software RAID device. 7 8 Provides information about the underlying disk devices that are used by the software RAID device. 
List the file systems that are mounted on the software RAID devices: # mount | grep /dev/md Example output /dev/md127 on / type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /etc type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /usr type xfs (ro,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /sysroot type xfs (ro,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/containers/storage/overlay type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/1 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/2 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/3 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/4 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/5 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md126 on /boot type ext4 (rw,relatime,seclabel) In the example output, the /boot file system is mounted on the /dev/md126 software RAID device and the root file system is mounted on /dev/md127 . Repeat the verification steps for each OpenShift Container Platform node type. Additional resources For more information about the TPM v2 and Tang encryption modes, see Configuring automated unlocking of encrypted volumes using policy-based decryption . 24.1.4.4. Configuring a RAID-enabled data volume You can enable software RAID partitioning to provide an external data volume. OpenShift Container Platform supports RAID 0, RAID 1, RAID 4, RAID 5, RAID 6, and RAID 10 for data protection and fault tolerance. See "About disk mirroring" for more details. Prerequisites You have downloaded the OpenShift Container Platform installation program on your installation node. You have installed Butane on your installation node. Note Butane is a command-line utility that OpenShift Container Platform uses to provide convenient, short-hand syntax for writing machine configs, as well as for performing additional validation of machine configs. For more information, see the Creating machine configs with Butane section. Procedure Create a Butane config that configures a data volume by using software RAID. 
To configure a data volume with RAID 1 on the same disks that are used for a mirrored boot disk, create a USDHOME/clusterconfig/raid1-storage.bu file, for example: RAID 1 on mirrored boot disk variant: openshift version: 4.11.0 metadata: name: raid1-storage labels: machineconfiguration.openshift.io/role: worker boot_device: mirror: devices: - /dev/sda - /dev/sdb storage: disks: - device: /dev/sda partitions: - label: root-1 size_mib: 25000 1 - label: var-1 - device: /dev/sdb partitions: - label: root-2 size_mib: 25000 2 - label: var-2 raid: - name: md-var level: raid1 devices: - /dev/disk/by-partlabel/var-1 - /dev/disk/by-partlabel/var-2 filesystems: - device: /dev/md/md-var path: /var format: xfs wipe_filesystem: true with_mount_unit: true 1 2 When adding a data partition to the boot disk, a minimum value of 25000 mebibytes is recommended. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. To configure a data volume with RAID 1 on secondary disks, create a USDHOME/clusterconfig/raid1-alt-storage.bu file, for example: RAID 1 on secondary disks variant: openshift version: 4.11.0 metadata: name: raid1-alt-storage labels: machineconfiguration.openshift.io/role: worker storage: disks: - device: /dev/sdc wipe_table: true partitions: - label: data-1 - device: /dev/sdd wipe_table: true partitions: - label: data-2 raid: - name: md-var-lib-containers level: raid1 devices: - /dev/disk/by-partlabel/data-1 - /dev/disk/by-partlabel/data-2 filesystems: - device: /dev/md/md-var-lib-containers path: /var/lib/containers format: xfs wipe_filesystem: true with_mount_unit: true Create a RAID manifest from the Butane config you created in the step and save it to the <installation_directory>/openshift directory. For example, to create a manifest for the compute nodes, run the following command: USD butane USDHOME/clusterconfig/<butane_config>.bu -o <installation_directory>/openshift/<manifest_name>.yaml 1 1 Replace <butane_config> and <manifest_name> with the file names from the step. For example, raid1-alt-storage.bu and raid1-alt-storage.yaml for secondary disks. Save the Butane config in case you need to update the manifest in the future. Continue with the remainder of the OpenShift Container Platform installation. 24.1.5. Configuring chrony time service You can set the time server and related settings used by the chrony time service ( chronyd ) by modifying the contents of the chrony.conf file and passing those contents to your nodes as a machine config. Procedure Create a Butane config including the contents of the chrony.conf file. For example, to configure chrony on worker nodes, create a 99-worker-chrony.bu file. Note See "Creating machine configs with Butane" for information about Butane. variant: openshift version: 4.11.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 storage: files: - path: /etc/chrony.conf mode: 0644 3 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst 4 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony 1 2 On control plane nodes, substitute master for worker in both of these locations. 3 Specify an octal value mode for the mode field in the machine config file. After creating the file and applying the changes, the mode is converted to a decimal value. 
You can check the YAML file with the command oc get mc <mc-name> -o yaml . 4 Specify any valid, reachable time source, such as the one provided by your DHCP server. Alternatively, you can specify any of the following NTP servers: 1.rhel.pool.ntp.org , 2.rhel.pool.ntp.org , or 3.rhel.pool.ntp.org . Use Butane to generate a MachineConfig object file, 99-worker-chrony.yaml , containing the configuration to be delivered to the nodes: USD butane 99-worker-chrony.bu -o 99-worker-chrony.yaml Apply the configurations in one of two ways: If the cluster is not running yet, after you generate manifest files, add the MachineConfig object file to the <installation_directory>/openshift directory, and then continue to create the cluster. If the cluster is already running, apply the file: USD oc apply -f ./99-worker-chrony.yaml 24.1.6. Additional resources For information on Butane, see Creating machine configs with Butane . For information on FIPS support, see Support for FIPS cryptography . 24.2. Configuring your firewall If you use a firewall, you must configure it so that OpenShift Container Platform can access the sites that it requires to function. You must always grant access to some sites, and you grant access to more if you use Red Hat Insights, the Telemetry service, a cloud to host your cluster, and certain build strategies. 24.2.1. Configuring your firewall for OpenShift Container Platform Before you install OpenShift Container Platform, you must configure your firewall to grant access to the sites that OpenShift Container Platform requires. There are no special configuration considerations for services running on only controller nodes compared to worker nodes. Note If your environment has a dedicated load balancer in front of your OpenShift Container Platform cluster, review the allowlists between your firewall and load balancer to prevent unwanted network restrictions to your cluster. Procedure Allowlist the following registry URLs: URL Port Function registry.redhat.io 443 Provides core container images access.redhat.com [1] 443 Hosts all the container images that are stored on the Red Hat Ecosystem Catalog, including core container images. quay.io 443 Provides core container images cdn.quay.io 443 Provides core container images cdn01.quay.io 443 Provides core container images cdn02.quay.io 443 Provides core container images cdn03.quay.io 443 Provides core container images sso.redhat.com 443 The https://console.redhat.com site uses authentication from sso.redhat.com . In a firewall environment, ensure that the access.redhat.com resource is on the allowlist. This resource hosts a signature store that a container client requires for verifying images when pulling them from registry.access.redhat.com . You can use the wildcards *.quay.io and *.openshiftapps.com instead of cdn.quay.io and cdn0[1-3].quay.io in your allowlist. When you add a site, such as quay.io , to your allowlist, do not add a wildcard entry, such as *.quay.io , to your denylist. In most cases, image registries use a content delivery network (CDN) to serve images. If a firewall blocks access, image downloads are denied when the initial download request redirects to a hostname such as cdn01.quay.io . Allowlist any site that provides resources for a language or framework that your builds require.
If you do not disable Telemetry, you must grant access to the following URLs to access Red Hat Insights: URL Port Function cert-api.access.redhat.com 443 Required for Telemetry api.access.redhat.com 443 Required for Telemetry infogw.api.openshift.com 443 Required for Telemetry console.redhat.com 443 Required for Telemetry and for insights-operator If you use Alibaba Cloud, Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) to host your cluster, you must grant access to the URLs that provide the cloud provider API and DNS for that cloud: Cloud URL Port Function Alibaba *.aliyuncs.com 443 Required to access Alibaba Cloud services and resources. Review the Alibaba endpoints_config.go file to determine the exact endpoints to allow for the regions that you use. AWS *.amazonaws.com Alternatively, if you choose to not use a wildcard for AWS APIs, you must allowlist the following URLs: 443 Required to access AWS services and resources. Review the AWS Service Endpoints in the AWS documentation to determine the exact endpoints to allow for the regions that you use. ec2.amazonaws.com 443 Used to install and manage clusters in an AWS environment. events.amazonaws.com 443 Used to install and manage clusters in an AWS environment. iam.amazonaws.com 443 Used to install and manage clusters in an AWS environment. route53.amazonaws.com 443 Used to install and manage clusters in an AWS environment. s3.amazonaws.com 443 Used to install and manage clusters in an AWS environment. s3.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. s3.dualstack.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. sts.amazonaws.com 443 Used to install and manage clusters in an AWS environment. sts.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. tagging.us-east-1.amazonaws.com 443 Used to install and manage clusters in an AWS environment. This endpoint is always us-east-1 , regardless of the region the cluster is deployed in. ec2.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. elasticloadbalancing.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. servicequotas.<aws_region>.amazonaws.com 443 Required. Used to confirm quotas for deploying the service. tagging.<aws_region>.amazonaws.com 443 Allows the assignment of metadata about AWS resources in the form of tags. GCP *.googleapis.com 443 Required to access GCP services and resources. Review Cloud Endpoints in the GCP documentation to determine the endpoints to allow for your APIs. accounts.google.com 443 Required to access your GCP account. Azure management.azure.com 443 Required to access Azure services and resources. Review the Azure REST API reference in the Azure documentation to determine the endpoints to allow for your APIs. *.blob.core.windows.net 443 Required to download Ignition files. login.microsoftonline.com 443 Required to access Azure services and resources. Review the Azure REST API reference in the Azure documentation to determine the endpoints to allow for your APIs. Allowlist the following URLs: URL Port Function mirror.openshift.com 443 Required to access mirrored installation content and images. This site is also a source of release image signatures, although the Cluster Version Operator needs only a single functioning source. 
storage.googleapis.com/openshift-release 443 A source of release image signatures, although the Cluster Version Operator needs only a single functioning source. *.apps.<cluster_name>.<base_domain> 443 Required to access the default cluster routes unless you set an ingress wildcard during installation. quayio-production-s3.s3.amazonaws.com 443 Required to access Quay image content in AWS. api.openshift.com 443 Required both for your cluster token and to check if updates are available for the cluster. rhcos.mirror.openshift.com 443 Required to download Red Hat Enterprise Linux CoreOS (RHCOS) images. console.redhat.com 443 Required for your cluster token. sso.redhat.com 443 The https://console.redhat.com site uses authentication from sso.redhat.com . Operators require route access to perform health checks. Specifically, the authentication and web console Operators connect to two routes to verify that the routes work. If you are the cluster administrator and do not want to allow *.apps.<cluster_name>.<base_domain> , then allow these routes: oauth-openshift.apps.<cluster_name>.<base_domain> console-openshift-console.apps.<cluster_name>.<base_domain> , or the hostname that is specified in the spec.route.hostname field of the consoles.operator/cluster object if the field is not empty. Allowlist the following URLs for optional third-party content: URL Port Function registry.connect.redhat.com 443 Required for all third-party images and certified operators. rhc4tp-prod-z8cxf-image-registry-us-east-1-evenkyleffocxqvofrk.s3.dualstack.us-east-1.amazonaws.com 443 Provides access to container images hosted on registry.connect.redhat.com oso-rhc4tp-docker-registry.s3-us-west-2.amazonaws.com 443 Required for Sonatype Nexus, F5 Big IP operators. If you use a default Red Hat Network Time Protocol (NTP) server, allow the following URLs: 1.rhel.pool.ntp.org 2.rhel.pool.ntp.org 3.rhel.pool.ntp.org Note If you do not use a default Red Hat NTP server, verify the NTP server for your platform and allow it in your firewall.
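Before you begin the installation, it can be useful to confirm from a host behind the same firewall that the required endpoints are reachable on port 443. The following is a minimal sketch rather than an official check: the host list is only an illustrative subset of the tables above, and the loop verifies only that an outbound HTTPS connection can be established, not that every allowlist entry for your cloud provider, Telemetry, or NTP configuration is complete.

#!/usr/bin/env bash
# Minimal sketch: spot-check HTTPS reachability of a few required hosts.
# Extend the list with the cloud provider, Telemetry, and cluster-specific
# endpoints from the tables above.
hosts="registry.redhat.io quay.io cdn01.quay.io mirror.openshift.com api.openshift.com sso.redhat.com"
for host in ${hosts}; do
  # Exit status 0 means an HTTP response was received; any status code is acceptable here.
  if curl -s -o /dev/null --connect-timeout 5 "https://${host}"; then
    echo "reachable  ${host}"
  else
    echo "blocked?   ${host} - review your firewall allowlist"
  fi
done

A successful connection only means that the firewall allows outbound TLS to that host; authentication and pull secret requirements still apply when the installation program and cluster contact these services.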
[ "curl https://mirror.openshift.com/pub/openshift-v4/clients/butane/latest/butane --output butane", "curl https://mirror.openshift.com/pub/openshift-v4/clients/butane/latest/butane-aarch64 --output butane", "chmod +x butane", "echo USDPATH", "butane <butane_file>", "variant: openshift version: 4.11.0 metadata: name: 99-worker-custom labels: machineconfiguration.openshift.io/role: worker openshift: kernel_arguments: - loglevel=7 storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony", "butane 99-worker-custom.bu -o ./99-worker-custom.yaml", "oc create -f 99-worker-custom.yaml", "./openshift-install create manifests --dir <installation_directory>", "cat << EOF > 99-openshift-machineconfig-master-kargs.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-openshift-machineconfig-master-kargs spec: kernelArguments: - loglevel=7 EOF", "subscription-manager register", "subscription-manager attach --auto", "yum install podman make git -y", "mkdir kmods; cd kmods", "git clone https://github.com/kmods-via-containers/kmods-via-containers", "cd kmods-via-containers/", "sudo make install", "sudo systemctl daemon-reload", "cd .. ; git clone https://github.com/kmods-via-containers/kvc-simple-kmod", "cd kvc-simple-kmod", "cat simple-kmod.conf", "KMOD_CONTAINER_BUILD_CONTEXT=\"https://github.com/kmods-via-containers/kvc-simple-kmod.git\" KMOD_CONTAINER_BUILD_FILE=Dockerfile.rhel KMOD_SOFTWARE_VERSION=dd1a7d4 KMOD_NAMES=\"simple-kmod simple-procfs-kmod\"", "sudo make install", "sudo kmods-via-containers build simple-kmod USD(uname -r)", "sudo systemctl enable [email protected] --now", "sudo systemctl status [email protected]", "● [email protected] - Kmods Via Containers - simple-kmod Loaded: loaded (/etc/systemd/system/[email protected]; enabled; vendor preset: disabled) Active: active (exited) since Sun 2020-01-12 23:49:49 EST; 5s ago", "lsmod | grep simple_", "simple_procfs_kmod 16384 0 simple_kmod 16384 0", "dmesg | grep 'Hello world'", "[ 6420.761332] Hello world from simple_kmod.", "sudo cat /proc/simple-procfs-kmod", "simple-procfs-kmod number = 0", "sudo spkut 44", "KVC: wrapper simple-kmod for 4.18.0-147.3.1.el8_1.x86_64 Running userspace wrapper using the kernel module container + podman run -i --rm --privileged simple-kmod-dd1a7d4:4.18.0-147.3.1.el8_1.x86_64 spkut 44 simple-procfs-kmod number = 0 simple-procfs-kmod number = 44", "subscription-manager register", "subscription-manager attach --auto", "yum install podman make git -y", "mkdir kmods; cd kmods", "git clone https://github.com/kmods-via-containers/kmods-via-containers", "git clone https://github.com/kmods-via-containers/kvc-simple-kmod", "FAKEROOT=USD(mktemp -d)", "cd kmods-via-containers", "make install DESTDIR=USD{FAKEROOT}/usr/local CONFDIR=USD{FAKEROOT}/etc/", "cd ../kvc-simple-kmod", "make install DESTDIR=USD{FAKEROOT}/usr/local CONFDIR=USD{FAKEROOT}/etc/", "cd .. && rm -rf kmod-tree && cp -Lpr USD{FAKEROOT} kmod-tree", "variant: openshift version: 4.11.0 metadata: name: 99-simple-kmod labels: machineconfiguration.openshift.io/role: worker 1 storage: trees: - local: kmod-tree systemd: units: - name: [email protected] enabled: true", "butane 99-simple-kmod.bu --files-dir . 
-o 99-simple-kmod.yaml", "oc create -f 99-simple-kmod.yaml", "lsmod | grep simple_", "simple_procfs_kmod 16384 0 simple_kmod 16384 0", "variant: openshift version: 4.11.0 metadata: name: worker-storage labels: machineconfiguration.openshift.io/role: worker boot_device: layout: x86_64 1 luks: tpm2: true 2 tang: 3 - url: http://tang1.example.com:7500 thumbprint: jwGN5tRFK-kF6pIX89ssF3khxxX - url: http://tang2.example.com:7500 thumbprint: VCJsvZFjBSIHSldw78rOrq7h2ZF threshold: 2 4 openshift: fips: true", "sudo yum install clevis", "clevis-encrypt-tang '{\"url\":\"http://tang.example.com:7500\"}' < /dev/null > /dev/null 1", "The advertisement contains the following signing keys: PLjNyRdGw03zlRoGjQYMahSZGu9 1", "./openshift-install create manifests --dir <installation_directory> 1", "variant: openshift version: 4.11.0 metadata: name: worker-storage 1 labels: machineconfiguration.openshift.io/role: worker 2 boot_device: layout: x86_64 3 luks: 4 tpm2: true 5 tang: 6 - url: http://tang.example.com:7500 7 thumbprint: PLjNyRdGw03zlRoGjQYMahSZGu9 8 threshold: 1 9 mirror: 10 devices: 11 - /dev/sda - /dev/sdb openshift: fips: true 12", "butane USDHOME/clusterconfig/worker-storage.bu -o <installation_directory>/openshift/99-worker-storage.yaml", "oc debug node/compute-1", "chroot /host", "cryptsetup status root", "/dev/mapper/root is active and is in use. type: LUKS2 1 cipher: aes-xts-plain64 2 keysize: 512 bits key location: keyring device: /dev/sda4 3 sector size: 512 offset: 32768 sectors size: 15683456 sectors mode: read/write", "clevis luks list -d /dev/sda4 1", "1: sss '{\"t\":1,\"pins\":{\"tang\":[{\"url\":\"http://tang.example.com:7500\"}]}}' 1", "cat /proc/mdstat", "Personalities : [raid1] md126 : active raid1 sdb3[1] sda3[0] 1 393152 blocks super 1.0 [2/2] [UU] md127 : active raid1 sda4[0] sdb4[1] 2 51869632 blocks super 1.2 [2/2] [UU] unused devices: <none>", "mdadm --detail /dev/md126", "/dev/md126: Version : 1.0 Creation Time : Wed Jul 7 11:07:36 2021 Raid Level : raid1 1 Array Size : 393152 (383.94 MiB 402.59 MB) Used Dev Size : 393152 (383.94 MiB 402.59 MB) Raid Devices : 2 Total Devices : 2 Persistence : Superblock is persistent Update Time : Wed Jul 7 11:18:24 2021 State : clean 2 Active Devices : 2 3 Working Devices : 2 4 Failed Devices : 0 5 Spare Devices : 0 Consistency Policy : resync Name : any:md-boot 6 UUID : ccfa3801:c520e0b5:2bee2755:69043055 Events : 19 Number Major Minor RaidDevice State 0 252 3 0 active sync /dev/sda3 7 1 252 19 1 active sync /dev/sdb3 8", "mount | grep /dev/md", "/dev/md127 on / type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /etc type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /usr type xfs (ro,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /sysroot type xfs (ro,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/containers/storage/overlay type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/1 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/2 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on 
/var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/3 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/4 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/5 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md126 on /boot type ext4 (rw,relatime,seclabel)", "variant: openshift version: 4.11.0 metadata: name: raid1-storage labels: machineconfiguration.openshift.io/role: worker boot_device: mirror: devices: - /dev/sda - /dev/sdb storage: disks: - device: /dev/sda partitions: - label: root-1 size_mib: 25000 1 - label: var-1 - device: /dev/sdb partitions: - label: root-2 size_mib: 25000 2 - label: var-2 raid: - name: md-var level: raid1 devices: - /dev/disk/by-partlabel/var-1 - /dev/disk/by-partlabel/var-2 filesystems: - device: /dev/md/md-var path: /var format: xfs wipe_filesystem: true with_mount_unit: true", "variant: openshift version: 4.11.0 metadata: name: raid1-alt-storage labels: machineconfiguration.openshift.io/role: worker storage: disks: - device: /dev/sdc wipe_table: true partitions: - label: data-1 - device: /dev/sdd wipe_table: true partitions: - label: data-2 raid: - name: md-var-lib-containers level: raid1 devices: - /dev/disk/by-partlabel/data-1 - /dev/disk/by-partlabel/data-2 filesystems: - device: /dev/md/md-var-lib-containers path: /var/lib/containers format: xfs wipe_filesystem: true with_mount_unit: true", "butane USDHOME/clusterconfig/<butane_config>.bu -o <installation_directory>/openshift/<manifest_name>.yaml 1", "variant: openshift version: 4.11.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 storage: files: - path: /etc/chrony.conf mode: 0644 3 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst 4 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony", "butane 99-worker-chrony.bu -o 99-worker-chrony.yaml", "oc apply -f ./99-worker-chrony.yaml" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/installing/installation-configuration
Chapter 5. Compliance Operator
Chapter 5. Compliance Operator 5.1. Compliance Operator overview The OpenShift Container Platform Compliance Operator assists users by automating the inspection of numerous technical implementations and compares those against certain aspects of industry standards, benchmarks, and baselines; the Compliance Operator is not an auditor. In order to be compliant or certified under these various standards, you need to engage an authorized auditor such as a Qualified Security Assessor (QSA), Joint Authorization Board (JAB), or other industry recognized regulatory authority to assess your environment. The Compliance Operator makes recommendations based on generally available information and practices regarding such standards and may assist with remediations, but actual compliance is your responsibility. You are required to work with an authorized auditor to achieve compliance with a standard. For the latest updates, see the Compliance Operator release notes . For more information on compliance support for all Red Hat products, see Product Compliance . Compliance Operator concepts Understanding the Compliance Operator Understanding the Custom Resource Definitions Compliance Operator management Installing the Compliance Operator Updating the Compliance Operator Managing the Compliance Operator Uninstalling the Compliance Operator Compliance Operator scan management Supported compliance profiles Compliance Operator scans Tailoring the Compliance Operator Retrieving Compliance Operator raw results Managing Compliance Operator remediation Performing advanced Compliance Operator tasks Troubleshooting the Compliance Operator Using the oc-compliance plugin 5.2. Compliance Operator release notes The Compliance Operator lets OpenShift Container Platform administrators describe the required compliance state of a cluster and provides them with an overview of gaps and ways to remediate them. These release notes track the development of the Compliance Operator in the OpenShift Container Platform. For an overview of the Compliance Operator, see Understanding the Compliance Operator . To access the latest release, see Updating the Compliance Operator . For more information on compliance support for all Red Hat products, see Product Compliance . 5.2.1. OpenShift Compliance Operator 1.6.2 The following advisory is available for the OpenShift Compliance Operator 1.6.2: RHBA-2025:2659 - OpenShift Compliance Operator 1.6.2 update CVE-2024-45338 is resolved in the Compliance Operator 1.6.2 release. ( CVE-2024-45338 ) 5.2.2. OpenShift Compliance Operator 1.6.1 The following advisory is available for the OpenShift Compliance Operator 1.6.1: RHBA-2024:10367 - OpenShift Compliance Operator 1.6.1 update This update includes upgraded dependencies in underlying base images. 5.2.3. OpenShift Compliance Operator 1.6.0 The following advisory is available for the OpenShift Compliance Operator 1.6.0: RHBA-2024:6761 - OpenShift Compliance Operator 1.6.0 bug fix and enhancement update 5.2.3.1. New features and enhancements The Compliance Operator now contains supported profiles for Payment Card Industry Data Security Standard (PCI-DSS) version 4. For more information, see Supported compliance profiles . The Compliance Operator now contains supported profiles for Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) V2R1. For more information, see Supported compliance profiles . A must-gather extension is now available for the Compliance Operator installed on x86 , ppc64le , and s390x architectures. 
The must-gather tool provides crucial configuration details to Red Hat Customer Support and engineering. For more information, see Using the must-gather tool for the Compliance Operator . 5.2.3.2. Bug fixes Before this release, a misleading description in the ocp4-route-ip-whitelist rule resulted in misunderstanding, causing potential for misconfigurations. With this update, the rule is now more clearly defined. ( CMP-2485 ) Previously, the reporting of all of the ComplianceCheckResults for a DONE status ComplianceScan was incomplete. With this update, an annotation has been added to report the number of total ComplianceCheckResults for a ComplianceScan with a DONE status. ( CMP-2615 ) Previously, the ocp4-cis-scc-limit-container-allowed-capabilities rule description contained ambiguous guidelines, leading to confusion among users. With this update, the rule description and actionable steps are clarified. ( OCPBUGS-17828 ) Before this update, sysctl configurations caused certain auto remediations for RHCOS4 rules to fail scans in affected clusters. With this update, the correct sysctl settings are applied and RHCOS4 rules for FedRAMP High profiles pass scans correctly. ( OCPBUGS-19690 ) Before this update, an issue with a jq filter caused errors with the rhacs-operator-controller-manager deployment during compliance checks. With this update, the jq filter expression is updated and the rhacs-operator-controller-manager deployment is exempt from compliance checks pertaining to container resource limits, eliminating false positive results. ( OCPBUGS-19690 ) Before this update, rhcos4-high and rhcos4-moderate profiles checked values of an incorrectly titled configuration file. As a result, some scan checks could fail. With this update, the rhcos4 profiles now check the correct configuration file and scans pass correctly. ( OCPBUGS-31674 ) Previously, the accessTokenInactivityTimeoutSeconds variable used in the oauthclient-inactivity-timeout rule was immutable, leading to a FAIL status when performing DISA STIG scans. With this update, proper enforcement of the accessTokenInactivityTimeoutSeconds variable operates correctly and a PASS status is now possible. ( OCPBUGS-32551 ) Before this update, some annotations for rules were not updated, displaying the incorrect control standards. With this update, annotations for rules are updated correctly, ensuring the correct control standards are displayed. ( OCPBUGS-34982 ) Previously, when upgrading to Compliance Operator 1.5.1, an incorrectly referenced secret in a ServiceMonitor configuration caused integration issues with the Prometheus Operator. With this update, the Compliance Operator will accurately reference the secret containing the token for ServiceMonitor metrics. ( OCPBUGS-39417 ) 5.2.4. OpenShift Compliance Operator 1.5.1 The following advisory is available for the OpenShift Compliance Operator 1.5.1: RHBA-2024:5956 - OpenShift Compliance Operator 1.5.1 bug fix and enhancement update 5.2.5. OpenShift Compliance Operator 1.5.0 The following advisory is available for the OpenShift Compliance Operator 1.5.0: RHBA-2024:3533 - OpenShift Compliance Operator 1.5.0 bug fix and enhancement update 5.2.5.1. New features and enhancements With this update, the Compliance Operator provides a unique profile ID for easier programmatic use. ( CMP-2450 ) With this release, the Compliance Operator is now tested and supported on the ROSA HCP environment. The Compliance Operator loads only Node profiles when running on ROSA HCP.
This is because a Red Hat managed platform restricts access to the control plane, which makes Platform profiles irrelevant to the operator's function. ( CMP-2581 ) 5.2.5.2. Bug fixes CVE-2024-2961 is resolved in the Compliance Operator 1.5.0 release. ( CVE-2024-2961 ) Previously, for ROSA HCP systems, profile listings were incorrect. This update allows the Compliance Operator to provide correct profile output. ( OCPBUGS-34535 ) With this release, namespaces can be excluded from the ocp4-configure-network-policies-namespaces check by setting the ocp4-var-network-policies-namespaces-exempt-regex variable in the tailored profile. ( CMP-2543 ) 5.2.6. OpenShift Compliance Operator 1.4.1 The following advisory is available for the OpenShift Compliance Operator 1.4.1: RHBA-2024:1830 - OpenShift Compliance Operator bug fix and enhancement update 5.2.6.1. New features and enhancements As of this release, the Compliance Operator now provides the CIS OpenShift 1.5.0 profile rules. ( CMP-2447 ) With this update, the Compliance Operator now provides OCP4 STIG ID and SRG with the profile rules. ( CMP-2401 ) With this update, obsolete rules being applied to s390x have been removed. ( CMP-2471 ) 5.2.6.2. Bug fixes Previously, for Red Hat Enterprise Linux CoreOS (RHCOS) systems using Red Hat Enterprise Linux (RHEL) 9, application of the ocp4-kubelet-enable-protect-kernel-sysctl-file-exist rule failed. This update replaces the rule with ocp4-kubelet-enable-protect-kernel-sysctl . Now, after auto remediation is applied, RHEL 9-based RHCOS systems will show PASS upon the application of this rule. ( OCPBUGS-13589 ) Previously, after applying compliance remediations using profile rhcos4-e8 , the nodes were no longer accessible using SSH to the core user account. With this update, nodes remain accessible through SSH using the sshkey1 option. ( OCPBUGS-18331 ) Previously, the STIG profile was missing rules from CaC that fulfill requirements on the published STIG for OpenShift Container Platform. With this update, upon remediation, the cluster satisfies STIG requirements that can be remediated using Compliance Operator. ( OCPBUGS-26193 ) Previously, creating a ScanSettingBinding object with profiles of different types for multiple products bypassed a restriction against multiple product types in a binding. With this update, the product validation now allows multiple products regardless of the profile types in the ScanSettingBinding object. ( OCPBUGS-26229 ) Previously, running the rhcos4-service-debug-shell-disabled rule showed as FAIL even after auto-remediation was applied. With this update, running the rhcos4-service-debug-shell-disabled rule now shows PASS after auto-remediation is applied. ( OCPBUGS-28242 ) With this update, instructions for the use of the rhcos4-banner-etc-issue rule are enhanced to provide more detail. ( OCPBUGS-28797 ) Previously, the api_server_api_priority_flowschema_catch_all rule provided FAIL status on OpenShift Container Platform 4.16 clusters. With this update, the api_server_api_priority_flowschema_catch_all rule provides PASS status on OpenShift Container Platform 4.16 clusters. ( OCPBUGS-28918 ) Previously, when a profile was removed from a completed scan shown in a ScanSettingBinding (SSB) object, the Compliance Operator did not remove the old scan. Afterward, when launching a new SSB using the deleted profile, the Compliance Operator failed to update the result. With this release of the Compliance Operator, the new SSB now shows the new compliance check result.
( OCPBUGS-29272 ) Previously, on ppc64le architecture, the metrics service was not created. With this update, when deploying the Compliance Operator v1.4.1 on ppc64le architecture, the metrics service is now created correctly. ( OCPBUGS-32797 ) Previously, on a HyperShift hosted cluster, a scan with the ocp4-pci-dss profile will run into an unrecoverable error due to a filter cannot iterate issue. With this release, the scan for the ocp4-pci-dss profile will reach done status and return either a Compliance or Non-Compliance test result. ( OCPBUGS-33067 ) 5.2.7. OpenShift Compliance Operator 1.4.0 The following advisory is available for the OpenShift Compliance Operator 1.4.0: RHBA-2023:7658 - OpenShift Compliance Operator bug fix and enhancement update 5.2.7.1. New features and enhancements With this update, clusters which use custom node pools outside the default worker and master node pools no longer need to supply additional variables to ensure Compliance Operator aggregates the configuration file for that node pool. Users can now pause scan schedules by setting the ScanSetting.suspend attribute to True . This allows users to suspend a scan schedule and reactivate it without the need to delete and re-create the ScanSettingBinding . This simplifies pausing scan schedules during maintenance periods. ( CMP-2123 ) Compliance Operator now supports an optional version attribute on Profile custom resources. ( CMP-2125 ) Compliance Operator now supports profile names in ComplianceRules . ( CMP-2126 ) Compliance Operator compatibility with improved cronjob API improvements is available in this release. ( CMP-2310 ) 5.2.7.2. Bug fixes Previously, on a cluster with Windows nodes, some rules will FAIL after auto remediation is applied because the Windows nodes were not skipped by the compliance scan. With this release, Windows nodes are correctly skipped when scanning. ( OCPBUGS-7355 ) With this update, rprivate default mount propagation is now handled correctly for root volume mounts of pods that rely on multipathing. ( OCPBUGS-17494 ) Previously, the Compliance Operator would generate a remediation for coreos_vsyscall_kernel_argument without reconciling the rule even while applying the remediation. With release 1.4.0, the coreos_vsyscall_kernel_argument rule properly evaluates kernel arguments and generates an appropriate remediation.( OCPBUGS-8041 ) Before this update, rule rhcos4-audit-rules-login-events-faillock would fail even after auto-remediation has been applied. With this update, rhcos4-audit-rules-login-events-faillock failure locks are now applied correctly after auto-remediation. ( OCPBUGS-24594 ) Previously, upgrades from Compliance Operator 1.3.1 to Compliance Operator 1.4.0 would cause OVS rules scan results to go from PASS to NOT-APPLICABLE . With this update, OVS rules scan results now show PASS ( OCPBUGS-25323 ) 5.2.8. OpenShift Compliance Operator 1.3.1 The following advisory is available for the OpenShift Compliance Operator 1.3.1: RHBA-2023:5669 - OpenShift Compliance Operator bug fix and enhancement update This update addresses a CVE in an underlying dependency. Important It is recommended to update the Compliance Operator to version 1.3.1 or later before updating your OpenShift Container Platform cluster to version 4.14 or later. 5.2.8.1. New features and enhancements You can install and use the Compliance Operator in an OpenShift Container Platform cluster running in FIPS mode. 
Important To enable FIPS mode for your cluster, you must run the installation program from a RHEL computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . 5.2.8.2. Known issue On a cluster with Windows nodes, some rules will FAIL after auto remediation is applied because the Windows nodes are not skipped by the compliance scan. This differs from the expected results because the Windows nodes must be skipped when scanning. ( OCPBUGS-7355 ) 5.2.9. OpenShift Compliance Operator 1.3.0 The following advisory is available for the OpenShift Compliance Operator 1.3.0: RHBA-2023:5102 - OpenShift Compliance Operator enhancement update 5.2.9.1. New features and enhancements The Defense Information Systems Agency Security Technical Implementation Guide (DISA-STIG) for OpenShift Container Platform is now available from Compliance Operator 1.3.0. See Supported compliance profiles for additional information. Compliance Operator 1.3.0 now supports IBM Power(R) and IBM Z(R) for NIST 800-53 Moderate-Impact Baseline for OpenShift Container Platform platform and node profiles. 5.2.10. OpenShift Compliance Operator 1.2.0 The following advisory is available for the OpenShift Compliance Operator 1.2.0: RHBA-2023:4245 - OpenShift Compliance Operator enhancement update 5.2.10.1. New features and enhancements The CIS OpenShift Container Platform 4 Benchmark v1.4.0 profile is now available for platform and node applications. To locate the CIS OpenShift Container Platform v4 Benchmark, go to CIS Benchmarks and click Download Latest CIS Benchmark , where you can then register to download the benchmark. Important Upgrading to Compliance Operator 1.2.0 will overwrite the CIS OpenShift Container Platform 4 Benchmark 1.1.0 profiles. If your OpenShift Container Platform environment contains existing cis and cis-node remediations, there might be some differences in scan results after upgrading to Compliance Operator 1.2.0. Additional clarity for auditing security context constraints (SCCs) is now available for the scc-limit-container-allowed-capabilities rule. 5.2.11. OpenShift Compliance Operator 1.1.0 The following advisory is available for the OpenShift Compliance Operator 1.1.0: RHBA-2023:3630 - OpenShift Compliance Operator bug fix and enhancement update 5.2.11.1. New features and enhancements A start and end timestamp is now available in the ComplianceScan custom resource definition (CRD) status. The Compliance Operator can now be deployed on hosted control planes using the OperatorHub by creating a Subscription file. For more information, see Installing the Compliance Operator on hosted control planes . 5.2.11.2. Bug fixes Before this update, some Compliance Operator rule instructions were not present. After this update, instructions are improved for the following rules: classification_banner oauth_login_template_set oauth_logout_url_set oauth_provider_selection_set ocp_allowed_registries ocp_allowed_registries_for_import ( OCPBUGS-10473 ) Before this update, check accuracy and rule instructions were unclear. 
After this update, the check accuracy and instructions are improved for the following sysctl rules: kubelet-enable-protect-kernel-sysctl kubelet-enable-protect-kernel-sysctl-kernel-keys-root-maxbytes kubelet-enable-protect-kernel-sysctl-kernel-keys-root-maxkeys kubelet-enable-protect-kernel-sysctl-kernel-panic kubelet-enable-protect-kernel-sysctl-kernel-panic-on-oops kubelet-enable-protect-kernel-sysctl-vm-overcommit-memory kubelet-enable-protect-kernel-sysctl-vm-panic-on-oom ( OCPBUGS-11334 ) Before this update, the ocp4-alert-receiver-configured rule did not include instructions. With this update, the ocp4-alert-receiver-configured rule now includes improved instructions. ( OCPBUGS-7307 ) Before this update, the rhcos4-sshd-set-loglevel-info rule would fail for the rhcos4-e8 profile. With this update, the remediation for the sshd-set-loglevel-info rule was updated to apply the correct configuration changes, allowing subsequent scans to pass after the remediation is applied. ( OCPBUGS-7816 ) Before this update, a new installation of OpenShift Container Platform with the latest Compliance Operator install failed on the scheduler-no-bind-address rule. With this update, the scheduler-no-bind-address rule has been disabled on newer versions of OpenShift Container Platform since the parameter was removed. ( OCPBUGS-8347 ) 5.2.12. OpenShift Compliance Operator 1.0.0 The following advisory is available for the OpenShift Compliance Operator 1.0.0: RHBA-2023:1682 - OpenShift Compliance Operator bug fix update 5.2.12.1. New features and enhancements The Compliance Operator is now stable and the release channel is upgraded to stable . Future releases will follow Semantic Versioning . To access the latest release, see Updating the Compliance Operator . 5.2.12.2. Bug fixes Before this update, the compliance_operator_compliance_scan_error_total metric had an ERROR label with a different value for each error message. With this update, the compliance_operator_compliance_scan_error_total metric does not increase in values. ( OCPBUGS-1803 ) Before this update, the ocp4-api-server-audit-log-maxsize rule would result in a FAIL state. With this update, the error message has been removed from the metric, decreasing the cardinality of the metric in line with best practices. ( OCPBUGS-7520 ) Before this update, the rhcos4-enable-fips-mode rule description was misleading that FIPS could be enabled after installation. With this update, the rhcos4-enable-fips-mode rule description clarifies that FIPS must be enabled at install time. ( OCPBUGS-8358 ) 5.2.13. OpenShift Compliance Operator 0.1.61 The following advisory is available for the OpenShift Compliance Operator 0.1.61: RHBA-2023:0557 - OpenShift Compliance Operator bug fix update 5.2.13.1. New features and enhancements The Compliance Operator now supports timeout configuration for Scanner Pods. The timeout is specified in the ScanSetting object. If the scan is not completed within the timeout, the scan retries until the maximum number of retries is reached. See Configuring ScanSetting timeout for more information. 5.2.13.2. Bug fixes Before this update, Compliance Operator remediations required variables as inputs. Remediations without variables set were applied cluster-wide and resulted in stuck nodes, even though it appeared the remediation applied correctly. With this update, the Compliance Operator validates if a variable needs to be supplied using a TailoredProfile for a remediation. 
( OCPBUGS-3864 ) Before this update, the instructions for ocp4-kubelet-configure-tls-cipher-suites were incomplete, requiring users to refine the query manually. With this update, the query provided in ocp4-kubelet-configure-tls-cipher-suites returns the actual results to perform the audit steps. ( OCPBUGS-3017 ) Before this update, system reserved parameters were not generated in kubelet configuration files, causing the Compliance Operator to fail to unpause the machine config pool. With this update, the Compliance Operator omits system reserved parameters during machine configuration pool evaluation. ( OCPBUGS-4445 ) Before this update, ComplianceCheckResult objects did not have correct descriptions. With this update, the Compliance Operator sources the ComplianceCheckResult information from the rule description. ( OCPBUGS-4615 ) Before this update, the Compliance Operator did not check for empty kubelet configuration files when parsing machine configurations. As a result, the Compliance Operator would panic and crash. With this update, the Compliance Operator implements improved checking of the kubelet configuration data structure and only continues if it is fully rendered. ( OCPBUGS-4621 ) Before this update, the Compliance Operator generated remediations for kubelet evictions based on machine config pool name and a grace period, resulting in multiple remediations for a single eviction rule. With this update, the Compliance Operator applies all remediations for a single rule. ( OCPBUGS-4338 ) Before this update, a regression occurred when attempting to create a ScanSettingBinding that was using a TailoredProfile with a non-default MachineConfigPool marked the ScanSettingBinding as Failed . With this update, functionality is restored and custom ScanSettingBinding using a TailoredProfile performs correctly. ( OCPBUGS-6827 ) Before this update, some kubelet configuration parameters did not have default values. With this update, the following parameters contain default values ( OCPBUGS-6708 ): ocp4-cis-kubelet-enable-streaming-connections ocp4-cis-kubelet-eviction-thresholds-set-hard-imagefs-available ocp4-cis-kubelet-eviction-thresholds-set-hard-imagefs-inodesfree ocp4-cis-kubelet-eviction-thresholds-set-hard-memory-available ocp4-cis-kubelet-eviction-thresholds-set-hard-nodefs-available Before this update, the selinux_confinement_of_daemons rule failed running on the kubelet because of the permissions necessary for the kubelet to run. With this update, the selinux_confinement_of_daemons rule is disabled. ( OCPBUGS-6968 ) 5.2.14. OpenShift Compliance Operator 0.1.59 The following advisory is available for the OpenShift Compliance Operator 0.1.59: RHBA-2022:8538 - OpenShift Compliance Operator bug fix update 5.2.14.1. New features and enhancements The Compliance Operator now supports Payment Card Industry Data Security Standard (PCI-DSS) ocp4-pci-dss and ocp4-pci-dss-node profiles on the ppc64le architecture. 5.2.14.2. Bug fixes Previously, the Compliance Operator did not support the Payment Card Industry Data Security Standard (PCI DSS) ocp4-pci-dss and ocp4-pci-dss-node profiles on different architectures such as ppc64le . Now, the Compliance Operator supports ocp4-pci-dss and ocp4-pci-dss-node profiles on the ppc64le architecture. ( OCPBUGS-3252 ) Previously, after the recent update to version 0.1.57, the rerunner service account (SA) was no longer owned by the cluster service version (CSV), which caused the SA to be removed during the Operator upgrade. 
Now, the CSV owns the rerunner SA in 0.1.59, and upgrades from any version will not result in a missing SA. ( OCPBUGS-3452 ) 5.2.15. OpenShift Compliance Operator 0.1.57 The following advisory is available for the OpenShift Compliance Operator 0.1.57: RHBA-2022:6657 - OpenShift Compliance Operator bug fix update 5.2.15.1. New features and enhancements KubeletConfig checks changed from Node to Platform type. KubeletConfig checks the default configuration of the KubeletConfig . The configuration files are aggregated from all nodes into a single location per node pool. See Evaluating KubeletConfig rules against default configuration values . The ScanSetting Custom Resource now allows users to override the default CPU and memory limits of scanner pods through the scanLimits attribute. For more information, see Increasing Compliance Operator resource limits . A PriorityClass object can now be set through ScanSetting . This ensures the Compliance Operator is prioritized and minimizes the chance that the cluster falls out of compliance. For more information, see Setting PriorityClass for ScanSetting scans . 5.2.15.2. Bug fixes Previously, the Compliance Operator hard-coded notifications to the default openshift-compliance namespace. If the Operator were installed in a non-default namespace, the notifications would not work as expected. Now, notifications work in non-default openshift-compliance namespaces. ( BZ#2060726 ) Previously, the Compliance Operator was unable to evaluate default configurations used by kubelet objects, resulting in inaccurate results and false positives. This new feature evaluates the kubelet configuration and now reports accurately. ( BZ#2075041 ) Previously, the Compliance Operator reported the ocp4-kubelet-configure-event-creation rule in a FAIL state after applying an automatic remediation because the eventRecordQPS value was set higher than the default value. Now, the ocp4-kubelet-configure-event-creation rule remediation sets the default value, and the rule applies correctly. ( BZ#2082416 ) The ocp4-configure-network-policies rule requires manual intervention to perform effectively. New descriptive instructions and rule updates increase applicability of the ocp4-configure-network-policies rule for clusters using Calico CNIs. ( BZ#2091794 ) Previously, the Compliance Operator would not clean up pods used to scan infrastructure when using the debug=true option in the scan settings. This caused pods to be left on the cluster even after deleting the ScanSettingBinding . Now, pods are always deleted when a ScanSettingBinding is deleted.( BZ#2092913 ) Previously, the Compliance Operator used an older version of the operator-sdk command that caused alerts about deprecated functionality. Now, an updated version of the operator-sdk command is included and there are no more alerts for deprecated functionality. ( BZ#2098581 ) Previously, the Compliance Operator would fail to apply remediations if it could not determine the relationship between kubelet and machine configurations. Now, the Compliance Operator has improved handling of the machine configurations and is able to determine if a kubelet configuration is a subset of a machine configuration. ( BZ#2102511 ) Previously, the rule for ocp4-cis-node-master-kubelet-enable-cert-rotation did not properly describe success criteria. As a result, the requirements for RotateKubeletClientCertificate were unclear. 
Now, the rule for ocp4-cis-node-master-kubelet-enable-cert-rotation reports accurately regardless of the configuration present in the kubelet configuration file. ( BZ#2105153 ) Previously, the rule for checking idle streaming timeouts did not consider default values, resulting in inaccurate rule reporting. Now, more robust checks ensure increased accuracy in results based on default configuration values. ( BZ#2105878 ) Previously, the Compliance Operator would fail to fetch API resources when parsing machine configurations without Ignition specifications, which caused the api-check-pods processes to crash loop. Now, the Compliance Operator handles Machine Config Pools that do not have Ignition specifications correctly. ( BZ#2117268 ) Previously, rules evaluating the modprobe configuration would fail even after applying remediations due to a mismatch in values for the modprobe configuration. Now, the same values are used for the modprobe configuration in checks and remediations, ensuring consistent results. ( BZ#2117747 ) 5.2.15.3. Deprecations Specifying Install into all namespaces in the cluster or setting the WATCH_NAMESPACES environment variable to "" no longer affects all namespaces. Any API resources installed in namespaces not specified at the time of Compliance Operator installation is no longer be operational. API resources might require creation in the selected namespace, or the openshift-compliance namespace by default. This change improves the Compliance Operator's memory usage. 5.2.16. OpenShift Compliance Operator 0.1.53 The following advisory is available for the OpenShift Compliance Operator 0.1.53: RHBA-2022:5537 - OpenShift Compliance Operator bug fix update 5.2.16.1. Bug fixes Previously, the ocp4-kubelet-enable-streaming-connections rule contained an incorrect variable comparison, resulting in false positive scan results. Now, the Compliance Operator provides accurate scan results when setting streamingConnectionIdleTimeout . ( BZ#2069891 ) Previously, group ownership for /etc/openvswitch/conf.db was incorrect on IBM Z(R) architectures, resulting in ocp4-cis-node-worker-file-groupowner-ovs-conf-db check failures. Now, the check is marked NOT-APPLICABLE on IBM Z(R) architecture systems. ( BZ#2072597 ) Previously, the ocp4-cis-scc-limit-container-allowed-capabilities rule reported in a FAIL state due to incomplete data regarding the security context constraints (SCC) rules in the deployment. Now, the result is MANUAL , which is consistent with other checks that require human intervention. ( BZ#2077916 ) Previously, the following rules failed to account for additional configuration paths for API servers and TLS certificates and keys, resulting in reported failures even if the certificates and keys were set properly: ocp4-cis-api-server-kubelet-client-cert ocp4-cis-api-server-kubelet-client-key ocp4-cis-kubelet-configure-tls-cert ocp4-cis-kubelet-configure-tls-key Now, the rules report accurately and observe legacy file paths specified in the kubelet configuration file. ( BZ#2079813 ) Previously, the content_rule_oauth_or_oauthclient_inactivity_timeout rule did not account for a configurable timeout set by the deployment when assessing compliance for timeouts. This resulted in the rule failing even if the timeout was valid. Now, the Compliance Operator uses the var_oauth_inactivity_timeout variable to set valid timeout length. 
( BZ#2081952 ) Previously, the Compliance Operator used administrative permissions on namespaces not labeled appropriately for privileged use, resulting in warning messages regarding pod security-level violations. Now, the Compliance Operator has appropriate namespace labels and permission adjustments to access results without violating permissions. ( BZ#2088202 ) Previously, applying auto remediations for rhcos4-high-master-sysctl-kernel-yama-ptrace-scope and rhcos4-sysctl-kernel-core-pattern resulted in subsequent failures of those rules in scan results, even though they were remediated. Now, the rules report PASS accurately, even after remediations are applied.( BZ#2094382 ) Previously, the Compliance Operator would fail in a CrashLoopBackoff state because of out-of-memory exceptions. Now, the Compliance Operator is improved to handle large machine configuration data sets in memory and function correctly. ( BZ#2094854 ) 5.2.16.2. Known issue When "debug":true is set within the ScanSettingBinding object, the pods generated by the ScanSettingBinding object are not removed when that binding is deleted. As a workaround, run the following command to delete the remaining pods: USD oc delete pods -l compliance.openshift.io/scan-name=ocp4-cis ( BZ#2092913 ) 5.2.17. OpenShift Compliance Operator 0.1.52 The following advisory is available for the OpenShift Compliance Operator 0.1.52: RHBA-2022:4657 - OpenShift Compliance Operator bug fix update 5.2.17.1. New features and enhancements The FedRAMP high SCAP profile is now available for use in OpenShift Container Platform environments. For more information, See Supported compliance profiles . 5.2.17.2. Bug fixes Previously, the OpenScap container would crash due to a mount permission issue in a security environment where DAC_OVERRIDE capability is dropped. Now, executable mount permissions are applied to all users. ( BZ#2082151 ) Previously, the compliance rule ocp4-configure-network-policies could be configured as MANUAL . Now, compliance rule ocp4-configure-network-policies is set to AUTOMATIC . ( BZ#2072431 ) Previously, the Cluster Autoscaler would fail to scale down because the Compliance Operator scan pods were never removed after a scan. Now, the pods are removed from each node by default unless explicitly saved for debugging purposes. ( BZ#2075029 ) Previously, applying the Compliance Operator to the KubeletConfig would result in the node going into a NotReady state due to unpausing the Machine Config Pools too early. Now, the Machine Config Pools are unpaused appropriately and the node operates correctly. ( BZ#2071854 ) Previously, the Machine Config Operator used base64 instead of url-encoded code in the latest release, causing Compliance Operator remediation to fail. Now, the Compliance Operator checks encoding to handle both base64 and url-encoded Machine Config code and the remediation applies correctly. ( BZ#2082431 ) 5.2.17.3. Known issue When "debug":true is set within the ScanSettingBinding object, the pods generated by the ScanSettingBinding object are not removed when that binding is deleted. As a workaround, run the following command to delete the remaining pods: USD oc delete pods -l compliance.openshift.io/scan-name=ocp4-cis ( BZ#2092913 ) 5.2.18. OpenShift Compliance Operator 0.1.49 The following advisory is available for the OpenShift Compliance Operator 0.1.49: RHBA-2022:1148 - OpenShift Compliance Operator bug fix and enhancement update 5.2.18.1. 
New features and enhancements The Compliance Operator is now supported on the following architectures: IBM Power(R) IBM Z(R) IBM(R) LinuxONE 5.2.18.2. Bug fixes Previously, the openshift-compliance content did not include platform-specific checks for network types. As a result, OVN- and SDN-specific checks would show as failed instead of not-applicable based on the network configuration. Now, new rules contain platform checks for networking rules, resulting in a more accurate assessment of network-specific checks. ( BZ#1994609 ) Previously, the ocp4-moderate-routes-protected-by-tls rule incorrectly checked TLS settings, which resulted in the rule failing the check even if the connection used a secure SSL/TLS protocol. Now, the check properly evaluates TLS settings that are consistent with the networking guidance and profile recommendations. ( BZ#2002695 ) Previously, ocp-cis-configure-network-policies-namespace used pagination when requesting namespaces. This caused the rule to fail because the deployments truncated lists of more than 500 namespaces. Now, the entire namespace list is requested, and the rule for checking configured network policies works for deployments with more than 500 namespaces. ( BZ#2038909 ) Previously, remediations using the sshd jinja macros were hard-coded to specific sshd configurations. As a result, the configurations were inconsistent with the content the rules were checking for and the check would fail. Now, the sshd configuration is parameterized and the rules apply successfully. ( BZ#2049141 ) Previously, the ocp4-cluster-version-operator-verify-integrity always checked the first entry in the Cluster Version Operator (CVO) history. As a result, the upgrade would fail in situations where subsequent versions of OpenShift Container Platform would be verified. Now, the compliance check result for ocp4-cluster-version-operator-verify-integrity is able to detect verified versions and is accurate with the CVO history. ( BZ#2053602 ) Previously, the ocp4-api-server-no-adm-ctrl-plugins-disabled rule did not check for a list of empty admission controller plugins. As a result, the rule would always fail, even if all admission plugins were enabled. Now, more robust checking of the ocp4-api-server-no-adm-ctrl-plugins-disabled rule accurately passes with all admission controller plugins enabled. ( BZ#2058631 ) Previously, scans did not contain platform checks for running against Linux worker nodes. As a result, running scans against worker nodes that were not Linux-based resulted in a never-ending scan loop. Now, the scan schedules appropriately based on platform type and labels, and completes successfully. ( BZ#2056911 ) 5.2.19. OpenShift Compliance Operator 0.1.48 The following advisory is available for the OpenShift Compliance Operator 0.1.48: RHBA-2022:0416 - OpenShift Compliance Operator bug fix and enhancement update 5.2.19.1. Bug fixes Previously, some rules associated with extended Open Vulnerability and Assessment Language (OVAL) definitions had a checkType of None . This was because the Compliance Operator was not processing extended OVAL definitions when parsing rules. With this update, content from extended OVAL definitions is parsed so that these rules now have a checkType of either Node or Platform . ( BZ#2040282 ) Previously, a manually created MachineConfig object for KubeletConfig prevented a KubeletConfig object from being generated for remediation, leaving the remediation in the Pending state. 
With this release, a KubeletConfig object is created by the remediation, regardless of whether there is a manually created MachineConfig object for KubeletConfig . As a result, KubeletConfig remediations now work as expected. ( BZ#2040401 ) 5.2.20. OpenShift Compliance Operator 0.1.47 The following advisory is available for the OpenShift Compliance Operator 0.1.47: RHBA-2022:0014 - OpenShift Compliance Operator bug fix and enhancement update 5.2.20.1. New features and enhancements The Compliance Operator now supports the following compliance benchmarks for the Payment Card Industry Data Security Standard (PCI DSS): ocp4-pci-dss ocp4-pci-dss-node Additional rules and remediations for FedRAMP moderate impact level are added to the OCP4-moderate, OCP4-moderate-node, and rhcos4-moderate profiles. Remediations for KubeletConfig are now available in node-level profiles. 5.2.20.2. Bug fixes Previously, if your cluster was running OpenShift Container Platform 4.6 or earlier, remediations for USBGuard-related rules would fail for the moderate profile. This is because the remediations created by the Compliance Operator were based on an older version of USBGuard that did not support drop-in directories. Now, invalid remediations for USBGuard-related rules are not created for clusters running OpenShift Container Platform 4.6. If your cluster is using OpenShift Container Platform 4.6, you must manually create remediations for USBGuard-related rules. Additionally, remediations are created only for rules that satisfy minimum version requirements. ( BZ#1965511 ) Previously, when rendering remediations, the Compliance Operator would check that the remediation was well-formed by using a regular expression that was too strict. As a result, some remediations, such as those that render sshd_config , would not pass the regular expression check and, therefore, were not created. The regular expression was found to be unnecessary and removed. Remediations now render correctly. ( BZ#2033009 ) 5.2.21. OpenShift Compliance Operator 0.1.44 The following advisory is available for the OpenShift Compliance Operator 0.1.44: RHBA-2021:4530 - OpenShift Compliance Operator bug fix and enhancement update 5.2.21.1. New features and enhancements In this release, the strictNodeScan option is now added to the ComplianceScan , ComplianceSuite , and ScanSetting CRs. This option defaults to true , which matches the previous behavior, where an error occurred if a scan could not be scheduled on a node. Setting the option to false allows the Compliance Operator to be more permissive about scheduling scans. Environments with ephemeral nodes can set the strictNodeScan value to false, which allows a compliance scan to proceed, even if some of the nodes in the cluster are not available for scheduling. You can now customize the node that is used to schedule the result server workload by configuring the nodeSelector and tolerations attributes of the ScanSetting object. These attributes are used to place the ResultServer pod, the pod that is used to mount a PV storage volume and store the raw Asset Reporting Format (ARF) results. Previously, the nodeSelector and the tolerations parameters defaulted to selecting one of the control plane nodes and tolerating the node-role.kubernetes.io/master taint . This did not work in environments where control plane nodes are not permitted to mount PVs. This feature provides a way for you to select the node and tolerate a different taint in those environments. 
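For illustration of the previous item, the following is a minimal sketch of how such a placement could be expressed in a ScanSetting object. The object name and the infra node label and taint shown here are assumptions made for the example, not values taken from this release note; substitute the label and taint that exist on your target nodes.
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSetting
metadata:
  name: result-server-on-infra    # hypothetical name
  namespace: openshift-compliance
rawResultStorage:
  # Schedule the ResultServer pod on nodes that carry an assumed infra role label.
  nodeSelector:
    node-role.kubernetes.io/infra: ""
  # Tolerate the matching (assumed) taint so the pod can mount the PV on those nodes.
  tolerations:
  - key: node-role.kubernetes.io/infra
    operator: Exists
    effect: NoSchedule
roles:
- worker
scanTolerations:
- operator: Exists
schedule: '0 1 * * *'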
The Compliance Operator can now remediate KubeletConfig objects. A comment containing an error message is now added to help content developers differentiate between objects that do not exist in the cluster and objects that cannot be fetched. Rule objects now contain two new attributes, checkType and description . These attributes allow you to determine if the rule pertains to a node check or platform check, and also allow you to review what the rule does. This enhancement removes the requirement that you have to extend an existing profile to create a tailored profile. This means the extends field in the TailoredProfile CRD is no longer mandatory. You can now select a list of rule objects to create a tailored profile. Note that you must select whether your profile applies to nodes or the platform by setting the compliance.openshift.io/product-type: annotation or by setting the -node suffix for the TailoredProfile CR. In this release, the Compliance Operator is now able to schedule scans on all nodes irrespective of their taints. Previously, the scan pods would only tolerate the node-role.kubernetes.io/master taint , meaning that they would either run on nodes with no taints or only on nodes with the node-role.kubernetes.io/master taint. In deployments that use custom taints for their nodes, this resulted in the scans not being scheduled on those nodes. Now, the scan pods tolerate all node taints. In this release, the Compliance Operator supports the following North American Electric Reliability Corporation (NERC) security profiles: ocp4-nerc-cip ocp4-nerc-cip-node rhcos4-nerc-cip In this release, the Compliance Operator supports the NIST 800-53 Moderate-Impact Baseline for the Red Hat OpenShift - Node level, ocp4-moderate-node, security profile. 5.2.21.2. Templating and variable use In this release, the remediation template now allows multi-value variables. With this update, the Compliance Operator can change remediations based on variables that are set in the compliance profile. This is useful for remediations that include deployment-specific values such as timeouts, NTP server host names, or similar. Additionally, the ComplianceCheckResult objects now use the label compliance.openshift.io/check-has-value that lists the variables a check has used. 5.2.21.3. Bug fixes Previously, while performing a scan, an unexpected termination occurred in one of the scanner containers of the pods. In this release, the Compliance Operator uses the latest OpenSCAP version 1.3.5 to avoid a crash. Previously, using autoApplyRemediations to apply remediations triggered an update of the cluster nodes. This was disruptive if some of the remediations did not include all of the required input variables. Now, if a remediation is missing one or more required input variables, it is assigned a state of NeedsReview . If one or more remediations are in a NeedsReview state, the machine config pool remains paused, and the remediations are not applied until all of the required variables are set. This helps minimize disruption to the nodes. The RBAC Role and Role Binding used for Prometheus metrics are changed to 'ClusterRole' and 'ClusterRoleBinding' to ensure that monitoring works without customization. Previously, if an error occurred while parsing a profile, rules or variables objects were removed and deleted from the profile. Now, if an error occurs during parsing, the profileparser annotates the object with a temporary annotation that prevents the object from being deleted until after parsing completes. 
( BZ#1988259 ) Previously, an error occurred if titles or descriptions were missing from a tailored profile. Because the XCCDF standard requires titles and descriptions for tailored profiles, titles and descriptions are now required to be set in TailoredProfile CRs. Previously, when using tailored profiles, TailoredProfile variable values were allowed to be set using only a specific selection set. This restriction is now removed, and TailoredProfile variables can be set to any value. 5.2.22. Release Notes for Compliance Operator 0.1.39 The following advisory is available for the OpenShift Compliance Operator 0.1.39: RHBA-2021:3214 - OpenShift Compliance Operator bug fix and enhancement update 5.2.22.1. New features and enhancements Previously, the Compliance Operator was unable to parse Payment Card Industry Data Security Standard (PCI DSS) references. Now, the Operator can parse compliance content that is provided with PCI DSS profiles. Previously, the Compliance Operator was unable to execute rules for AU-5 control in the moderate profile. Now, permission is added to the Operator so that it can read Prometheusrules.monitoring.coreos.com objects and run the rules that cover AU-5 control in the moderate profile. 5.2.23. Additional resources Understanding the Compliance Operator 5.3. Compliance Operator support 5.3.1. Compliance Operator lifecycle The Compliance Operator is a "Rolling Stream" Operator, meaning updates are available asynchronously of OpenShift Container Platform releases. For more information, see OpenShift Operator Life Cycles on the Red Hat Customer Portal. 5.3.2. Getting support If you experience difficulty with a procedure described in this documentation, or with OpenShift Container Platform in general, visit the Red Hat Customer Portal . From the Customer Portal, you can: Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products. Submit a support case to Red Hat Support. Access other product documentation. To identify issues with your cluster, you can use Insights in OpenShift Cluster Manager . Insights provides details about issues and, if available, information on how to solve a problem. If you have a suggestion for improving this documentation or have found an error, submit a Jira issue for the most relevant documentation component. Please provide specific details, such as the section name and OpenShift Container Platform version. 5.3.3. Using the must-gather tool for the Compliance Operator Starting in Compliance Operator v1.6.0, you can collect data about the Compliance Operator resources by running the must-gather command with the Compliance Operator image. Note Consider using the must-gather tool when opening support cases or filing bug reports, as it provides additional details about the Operator configuration and logs. Procedure Run the following command to collect data about the Compliance Operator: USD oc adm must-gather --image=USD(oc get csv compliance-operator.v1.6.0 -o=jsonpath='{.spec.relatedImages[?(@.name=="must-gather")].image}') 5.3.4. Additional resources About the must-gather tool Product Compliance 5.4. Compliance Operator concepts 5.4.1. Understanding the Compliance Operator The Compliance Operator lets OpenShift Container Platform administrators describe the required compliance state of a cluster and provides them with an overview of gaps and ways to remediate them. 
The Compliance Operator assesses compliance of both the Kubernetes API resources of OpenShift Container Platform, as well as the nodes running the cluster. The Compliance Operator uses OpenSCAP, a NIST-certified tool, to scan and enforce security policies provided by the content. Important The Compliance Operator is available for Red Hat Enterprise Linux CoreOS (RHCOS) deployments only. 5.4.1.1. Compliance Operator profiles There are several profiles available as part of the Compliance Operator installation. You can use the oc get command to view available profiles, profile details, and specific rules. View the available profiles: USD oc get profile.compliance -n openshift-compliance Example output NAME AGE VERSION ocp4-cis 3h49m 1.5.0 ocp4-cis-1-4 3h49m 1.4.0 ocp4-cis-1-5 3h49m 1.5.0 ocp4-cis-node 3h49m 1.5.0 ocp4-cis-node-1-4 3h49m 1.4.0 ocp4-cis-node-1-5 3h49m 1.5.0 ocp4-e8 3h49m ocp4-high 3h49m Revision 4 ocp4-high-node 3h49m Revision 4 ocp4-high-node-rev-4 3h49m Revision 4 ocp4-high-rev-4 3h49m Revision 4 ocp4-moderate 3h49m Revision 4 ocp4-moderate-node 3h49m Revision 4 ocp4-moderate-node-rev-4 3h49m Revision 4 ocp4-moderate-rev-4 3h49m Revision 4 ocp4-nerc-cip 3h49m ocp4-nerc-cip-node 3h49m ocp4-pci-dss 3h49m 3.2.1 ocp4-pci-dss-3-2 3h49m 3.2.1 ocp4-pci-dss-4-0 3h49m 4.0.0 ocp4-pci-dss-node 3h49m 3.2.1 ocp4-pci-dss-node-3-2 3h49m 3.2.1 ocp4-pci-dss-node-4-0 3h49m 4.0.0 ocp4-stig 3h49m V2R1 ocp4-stig-node 3h49m V2R1 ocp4-stig-node-v1r1 3h49m V1R1 ocp4-stig-node-v2r1 3h49m V2R1 ocp4-stig-v1r1 3h49m V1R1 ocp4-stig-v2r1 3h49m V2R1 rhcos4-e8 3h49m rhcos4-high 3h49m Revision 4 rhcos4-high-rev-4 3h49m Revision 4 rhcos4-moderate 3h49m Revision 4 rhcos4-moderate-rev-4 3h49m Revision 4 rhcos4-nerc-cip 3h49m rhcos4-stig 3h49m V2R1 rhcos4-stig-v1r1 3h49m V1R1 rhcos4-stig-v2r1 3h49m V2R1 These profiles represent different compliance benchmarks. Each profile has the product name that it applies to added as a prefix to the profile's name. ocp4-e8 applies the Essential 8 benchmark to the OpenShift Container Platform product, while rhcos4-e8 applies the Essential 8 benchmark to the Red Hat Enterprise Linux CoreOS (RHCOS) product. Run the following command to view the details of the rhcos4-e8 profile: USD oc get -n openshift-compliance -oyaml profiles.compliance rhcos4-e8 Example 5.1. Example output apiVersion: compliance.openshift.io/v1alpha1 description: 'This profile contains configuration checks for Red Hat Enterprise Linux CoreOS that align to the Australian Cyber Security Centre (ACSC) Essential Eight. 
A copy of the Essential Eight in Linux Environments guide can be found at the ACSC website: https://www.cyber.gov.au/acsc/view-all-content/publications/hardening-linux-workstations-and-servers' id: xccdf_org.ssgproject.content_profile_e8 kind: Profile metadata: annotations: compliance.openshift.io/image-digest: pb-rhcos4hrdkm compliance.openshift.io/product: redhat_enterprise_linux_coreos_4 compliance.openshift.io/product-type: Node creationTimestamp: "2022-10-19T12:06:49Z" generation: 1 labels: compliance.openshift.io/profile-bundle: rhcos4 name: rhcos4-e8 namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ProfileBundle name: rhcos4 uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d resourceVersion: "43699" uid: 86353f70-28f7-40b4-bf0e-6289ec33675b rules: - rhcos4-accounts-no-uid-except-zero - rhcos4-audit-rules-dac-modification-chmod - rhcos4-audit-rules-dac-modification-chown - rhcos4-audit-rules-execution-chcon - rhcos4-audit-rules-execution-restorecon - rhcos4-audit-rules-execution-semanage - rhcos4-audit-rules-execution-setfiles - rhcos4-audit-rules-execution-setsebool - rhcos4-audit-rules-execution-seunshare - rhcos4-audit-rules-kernel-module-loading-delete - rhcos4-audit-rules-kernel-module-loading-finit - rhcos4-audit-rules-kernel-module-loading-init - rhcos4-audit-rules-login-events - rhcos4-audit-rules-login-events-faillock - rhcos4-audit-rules-login-events-lastlog - rhcos4-audit-rules-login-events-tallylog - rhcos4-audit-rules-networkconfig-modification - rhcos4-audit-rules-sysadmin-actions - rhcos4-audit-rules-time-adjtimex - rhcos4-audit-rules-time-clock-settime - rhcos4-audit-rules-time-settimeofday - rhcos4-audit-rules-time-stime - rhcos4-audit-rules-time-watch-localtime - rhcos4-audit-rules-usergroup-modification - rhcos4-auditd-data-retention-flush - rhcos4-auditd-freq - rhcos4-auditd-local-events - rhcos4-auditd-log-format - rhcos4-auditd-name-format - rhcos4-auditd-write-logs - rhcos4-configure-crypto-policy - rhcos4-configure-ssh-crypto-policy - rhcos4-no-empty-passwords - rhcos4-selinux-policytype - rhcos4-selinux-state - rhcos4-service-auditd-enabled - rhcos4-sshd-disable-empty-passwords - rhcos4-sshd-disable-gssapi-auth - rhcos4-sshd-disable-rhosts - rhcos4-sshd-disable-root-login - rhcos4-sshd-disable-user-known-hosts - rhcos4-sshd-do-not-permit-user-env - rhcos4-sshd-enable-strictmodes - rhcos4-sshd-print-last-log - rhcos4-sshd-set-loglevel-info - rhcos4-sysctl-kernel-dmesg-restrict - rhcos4-sysctl-kernel-kptr-restrict - rhcos4-sysctl-kernel-randomize-va-space - rhcos4-sysctl-kernel-unprivileged-bpf-disabled - rhcos4-sysctl-kernel-yama-ptrace-scope - rhcos4-sysctl-net-core-bpf-jit-harden title: Australian Cyber Security Centre (ACSC) Essential Eight Run the following command to view the details of the rhcos4-audit-rules-login-events rule: USD oc get -n openshift-compliance -oyaml rules rhcos4-audit-rules-login-events Example 5.2. Example output apiVersion: compliance.openshift.io/v1alpha1 checkType: Node description: |- The audit system already collects login information for all users and root. 
If the auditd daemon is configured to use the augenrules program to read audit rules during daemon startup (the default), add the following lines to a file with suffix.rules in the directory /etc/audit/rules.d in order to watch for attempted manual edits of files involved in storing logon events: -w /var/log/tallylog -p wa -k logins -w /var/run/faillock -p wa -k logins -w /var/log/lastlog -p wa -k logins If the auditd daemon is configured to use the auditctl utility to read audit rules during daemon startup, add the following lines to /etc/audit/audit.rules file in order to watch for unattempted manual edits of files involved in storing logon events: -w /var/log/tallylog -p wa -k logins -w /var/run/faillock -p wa -k logins -w /var/log/lastlog -p wa -k logins id: xccdf_org.ssgproject.content_rule_audit_rules_login_events kind: Rule metadata: annotations: compliance.openshift.io/image-digest: pb-rhcos4hrdkm compliance.openshift.io/rule: audit-rules-login-events control.compliance.openshift.io/NIST-800-53: AU-2(d);AU-12(c);AC-6(9);CM-6(a) control.compliance.openshift.io/PCI-DSS: Req-10.2.3 policies.open-cluster-management.io/controls: AU-2(d),AU-12(c),AC-6(9),CM-6(a),Req-10.2.3 policies.open-cluster-management.io/standards: NIST-800-53,PCI-DSS creationTimestamp: "2022-10-19T12:07:08Z" generation: 1 labels: compliance.openshift.io/profile-bundle: rhcos4 name: rhcos4-audit-rules-login-events namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ProfileBundle name: rhcos4 uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d resourceVersion: "44819" uid: 75872f1f-3c93-40ca-a69d-44e5438824a4 rationale: Manual editing of these files may indicate nefarious activity, such as an attacker attempting to remove evidence of an intrusion. severity: medium title: Record Attempts to Alter Logon and Logout Events warning: Manual editing of these files may indicate nefarious activity, such as an attacker attempting to remove evidence of an intrusion. 5.4.1.1.1. Compliance Operator profile types There are two types of compliance profiles available: Platform and Node. Platform Platform scans target your OpenShift Container Platform cluster. Node Node scans target the nodes of the cluster. Important For compliance profiles that have Node and Platform applications, such as pci-dss compliance profiles, you must run both in your OpenShift Container Platform environment. 5.4.1.2. Additional resources Supported compliance profiles 5.4.2. Understanding the Custom Resource Definitions The Compliance Operator in the OpenShift Container Platform provides you with several Custom Resource Definitions (CRDs) to accomplish the compliance scans. To run a compliance scan, it leverages the predefined security policies, which are derived from the ComplianceAsCode community project. The Compliance Operator converts these security policies into CRDs, which you can use to run compliance scans and get remediations for the issues found. 5.4.2.1. CRDs workflow The CRD provides you the following workflow to complete the compliance scans: Define your compliance scan requirements Configure the compliance scan settings Process compliance requirements with compliance scans settings Monitor the compliance scans Check the compliance scan results 5.4.2.2. Defining the compliance scan requirements By default, the Compliance Operator CRDs include ProfileBundle and Profile objects, in which you can define and set the rules for your compliance scan requirements. 
You can also customize the default profiles by using a TailoredProfile object. 5.4.2.2.1. ProfileBundle object When you install the Compliance Operator, it includes ready-to-run ProfileBundle objects. The Compliance Operator parses the ProfileBundle object and creates a Profile object for each profile in the bundle. It also parses Rule and Variable objects, which are used by the Profile object. Example ProfileBundle object apiVersion: compliance.openshift.io/v1alpha1 kind: ProfileBundle name: <profile bundle name> namespace: openshift-compliance status: dataStreamStatus: VALID 1 1 Indicates whether the Compliance Operator was able to parse the content files. Note When the contentFile fails, an errorMessage attribute appears, which provides details of the error that occurred. Troubleshooting When you roll back to a known content image from an invalid image, the ProfileBundle object stops responding and displays the PENDING state. As a workaround, you can move to a different image than the previous one. Alternatively, you can delete and re-create the ProfileBundle object to return to the working state. 5.4.2.2.2. Profile object The Profile object defines the rules and variables that can be evaluated for a certain compliance standard. It contains parsed out details about an OpenSCAP profile, such as its XCCDF identifier and profile checks for a Node or Platform type. You can either directly use the Profile object or further customize it using a TailoredProfile object. Note You cannot create or modify the Profile object manually because it is derived from a single ProfileBundle object. Typically, a single ProfileBundle object can include several Profile objects. Example Profile object apiVersion: compliance.openshift.io/v1alpha1 description: <description of the profile> id: xccdf_org.ssgproject.content_profile_moderate 1 kind: Profile metadata: annotations: compliance.openshift.io/product: <product name> compliance.openshift.io/product-type: Node 2 creationTimestamp: "YYYY-MM-DDTMM:HH:SSZ" generation: 1 labels: compliance.openshift.io/profile-bundle: <profile bundle name> name: rhcos4-moderate namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ProfileBundle name: <profile bundle name> uid: <uid string> resourceVersion: "<version number>" selfLink: /apis/compliance.openshift.io/v1alpha1/namespaces/openshift-compliance/profiles/rhcos4-moderate uid: <uid string> rules: 3 - rhcos4-account-disable-post-pw-expiration - rhcos4-accounts-no-uid-except-zero - rhcos4-audit-rules-dac-modification-chmod - rhcos4-audit-rules-dac-modification-chown title: <title of the profile> 1 Specify the XCCDF name of the profile. Use this identifier when you define a ComplianceScan object as the value of the profile attribute of the scan. 2 Specify either a Node or Platform . Node profiles scan the cluster nodes and platform profiles scan the Kubernetes platform. 3 Specify the list of rules for the profile. Each rule corresponds to a single check. 5.4.2.2.3. Rule object The Rule objects, which form the profiles, are also exposed as objects. Use the Rule object to define your compliance check requirements and specify how it could be fixed. 
Example Rule object apiVersion: compliance.openshift.io/v1alpha1 checkType: Platform 1 description: <description of the rule> id: xccdf_org.ssgproject.content_rule_configure_network_policies_namespaces 2 instructions: <manual instructions for the scan> kind: Rule metadata: annotations: compliance.openshift.io/rule: configure-network-policies-namespaces control.compliance.openshift.io/CIS-OCP: 5.3.2 control.compliance.openshift.io/NERC-CIP: CIP-003-3 R4;CIP-003-3 R4.2;CIP-003-3 R5;CIP-003-3 R6;CIP-004-3 R2.2.4;CIP-004-3 R3;CIP-007-3 R2;CIP-007-3 R2.1;CIP-007-3 R2.2;CIP-007-3 R2.3;CIP-007-3 R5.1;CIP-007-3 R6.1 control.compliance.openshift.io/NIST-800-53: AC-4;AC-4(21);CA-3(5);CM-6;CM-6(1);CM-7;CM-7(1);SC-7;SC-7(3);SC-7(5);SC-7(8);SC-7(12);SC-7(13);SC-7(18) labels: compliance.openshift.io/profile-bundle: ocp4 name: ocp4-configure-network-policies-namespaces namespace: openshift-compliance rationale: <description of why this rule is checked> severity: high 3 title: <summary of the rule> 1 Specify the type of check this rule executes. Node profiles scan the cluster nodes and Platform profiles scan the Kubernetes platform. An empty value indicates there is no automated check. 2 Specify the XCCDF name of the rule, which is parsed directly from the datastream. 3 Specify the severity of the rule when it fails. Note The Rule object gets an appropriate label for an easy identification of the associated ProfileBundle object. The ProfileBundle also gets specified in the OwnerReferences of this object. 5.4.2.2.4. TailoredProfile object Use the TailoredProfile object to modify the default Profile object based on your organization requirements. You can enable or disable rules, set variable values, and provide justification for the customization. After validation, the TailoredProfile object creates a ConfigMap , which can be referenced by a ComplianceScan object. Tip You can use the TailoredProfile object by referencing it in a ScanSettingBinding object. For more information about ScanSettingBinding , see ScanSettingBinding object. Example TailoredProfile object apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: rhcos4-with-usb spec: extends: rhcos4-moderate 1 title: <title of the tailored profile> disableRules: - name: <name of a rule object to be disabled> rationale: <description of why this rule is checked> status: id: xccdf_compliance.openshift.io_profile_rhcos4-with-usb 2 outputRef: name: rhcos4-with-usb-tp 3 namespace: openshift-compliance state: READY 4 1 This is optional. Name of the Profile object upon which the TailoredProfile is built. If no value is set, a new profile is created from the enableRules list. 2 Specifies the XCCDF name of the tailored profile. 3 Specifies the ConfigMap name, which can be used as the value of the tailoringConfigMap.name attribute of a ComplianceScan . 4 Shows the state of the object such as READY , PENDING , and FAILURE . If the state of the object is ERROR , then the attribute status.errorMessage provides the reason for the failure. With the TailoredProfile object, it is possible to create a new Profile object using the TailoredProfile construct. To create a new Profile , set the following configuration parameters : an appropriate title extends value must be empty scan type annotation on the TailoredProfile object: compliance.openshift.io/product-type: Platform/Node Note If you have not set the product-type annotation, the Compliance Operator defaults to Platform scan type. 
Adding the -node suffix to the name of the TailoredProfile object results in node scan type. 5.4.2.3. Configuring the compliance scan settings After you have defined the requirements of the compliance scan, you can configure it by specifying the type of the scan, occurrence of the scan, and location of the scan. To do so, Compliance Operator provides you with a ScanSetting object. 5.4.2.3.1. ScanSetting object Use the ScanSetting object to define and reuse the operational policies to run your scans. By default, the Compliance Operator creates the following ScanSetting objects: default - it runs a scan every day at 1 AM on both master and worker nodes using a 1Gi Persistent Volume (PV) and keeps the last three results. Remediation is neither applied nor updated automatically. default-auto-apply - it runs a scan every day at 1AM on both control plane and worker nodes using a 1Gi Persistent Volume (PV) and keeps the last three results. Both autoApplyRemediations and autoUpdateRemediations are set to true. Example ScanSetting object apiVersion: compliance.openshift.io/v1alpha1 autoApplyRemediations: true 1 autoUpdateRemediations: true 2 kind: ScanSetting maxRetryOnTimeout: 3 metadata: creationTimestamp: "2022-10-18T20:21:00Z" generation: 1 name: default-auto-apply namespace: openshift-compliance resourceVersion: "38840" uid: 8cb0967d-05e0-4d7a-ac1c-08a7f7e89e84 rawResultStorage: nodeSelector: node-role.kubernetes.io/master: "" pvAccessModes: - ReadWriteOnce rotation: 3 3 size: 1Gi 4 tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists roles: 5 - master - worker scanTolerations: - operator: Exists schedule: 0 1 * * * 6 showNotApplicable: false strictNodeScan: true timeout: 30m 1 Set to true to enable auto remediations. Set to false to disable auto remediations. 2 Set to true to enable auto remediations for content updates. Set to false to disable auto remediations for content updates. 3 Specify the number of stored scans in the raw result format. The default value is 3 . As the older results get rotated, the administrator must store the results elsewhere before the rotation happens. 4 Specify the storage size that should be created for the scan to store the raw results. The default value is 1Gi 6 Specify how often the scan should be run in cron format. Note To disable the rotation policy, set the value to 0 . 5 Specify the node-role.kubernetes.io label value to schedule the scan for Node type. This value has to match the name of a MachineConfigPool . 5.4.2.4. Processing the compliance scan requirements with compliance scans settings When you have defined the compliance scan requirements and configured the settings to run the scans, then the Compliance Operator processes it using the ScanSettingBinding object. 5.4.2.4.1. ScanSettingBinding object Use the ScanSettingBinding object to specify your compliance requirements with reference to the Profile or TailoredProfile object. It is then linked to a ScanSetting object, which provides the operational constraints for the scan. Then the Compliance Operator generates the ComplianceSuite object based on the ScanSetting and ScanSettingBinding objects. 
Example ScanSettingBinding object apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: <name of the scan> profiles: 1 # Node checks - name: rhcos4-with-usb kind: TailoredProfile apiGroup: compliance.openshift.io/v1alpha1 # Cluster checks - name: ocp4-moderate kind: Profile apiGroup: compliance.openshift.io/v1alpha1 settingsRef: 2 name: my-companys-constraints kind: ScanSetting apiGroup: compliance.openshift.io/v1alpha1 1 Specify the details of the Profile or TailoredProfile object to scan your environment. 2 Specify the operational constraints, such as schedule and storage size. The creation of ScanSetting and ScanSettingBinding objects results in the compliance suite. To get the list of compliance suites, run the following command: USD oc get compliancesuites Important If you delete a ScanSettingBinding object, then the compliance suite is also deleted. 5.4.2.5. Tracking the compliance scans After the compliance suite is created, you can monitor the status of the deployed scans using the ComplianceSuite object. 5.4.2.5.1. ComplianceSuite object The ComplianceSuite object helps you keep track of the state of the scans. It contains the raw settings to create scans and the overall result. For Node type scans, you should map the scan to the MachineConfigPool , since it contains the remediations for any issues. If you specify a label, ensure it directly applies to a pool. Example ComplianceSuite object apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceSuite metadata: name: <name of the scan> spec: autoApplyRemediations: false 1 schedule: "0 1 * * *" 2 scans: 3 - name: workers-scan scanType: Node profile: xccdf_org.ssgproject.content_profile_moderate content: ssg-rhcos4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc... rule: "xccdf_org.ssgproject.content_rule_no_netrc_files" nodeSelector: node-role.kubernetes.io/worker: "" status: Phase: DONE 4 Result: NON-COMPLIANT 5 scanStatuses: - name: workers-scan phase: DONE result: NON-COMPLIANT 1 Set to true to enable auto remediations. Set to false to disable auto remediations. 2 Specify how often the scan should be run in cron format. 3 Specify a list of scan specifications to run in the cluster. 4 Indicates the progress of the scans. 5 Indicates the overall verdict of the suite. In the background, the suite creates the ComplianceScan object based on the scans parameter. You can programmatically fetch the ComplianceSuites events. To get the events for the suite, run the following command: USD oc get events --field-selector involvedObject.kind=ComplianceSuite,involvedObject.name=<name of the suite> Important You might introduce errors when you manually define the ComplianceSuite object, since it contains the XCCDF attributes. 5.4.2.5.2. Advanced ComplianceScan Object The Compliance Operator includes options for advanced users for debugging or integrating with existing tooling. It is recommended that you do not create a ComplianceScan object directly; instead, manage it using a ComplianceSuite object. Example Advanced ComplianceScan object apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceScan metadata: name: <name of the scan> spec: scanType: Node 1 profile: xccdf_org.ssgproject.content_profile_moderate 2 content: ssg-ocp4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc... 
3 rule: "xccdf_org.ssgproject.content_rule_no_netrc_files" 4 nodeSelector: 5 node-role.kubernetes.io/worker: "" status: phase: DONE 6 result: NON-COMPLIANT 7 1 Specify either Node or Platform . Node profiles scan the cluster nodes and platform profiles scan the Kubernetes platform. 2 Specify the XCCDF identifier of the profile that you want to run. 3 Specify the container image that encapsulates the profile files. 4 Optional. Specify a single rule for the scan to run. This rule has to be identified with the XCCDF ID, and has to belong to the specified profile. Note If you skip the rule parameter, then the scan runs for all the available rules of the specified profile. 5 If you are on OpenShift Container Platform and want to generate a remediation, then the nodeSelector label has to match the MachineConfigPool label. Note If you do not specify the nodeSelector parameter or match the MachineConfig label, the scan still runs, but it does not create a remediation. 6 Indicates the current phase of the scan. 7 Indicates the verdict of the scan. Important If you delete a ComplianceSuite object, then all the associated scans get deleted. When the scan is complete, it generates the results as Custom Resources of the ComplianceCheckResult object. However, the raw results are available in ARF format. These results are stored in a Persistent Volume (PV), which has a Persistent Volume Claim (PVC) associated with the name of the scan. You can programmatically fetch the ComplianceScans events. To get the events for the scan, run the following command: oc get events --field-selector involvedObject.kind=ComplianceScan,involvedObject.name=<name of the suite> 5.4.2.6. Viewing the compliance results When the compliance suite reaches the DONE phase, you can view the scan results and possible remediations. 5.4.2.6.1. ComplianceCheckResult object When you run a scan with a specific profile, several rules in the profiles are verified. For each of these rules, a ComplianceCheckResult object is created, which provides the state of the cluster for a specific rule. Example ComplianceCheckResult object apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceCheckResult metadata: labels: compliance.openshift.io/check-severity: medium compliance.openshift.io/check-status: FAIL compliance.openshift.io/suite: example-compliancesuite compliance.openshift.io/scan-name: workers-scan name: workers-scan-no-direct-root-logins namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ComplianceScan name: workers-scan description: <description of scan check> instructions: <manual instructions for the scan> id: xccdf_org.ssgproject.content_rule_no_direct_root_logins severity: medium 1 status: FAIL 2 1 Describes the severity of the scan check. 2 Describes the result of the check. The possible values are: PASS: check was successful. FAIL: check was unsuccessful. INFO: check was successful and found something not severe enough to be considered an error. MANUAL: check cannot automatically assess the status and manual check is required. INCONSISTENT: different nodes report different results. ERROR: the check ran, but could not complete. NOTAPPLICABLE: check did not run as it is not applicable. To get all the check results from a suite, run the following command: oc get compliancecheckresults \ -l compliance.openshift.io/suite=workers-compliancesuite 5.4.2.6.2. 
ComplianceRemediation object For a specific check, you can have a datastream-specified fix. However, if a Kubernetes fix is available, then the Compliance Operator creates a ComplianceRemediation object. Example ComplianceRemediation object apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceRemediation metadata: labels: compliance.openshift.io/suite: example-compliancesuite compliance.openshift.io/scan-name: workers-scan machineconfiguration.openshift.io/role: worker name: workers-scan-disable-users-coredumps namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ComplianceCheckResult name: workers-scan-disable-users-coredumps uid: <UID> spec: apply: false 1 object: current: 2 apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:,%2A%20%20%20%20%20hard%20%20%20core%20%20%20%200 filesystem: root mode: 420 path: /etc/security/limits.d/75-disable_users_coredumps.conf outdated: {} 3 1 true indicates the remediation was applied. false indicates the remediation was not applied. 2 Includes the definition of the remediation. 3 Indicates a remediation that was previously parsed from an earlier version of the content. The Compliance Operator still retains the outdated objects to give the administrator a chance to review the new remediations before applying them. To get all the remediations from a suite, run the following command: oc get complianceremediations \ -l compliance.openshift.io/suite=workers-compliancesuite To list all failing checks that can be remediated automatically, run the following command: oc get compliancecheckresults \ -l 'compliance.openshift.io/check-status in (FAIL),compliance.openshift.io/automated-remediation' To list all failing checks that can be remediated manually, run the following command: oc get compliancecheckresults \ -l 'compliance.openshift.io/check-status in (FAIL),!compliance.openshift.io/automated-remediation' 5.5. Compliance Operator management 5.5.1. Installing the Compliance Operator Before you can use the Compliance Operator, you must ensure it is deployed in the cluster. Important The Compliance Operator might report incorrect results on managed platforms, such as OpenShift Dedicated, Red Hat OpenShift Service on AWS Classic, and Microsoft Azure Red Hat OpenShift. For more information, see the Knowledgebase article Compliance Operator reports incorrect results on Managed Services . Important Before deploying the Compliance Operator, you are required to define persistent storage in your cluster to store the raw results output. For more information, see Persistent storage overview and Managing the default storage class . 5.5.1.1. Installing the Compliance Operator through the web console Prerequisites You must have admin privileges. You must have a StorageClass resource configured. Procedure In the OpenShift Container Platform web console, navigate to Operators OperatorHub . Search for the Compliance Operator, then click Install . Keep the default selection of Installation mode and namespace to ensure that the Operator will be installed to the openshift-compliance namespace. Click Install . Verification To confirm that the installation is successful: Navigate to the Operators Installed Operators page. Check that the Compliance Operator is installed in the openshift-compliance namespace and its status is Succeeded. 
If the Operator is not installed successfully: Navigate to the Operators Installed Operators page and inspect the Status column for any errors or failures. Navigate to the Workloads Pods page and check the logs in any pods in the openshift-compliance project that are reporting issues. Important If the restricted Security Context Constraints (SCC) have been modified to contain the system:authenticated group or has added requiredDropCapabilities , the Compliance Operator may not function properly due to permissions issues. You can create a custom SCC for the Compliance Operator scanner pod service account. For more information, see Creating a custom SCC for the Compliance Operator . 5.5.1.2. Installing the Compliance Operator using the CLI Prerequisites You must have admin privileges. You must have a StorageClass resource configured. Procedure Define a Namespace object: Example namespace-object.yaml apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: "true" pod-security.kubernetes.io/enforce: privileged 1 name: openshift-compliance 1 In OpenShift Container Platform 4.14, the pod security label must be set to privileged at the namespace level. Create the Namespace object: USD oc create -f namespace-object.yaml Define an OperatorGroup object: Example operator-group-object.yaml apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: compliance-operator namespace: openshift-compliance spec: targetNamespaces: - openshift-compliance Create the OperatorGroup object: USD oc create -f operator-group-object.yaml Define a Subscription object: Example subscription-object.yaml apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: compliance-operator-sub namespace: openshift-compliance spec: channel: "stable" installPlanApproval: Automatic name: compliance-operator source: redhat-operators sourceNamespace: openshift-marketplace Create the Subscription object: USD oc create -f subscription-object.yaml Note If you are setting the global scheduler feature and enable defaultNodeSelector , you must create the namespace manually and update the annotations of the openshift-compliance namespace, or the namespace where the Compliance Operator was installed, with openshift.io/node-selector: "" . This removes the default node selector and prevents deployment failures. Verification Verify the installation succeeded by inspecting the CSV file: USD oc get csv -n openshift-compliance Verify that the Compliance Operator is up and running: USD oc get deploy -n openshift-compliance 5.5.1.3. Installing the Compliance Operator on ROSA hosted control planes (HCP) As of the Compliance Operator 1.5.0 release, the Operator is tested against Red Hat OpenShift Service on AWS using Hosted control planes. Red Hat OpenShift Service on AWS Hosted control planes clusters have restricted access to the control plane, which is managed by Red Hat. By default, the Compliance Operator will schedule to nodes within the master node pool, which is not available in Red Hat OpenShift Service on AWS Hosted control planes installations. This requires you to configure the Subscription object in a way that allows the Operator to schedule on available node pools. This step is necessary for a successful installation on Red Hat OpenShift Service on AWS Hosted control planes clusters. Prerequisites You must have admin privileges. You must have a StorageClass resource configured. 
Procedure Define a Namespace object: Example namespace-object.yaml file apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: "true" pod-security.kubernetes.io/enforce: privileged 1 name: openshift-compliance 1 In OpenShift Container Platform 4.14, the pod security label must be set to privileged at the namespace level. Create the Namespace object by running the following command: USD oc create -f namespace-object.yaml Define an OperatorGroup object: Example operator-group-object.yaml file apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: compliance-operator namespace: openshift-compliance spec: targetNamespaces: - openshift-compliance Create the OperatorGroup object by running the following command: USD oc create -f operator-group-object.yaml Define a Subscription object: Example subscription-object.yaml file apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: compliance-operator-sub namespace: openshift-compliance spec: channel: "stable" installPlanApproval: Automatic name: compliance-operator source: redhat-operators sourceNamespace: openshift-marketplace config: nodeSelector: node-role.kubernetes.io/worker: "" 1 1 Update the Operator deployment to deploy on worker nodes. Create the Subscription object by running the following command: USD oc create -f subscription-object.yaml Verification Verify that the installation succeeded by running the following command to inspect the cluster service version (CSV) file: USD oc get csv -n openshift-compliance Verify that the Compliance Operator is up and running by using the following command: USD oc get deploy -n openshift-compliance Important If the restricted Security Context Constraints (SCC) have been modified to contain the system:authenticated group or has added requiredDropCapabilities , the Compliance Operator may not function properly due to permissions issues. You can create a custom SCC for the Compliance Operator scanner pod service account. For more information, see Creating a custom SCC for the Compliance Operator . 5.5.1.4. Installing the Compliance Operator on Hypershift hosted control planes The Compliance Operator can be installed in hosted control planes using the OperatorHub by creating a Subscription file. Important Hosted control planes is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites You must have admin privileges. Procedure Define a Namespace object similar to the following: Example namespace-object.yaml apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: "true" pod-security.kubernetes.io/enforce: privileged 1 name: openshift-compliance 1 In OpenShift Container Platform 4.14, the pod security label must be set to privileged at the namespace level. 
Create the Namespace object by running the following command: USD oc create -f namespace-object.yaml Define an OperatorGroup object: Example operator-group-object.yaml apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: compliance-operator namespace: openshift-compliance spec: targetNamespaces: - openshift-compliance Create the OperatorGroup object by running the following command: USD oc create -f operator-group-object.yaml Define a Subscription object: Example subscription-object.yaml apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: compliance-operator-sub namespace: openshift-compliance spec: channel: "stable" installPlanApproval: Automatic name: compliance-operator source: redhat-operators sourceNamespace: openshift-marketplace config: nodeSelector: node-role.kubernetes.io/worker: "" env: - name: PLATFORM value: "HyperShift" Create the Subscription object by running the following command: USD oc create -f subscription-object.yaml Verification Verify the installation succeeded by inspecting the CSV file by running the following command: USD oc get csv -n openshift-compliance Verify that the Compliance Operator is up and running by running the following command: USD oc get deploy -n openshift-compliance Additional resources Hosted control planes overview 5.5.1.5. Additional resources The Compliance Operator is supported in a restricted network environment. For more information, see Using Operator Lifecycle Manager on restricted networks . 5.5.2. Updating the Compliance Operator As a cluster administrator, you can update the Compliance Operator on your OpenShift Container Platform cluster. Important It is recommended to update the Compliance Operator to version 1.3.1 or later before updating your OpenShift Container Platform cluster to version 4.14 or later. 5.5.2.1. Preparing for an Operator update The subscription of an installed Operator specifies an update channel that tracks and receives updates for the Operator. You can change the update channel to start tracking and receiving updates from a newer channel. The names of update channels in a subscription can differ between Operators, but the naming scheme typically follows a common convention within a given Operator. For example, channel names might follow a minor release update stream for the application provided by the Operator ( 1.2 , 1.3 ) or a release frequency ( stable , fast ). Note You cannot change installed Operators to a channel that is older than the current channel. Red Hat Customer Portal Labs include the following application that helps administrators prepare to update their Operators: Red Hat OpenShift Container Platform Operator Update Information Checker You can use the application to search for Operator Lifecycle Manager-based Operators and verify the available Operator version per update channel across different versions of OpenShift Container Platform. Cluster Version Operator-based Operators are not included. 5.5.2.2. Changing the update channel for an Operator You can change the update channel for an Operator by using the OpenShift Container Platform web console. Tip If the approval strategy in the subscription is set to Automatic , the update process initiates as soon as a new Operator version is available in the selected channel. If the approval strategy is set to Manual , you must manually approve pending updates. Prerequisites An Operator previously installed using Operator Lifecycle Manager (OLM). 
Procedure In the Administrator perspective of the web console, navigate to Operators Installed Operators . Click the name of the Operator you want to change the update channel for. Click the Subscription tab. Click the name of the update channel under Update channel . Click the newer update channel that you want to change to, then click Save . For subscriptions with an Automatic approval strategy, the update begins automatically. Navigate back to the Operators Installed Operators page to monitor the progress of the update. When complete, the status changes to Succeeded and Up to date . For subscriptions with a Manual approval strategy, you can manually approve the update from the Subscription tab. 5.5.2.3. Manually approving a pending Operator update If an installed Operator has the approval strategy in its subscription set to Manual , when new updates are released in its current update channel, the update must be manually approved before installation can begin. Prerequisites An Operator previously installed using Operator Lifecycle Manager (OLM). Procedure In the Administrator perspective of the OpenShift Container Platform web console, navigate to Operators Installed Operators . Operators that have a pending update display a status with Upgrade available . Click the name of the Operator you want to update. Click the Subscription tab. Any updates requiring approval are displayed to Upgrade status . For example, it might display 1 requires approval . Click 1 requires approval , then click Preview Install Plan . Review the resources that are listed as available for update. When satisfied, click Approve . Navigate back to the Operators Installed Operators page to monitor the progress of the update. When complete, the status changes to Succeeded and Up to date . 5.5.3. Managing the Compliance Operator This section describes the lifecycle of security content, including how to use an updated version of compliance content and how to create a custom ProfileBundle object. 5.5.3.1. ProfileBundle CR example The ProfileBundle object requires two pieces of information: the URL of a container image that contains the contentImage and the file that contains the compliance content. The contentFile parameter is relative to the root of the file system. You can define the built-in rhcos4 ProfileBundle object as shown in the following example: apiVersion: compliance.openshift.io/v1alpha1 kind: ProfileBundle metadata: creationTimestamp: "2022-10-19T12:06:30Z" finalizers: - profilebundle.finalizers.compliance.openshift.io generation: 1 name: rhcos4 namespace: openshift-compliance resourceVersion: "46741" uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d spec: contentFile: ssg-rhcos4-ds.xml 1 contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:900e... 2 status: conditions: - lastTransitionTime: "2022-10-19T12:07:51Z" message: Profile bundle successfully parsed reason: Valid status: "True" type: Ready dataStreamStatus: VALID 1 Location of the file containing the compliance content. 2 Content image location. Important The base image used for the content images must include coreutils . 5.5.3.2. Updating security content Security content is included as container images that the ProfileBundle objects refer to. 
To accurately track updates to ProfileBundles and the custom resources parsed from the bundles such as rules or profiles, identify the container image with the compliance content using a digest instead of a tag: USD oc -n openshift-compliance get profilebundles rhcos4 -oyaml Example output apiVersion: compliance.openshift.io/v1alpha1 kind: ProfileBundle metadata: creationTimestamp: "2022-10-19T12:06:30Z" finalizers: - profilebundle.finalizers.compliance.openshift.io generation: 1 name: rhcos4 namespace: openshift-compliance resourceVersion: "46741" uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d spec: contentFile: ssg-rhcos4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:900e... 1 status: conditions: - lastTransitionTime: "2022-10-19T12:07:51Z" message: Profile bundle successfully parsed reason: Valid status: "True" type: Ready dataStreamStatus: VALID 1 Security container image. Each ProfileBundle is backed by a deployment. When the Compliance Operator detects that the container image digest has changed, the deployment is updated to reflect the change and parse the content again. Using the digest instead of a tag ensures that you use a stable and predictable set of profiles. 5.5.3.3. Additional resources The Compliance Operator is supported in a restricted network environment. For more information, see Using Operator Lifecycle Manager on restricted networks . 5.5.4. Uninstalling the Compliance Operator You can remove the OpenShift Compliance Operator from your cluster by using the OpenShift Container Platform web console or the CLI. 5.5.4.1. Uninstalling the OpenShift Compliance Operator from OpenShift Container Platform using the web console To remove the Compliance Operator, you must first delete the objects in the namespace. After the objects are removed, you can remove the Operator and its namespace by deleting the openshift-compliance project. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. The OpenShift Compliance Operator must be installed. Procedure To remove the Compliance Operator by using the OpenShift Container Platform web console: Go to the Operators Installed Operators Compliance Operator page. Click All instances . In All namespaces , click the Options menu and delete all ScanSettingBinding, ComplianceSuite, ComplianceScan, and ProfileBundle objects. Switch to the Administration Operators Installed Operators page. Click the Options menu on the Compliance Operator entry and select Uninstall Operator . Switch to the Home Projects page. Search for 'compliance'. Click the Options menu next to the openshift-compliance project, and select Delete Project . Confirm the deletion by typing openshift-compliance in the dialog box, and click Delete . 5.5.4.2. Uninstalling the OpenShift Compliance Operator from OpenShift Container Platform using the CLI To remove the Compliance Operator, you must first delete the objects in the namespace. After the objects are removed, you can remove the Operator and its namespace by deleting the openshift-compliance project. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. The OpenShift Compliance Operator must be installed. Procedure Delete all objects in the namespace. 
Delete the ScanSettingBinding objects: USD oc delete ssb --all -n openshift-compliance Delete the ScanSetting objects: USD oc delete ss --all -n openshift-compliance Delete the ComplianceSuite objects: USD oc delete suite --all -n openshift-compliance Delete the ComplianceScan objects: USD oc delete scan --all -n openshift-compliance Delete the ProfileBundle objects: USD oc delete profilebundle.compliance --all -n openshift-compliance Delete the Subscription object: USD oc delete sub --all -n openshift-compliance Delete the CSV object: USD oc delete csv --all -n openshift-compliance Delete the project: USD oc delete project openshift-compliance Example output project.project.openshift.io "openshift-compliance" deleted Verification Confirm the namespace is deleted: USD oc get project/openshift-compliance Example output Error from server (NotFound): namespaces "openshift-compliance" not found 5.6. Compliance Operator scan management 5.6.1. Supported compliance profiles There are several profiles available as part of the Compliance Operator (CO) installation. While you can use the following profiles to assess gaps in a cluster, usage alone does not infer or guarantee compliance with a particular profile and is not an auditor. In order to be compliant or certified under these various standards, you need to engage an authorized auditor such as a Qualified Security Assessor (QSA), Joint Authorization Board (JAB), or other industry recognized regulatory authority to assess your environment. You are required to work with an authorized auditor to achieve compliance with a standard. For more information on compliance support for all Red Hat products, see Product Compliance . Important The Compliance Operator might report incorrect results on some managed platforms, such as OpenShift Dedicated and Azure Red Hat OpenShift. For more information, see the Red Hat Knowledgebase Solution #6983418 . 5.6.1.1. Compliance profiles The Compliance Operator provides profiles to meet industry standard benchmarks. Note The following tables reflect the latest available profiles in the Compliance Operator. 5.6.1.1.1. CIS compliance profiles Table 5.1. Supported CIS compliance profiles Profile Profile title Application Industry compliance benchmark Supported architectures Supported platforms ocp4-cis [1] CIS Red Hat OpenShift Container Platform Benchmark v1.5.0 Platform CIS Benchmarks TM [1] x86_64 ppc64le s390x ocp4-cis-1-4 [3] CIS Red Hat OpenShift Container Platform Benchmark v1.4.0 Platform CIS Benchmarks TM [4] x86_64 ppc64le s390x ocp4-cis-1-5 CIS Red Hat OpenShift Container Platform Benchmark v1.5.0 Platform CIS Benchmarks TM [4] x86_64 ppc64le s390x ocp4-cis-node [1] CIS Red Hat OpenShift Container Platform Benchmark v1.5.0 Node [2] CIS Benchmarks TM [4] x86_64 ppc64le s390x Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) ocp4-cis-node-1-4 [3] CIS Red Hat OpenShift Container Platform Benchmark v1.4.0 Node [2] CIS Benchmarks TM [4] x86_64 ppc64le s390x Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) ocp4-cis-node-1-5 CIS Red Hat OpenShift Container Platform Benchmark v1.5.0 Node [2] CIS Benchmarks TM [4] x86_64 ppc64le s390x Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) The ocp4-cis and ocp4-cis-node profiles maintain the most up-to-date version of the CIS benchmark as it becomes available in the Compliance Operator. If you want to adhere to a specific version, such as CIS v1.4.0, use the ocp4-cis-1-4 and ocp4-cis-node-1-4 profiles. 
Node profiles must be used with the relevant Platform profile. For more information, see Compliance Operator profile types . CIS v1.4.0 is superceded by CIS v1.5.0. It is recommended to apply the latest profile to your environment. To locate the CIS OpenShift Container Platform v4 Benchmark, go to CIS Benchmarks and click Download Latest CIS Benchmark , where you can then register to download the benchmark. 5.6.1.1.2. Essential Eight compliance profiles Table 5.2. Supported Essential Eight compliance profiles Profile Profile title Application Industry compliance benchmark Supported architectures Supported platforms ocp4-e8 Australian Cyber Security Centre (ACSC) Essential Eight Platform ACSC Hardening Linux Workstations and Servers x86_64 rhcos4-e8 Australian Cyber Security Centre (ACSC) Essential Eight Node ACSC Hardening Linux Workstations and Servers x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) 5.6.1.1.3. FedRAMP High compliance profiles Table 5.3. Supported FedRAMP High compliance profiles Profile Profile title Application Industry compliance benchmark Supported architectures Supported platforms ocp4-high [1] NIST 800-53 High-Impact Baseline for Red Hat OpenShift - Platform level Platform NIST SP-800-53 Release Search x86_64 ocp4-high-node [1] NIST 800-53 High-Impact Baseline for Red Hat OpenShift - Node level Node [2] NIST SP-800-53 Release Search x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) ocp4-high-node-rev-4 NIST 800-53 High-Impact Baseline for Red Hat OpenShift - Node level Node [2] NIST SP-800-53 Release Search x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) ocp4-high-rev-4 NIST 800-53 High-Impact Baseline for Red Hat OpenShift - Platform level Platform NIST SP-800-53 Release Search x86_64 rhcos4-high [1] NIST 800-53 High-Impact Baseline for Red Hat Enterprise Linux CoreOS Node NIST SP-800-53 Release Search x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) rhcos4-high-rev-4 NIST 800-53 High-Impact Baseline for Red Hat Enterprise Linux CoreOS Node NIST SP-800-53 Release Search x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) The ocp4-high , ocp4-high-node and rhcos4-high profiles maintain the most up-to-date version of the FedRAMP High standard as it becomes available in the Compliance Operator. If you want to adhere to a specific version, such as FedRAMP high R4, use the ocp4-high-rev-4 and ocp4-high-node-rev-4 profiles. Node profiles must be used with the relevant Platform profile. For more information, see Compliance Operator profile types . 5.6.1.1.4. FedRAMP Moderate compliance profiles Table 5.4. 
Supported FedRAMP Moderate compliance profiles Profile Profile title Application Industry compliance benchmark Supported architectures Supported platforms ocp4-moderate [1] NIST 800-53 Moderate-Impact Baseline for Red Hat OpenShift - Platform level Platform NIST SP-800-53 Release Search x86_64 ppc64le s390x ocp4-moderate-node [1] NIST 800-53 Moderate-Impact Baseline for Red Hat OpenShift - Node level Node [2] NIST SP-800-53 Release Search x86_64 ppc64le s390x Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) ocp4-moderate-node-rev-4 NIST 800-53 Moderate-Impact Baseline for Red Hat OpenShift - Node level Node [2] NIST SP-800-53 Release Search x86_64 ppc64le s390x Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) ocp4-moderate-rev-4 NIST 800-53 Moderate-Impact Baseline for Red Hat OpenShift - Platform level Platform NIST SP-800-53 Release Search x86_64 ppc64le s390x rhcos4-moderate [1] NIST 800-53 Moderate-Impact Baseline for Red Hat Enterprise Linux CoreOS Node NIST SP-800-53 Release Search x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) rhcos4-moderate-rev-4 NIST 800-53 Moderate-Impact Baseline for Red Hat Enterprise Linux CoreOS Node NIST SP-800-53 Release Search x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) The ocp4-moderate , ocp4-moderate-node and rhcos4-moderate profiles maintain the most up-to-date version of the FedRAMP Moderate standard as it becomes available in the Compliance Operator. If you want to adhere to a specific version, such as FedRAMP Moderate R4, use the ocp4-moderate-rev-4 and ocp4-moderate-node-rev-4 profiles. Node profiles must be used with the relevant Platform profile. For more information, see Compliance Operator profile types . 5.6.1.1.5. NERC-CIP compliance profiles Table 5.5. Supported NERC-CIP compliance profiles Profile Profile title Application Industry compliance benchmark Supported architectures Supported platforms ocp4-nerc-cip North American Electric Reliability Corporation (NERC) Critical Infrastructure Protection (CIP) cybersecurity standards profile for the OpenShift Container Platform - Platform level Platform NERC CIP Standards x86_64 ocp4-nerc-cip-node North American Electric Reliability Corporation (NERC) Critical Infrastructure Protection (CIP) cybersecurity standards profile for the OpenShift Container Platform - Node level Node [1] NERC CIP Standards x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) rhcos4-nerc-cip North American Electric Reliability Corporation (NERC) Critical Infrastructure Protection (CIP) cybersecurity standards profile for Red Hat Enterprise Linux CoreOS Node NERC CIP Standards x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) Node profiles must be used with the relevant Platform profile. For more information, see Compliance Operator profile types . 5.6.1.1.6. PCI-DSS compliance profiles Table 5.6. 
Supported PCI-DSS compliance profiles Profile Profile title Application Industry compliance benchmark Supported architectures Supported platforms ocp4-pci-dss [1] PCI-DSS v4 Control Baseline for OpenShift Container Platform 4 Platform PCI Security Standards (R) Council Document Library x86_64 ocp4-pci-dss-3-2 [3] PCI-DSS v3.2.1 Control Baseline for OpenShift Container Platform 4 Platform PCI Security Standards (R) Council Document Library x86_64 ppc64le s390x ocp4-pci-dss-4-0 PCI-DSS v4 Control Baseline for OpenShift Container Platform 4 Platform PCI Security Standards (R) Council Document Library x86_64 ocp4-pci-dss-node [1] PCI-DSS v4 Control Baseline for OpenShift Container Platform 4 Node [2] PCI Security Standards (R) Council Document Library x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) ocp4-pci-dss-node-3-2 [3] PCI-DSS v3.2.1 Control Baseline for OpenShift Container Platform 4 Node [2] PCI Security Standards (R) Council Document Library x86_64 ppc64le s390x Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) ocp4-pci-dss-node-4-0 PCI-DSS v4 Control Baseline for OpenShift Container Platform 4 Node [2] PCI Security Standards (R) Council Document Library x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) The ocp4-pci-dss and ocp4-pci-dss-node profiles maintain the most up-to-date version of the PCI-DSS standard as it becomes available in the Compliance Operator. If you want to adhere to a specific version, such as PCI-DSS v3.2.1, use the ocp4-pci-dss-3-2 and ocp4-pci-dss-node-3-2 profiles. Node profiles must be used with the relevant Platform profile. For more information, see Compliance Operator profile types . PCI-DSS v3.2.1 is superceded by PCI-DSS v4. It is recommended to apply the latest profile to your environment. 5.6.1.1.7. STIG compliance profiles Table 5.7. 
Supported STIG compliance profiles Profile Profile title Application Industry compliance benchmark Supported architectures Supported platforms ocp4-stig [1] Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Openshift Platform DISA-STIG x86_64 ocp4-stig-node [1] Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Openshift Node [2] DISA-STIG x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) ocp4-stig-node-v1r1 [3] Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Openshift V1R1 Node [2] DISA-STIG x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) ocp4-stig-node-v2r1 Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Openshift V2R1 Node [2] DISA-STIG x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) ocp4-stig-v1r1 [3] Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Openshift V1R1 Platform DISA-STIG x86_64 ocp4-stig-v2r1 Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Openshift V2R1 Platform DISA-STIG x86_64 rhcos4-stig Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Openshift Node DISA-STIG x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) rhcos4-stig-v1r1 [3] Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Openshift V1R1 Node DISA-STIG [3] x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) rhcos4-stig-v2r1 Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Openshift V2R1 Node DISA-STIG x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) The ocp4-stig , ocp4-stig-node and rhcos4-stig profiles maintain the most up-to-date version of the DISA-STIG benchmark as it becomes available in the Compliance Operator. If you want to adhere to a specific version, such as DISA-STIG V2R1, use the ocp4-stig-v2r1 and ocp4-stig-node-v2r1 profiles. Node profiles must be used with the relevant Platform profile. For more information, see Compliance Operator profile types . DISA-STIG V1R1 is superceded by DISA-STIG V2R1. It is recommended to apply the latest profile to your environment. 5.6.1.1.8. About extended compliance profiles Some compliance profiles have controls that require following industry best practices, resulting in some profiles extending others. Combining the Center for Internet Security (CIS) best practices with National Institute of Standards and Technology (NIST) security frameworks establishes a path to a secure and compliant environment. For example, the NIST High-Impact and Moderate-Impact profiles extend the CIS profile to achieve compliance. As a result, extended compliance profiles eliminate the need to run both profiles in a single cluster. Table 5.8. Profile extensions Profile Extends ocp4-pci-dss ocp4-cis ocp4-pci-dss-node ocp4-cis-node ocp4-high ocp4-cis ocp4-high-node ocp4-cis-node ocp4-moderate ocp4-cis ocp4-moderate-node ocp4-cis-node ocp4-nerc-cip ocp4-moderate ocp4-nerc-cip-node ocp4-moderate-node 5.6.1.2. Additional resources Compliance Operator profile types 5.6.2. 
Compliance Operator scans The ScanSetting and ScanSettingBinding APIs are recommended to run compliance scans with the Compliance Operator. For more information on these API objects, run: USD oc explain scansettings or USD oc explain scansettingbindings 5.6.2.1. Running compliance scans You can run a scan using the Center for Internet Security (CIS) profiles. For convenience, the Compliance Operator creates a ScanSetting object with reasonable defaults on startup. This ScanSetting object is named default . Note For all-in-one control plane and worker nodes, the compliance scan runs twice on the worker and control plane nodes. The compliance scan might generate inconsistent scan results. You can avoid inconsistent results by defining only a single role in the ScanSetting object. For more information about inconsistent scan results, see Compliance Operator shows INCONSISTENT scan result with worker node . Procedure Inspect the ScanSetting object by running the following command: USD oc describe scansettings default -n openshift-compliance Example output Name: default Namespace: openshift-compliance Labels: <none> Annotations: <none> API Version: compliance.openshift.io/v1alpha1 Kind: ScanSetting Max Retry On Timeout: 3 Metadata: Creation Timestamp: 2024-07-16T14:56:42Z Generation: 2 Resource Version: 91655682 UID: 50358cf1-57a8-4f69-ac50-5c7a5938e402 Raw Result Storage: Node Selector: node-role.kubernetes.io/master: Pv Access Modes: ReadWriteOnce 1 Rotation: 3 2 Size: 1Gi 3 Storage Class Name: standard 4 Tolerations: Effect: NoSchedule Key: node-role.kubernetes.io/master Operator: Exists Effect: NoExecute Key: node.kubernetes.io/not-ready Operator: Exists Toleration Seconds: 300 Effect: NoExecute Key: node.kubernetes.io/unreachable Operator: Exists Toleration Seconds: 300 Effect: NoSchedule Key: node.kubernetes.io/memory-pressure Operator: Exists Roles: master 5 worker 6 Scan Tolerations: 7 Operator: Exists Schedule: 0 1 * * * 8 Show Not Applicable: false Strict Node Scan: true Suspend: false Timeout: 30m Events: <none> 1 The Compliance Operator creates a persistent volume (PV) that contains the results of the scans. By default, the PV will use access mode ReadWriteOnce because the Compliance Operator cannot make any assumptions about the storage classes configured on the cluster. Additionally, ReadWriteOnce access mode is available on most clusters. If you need to fetch the scan results, you can do so by using a helper pod, which also binds the volume. Volumes that use the ReadWriteOnce access mode can be mounted by only one pod at time, so it is important to remember to delete the helper pods. Otherwise, the Compliance Operator will not be able to reuse the volume for subsequent scans. 2 The Compliance Operator keeps results of three subsequent scans in the volume; older scans are rotated. 3 The Compliance Operator will allocate one GB of storage for the scan results. 4 The scansetting.rawResultStorage.storageClassName field specifies the storageClassName value to use when creating the PersistentVolumeClaim object to store the raw results. The default value is null, which will attempt to use the default storage class configured in the cluster. If there is no default class specified, then you must set a default class. 5 6 If the scan setting uses any profiles that scan cluster nodes, scan these node roles. 7 The default scan setting object scans all the nodes. 8 The default scan setting object runs scans at 01:00 each day. 
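If the default 01:00 schedule does not fit your maintenance window, you can adjust the default ScanSetting object in place. The following is a minimal sketch rather than a prescribed procedure: the cron value is only an illustration, and it assumes the top-level schedule field shown in the ScanSetting examples later in this section.

$ oc -n openshift-compliance patch scansettings default \
    --type merge -p '{"schedule": "0 2 * * 6"}'   # illustrative value: run weekly at 02:00 on Saturday

Re-run oc describe scansettings default -n openshift-compliance afterward to confirm that the Schedule field reflects the change.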
As an alternative to the default scan setting, you can use default-auto-apply , which has the following settings: Name: default-auto-apply Namespace: openshift-compliance Labels: <none> Annotations: <none> API Version: compliance.openshift.io/v1alpha1 Auto Apply Remediations: true 1 Auto Update Remediations: true 2 Kind: ScanSetting Metadata: Creation Timestamp: 2022-10-18T20:21:00Z Generation: 1 Managed Fields: API Version: compliance.openshift.io/v1alpha1 Fields Type: FieldsV1 fieldsV1: f:autoApplyRemediations: f:autoUpdateRemediations: f:rawResultStorage: .: f:nodeSelector: .: f:node-role.kubernetes.io/master: f:pvAccessModes: f:rotation: f:size: f:tolerations: f:roles: f:scanTolerations: f:schedule: f:showNotApplicable: f:strictNodeScan: Manager: compliance-operator Operation: Update Time: 2022-10-18T20:21:00Z Resource Version: 38840 UID: 8cb0967d-05e0-4d7a-ac1c-08a7f7e89e84 Raw Result Storage: Node Selector: node-role.kubernetes.io/master: Pv Access Modes: ReadWriteOnce Rotation: 3 Size: 1Gi Tolerations: Effect: NoSchedule Key: node-role.kubernetes.io/master Operator: Exists Effect: NoExecute Key: node.kubernetes.io/not-ready Operator: Exists Toleration Seconds: 300 Effect: NoExecute Key: node.kubernetes.io/unreachable Operator: Exists Toleration Seconds: 300 Effect: NoSchedule Key: node.kubernetes.io/memory-pressure Operator: Exists Roles: master worker Scan Tolerations: Operator: Exists Schedule: 0 1 * * * Show Not Applicable: false Strict Node Scan: true Events: <none> 1 2 Setting the autoUpdateRemediations and autoApplyRemediations flags to true allows you to easily create ScanSetting objects that auto-remediate without extra steps. Create a ScanSettingBinding object that binds to the default ScanSetting object and scans the cluster using the cis and cis-node profiles. For example: apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: cis-compliance namespace: openshift-compliance profiles: - name: ocp4-cis-node kind: Profile apiGroup: compliance.openshift.io/v1alpha1 - name: ocp4-cis kind: Profile apiGroup: compliance.openshift.io/v1alpha1 settingsRef: name: default kind: ScanSetting apiGroup: compliance.openshift.io/v1alpha1 Create the ScanSettingBinding object by running: USD oc create -f <file-name>.yaml -n openshift-compliance At this point in the process, the ScanSettingBinding object is reconciled and, based on the Binding and the Bound settings, the Compliance Operator creates a ComplianceSuite object and the associated ComplianceScan objects. Follow the compliance scan progress by running: USD oc get compliancescan -w -n openshift-compliance The scans progress through the scanning phases and eventually reach the DONE phase when complete. In most cases, the result of the scan is NON-COMPLIANT . You can review the scan results and start applying remediations to make the cluster compliant. See Managing Compliance Operator remediation for more information. 5.6.2.2. Setting custom storage size for results While the custom resources such as ComplianceCheckResult represent an aggregated result of one check across all scanned nodes, it can be useful to review the raw results as produced by the scanner. The raw results are produced in the ARF format and can be large (tens of megabytes per node), so it is impractical to store them in a Kubernetes resource backed by the etcd key-value store. Instead, every scan creates a persistent volume (PV) that defaults to 1 GB in size. Depending on your environment, you may want to increase the PV size accordingly. 
This is done using the rawResultStorage.size attribute that is exposed in both the ScanSetting and ComplianceScan resources. A related parameter is rawResultStorage.rotation which controls how many scans are retained in the PV before the older scans are rotated. The default value is 3, setting the rotation policy to 0 disables the rotation. Given the default rotation policy and an estimate of 100MB per a raw ARF scan report, you can calculate the right PV size for your environment. 5.6.2.2.1. Using custom result storage values Because OpenShift Container Platform can be deployed in a variety of public clouds or bare metal, the Compliance Operator cannot determine available storage configurations. By default, the Compliance Operator will try to create the PV for storing results using the default storage class of the cluster, but a custom storage class can be configured using the rawResultStorage.StorageClassName attribute. Important If your cluster does not specify a default storage class, this attribute must be set. Configure the ScanSetting custom resource to use a standard storage class and create persistent volumes that are 10GB in size and keep the last 10 results: Example ScanSetting CR apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: storageClassName: standard rotation: 10 size: 10Gi roles: - worker - master scanTolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists schedule: '0 1 * * *' 5.6.2.3. Scheduling the result server pod on a worker node The result server pod mounts the persistent volume (PV) that stores the raw Asset Reporting Format (ARF) scan results. The nodeSelector and tolerations attributes enable you to configure the location of the result server pod. This is helpful for those environments where control plane nodes are not permitted to mount persistent volumes. Procedure Create a ScanSetting custom resource (CR) for the Compliance Operator: Define the ScanSetting CR, and save the YAML file, for example, rs-workers.yaml : apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: rs-on-workers namespace: openshift-compliance rawResultStorage: nodeSelector: node-role.kubernetes.io/worker: "" 1 pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - operator: Exists 2 roles: - worker - master scanTolerations: - operator: Exists schedule: 0 1 * * * 1 The Compliance Operator uses this node to store scan results in ARF format. 2 The result server pod tolerates all taints. To create the ScanSetting CR, run the following command: USD oc create -f rs-workers.yaml Verification To verify that the ScanSetting object is created, run the following command: USD oc get scansettings rs-on-workers -n openshift-compliance -o yaml Example output apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: creationTimestamp: "2021-11-19T19:36:36Z" generation: 1 name: rs-on-workers namespace: openshift-compliance resourceVersion: "48305" uid: 43fdfc5f-15a7-445a-8bbc-0e4a160cd46e rawResultStorage: nodeSelector: node-role.kubernetes.io/worker: "" pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - operator: Exists roles: - worker - master scanTolerations: - operator: Exists schedule: 0 1 * * * strictNodeScan: true 5.6.2.4. ScanSetting Custom Resource The ScanSetting Custom Resource now allows you to override the default CPU and memory limits of scanner pods through the scan limits attribute. 
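As a rough illustration of the scan limits attribute described above, the following ScanSetting sketch raises the scanner pod limits. Treat this as an assumption-laden example rather than a definitive reference: the scanLimits field name and the resource values are assumptions for illustration, and the remaining fields simply mirror the ScanSetting examples earlier in this section.

Example ScanSetting CR with increased scanner limits (illustrative)

apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSetting
metadata:
  name: default
  namespace: openshift-compliance
scanLimits:        # assumed name of the scan limits attribute described in the text above
  memory: 1024Mi   # illustrative memory limit for the scanner container
  cpu: 500m        # illustrative CPU limit for the scanner container
rawResultStorage:
  rotation: 3
  size: 1Gi
roles:
- worker
- master
scanTolerations:
- operator: Exists
schedule: '0 1 * * *'

Note that this only affects the scanner pods created for each scan; the limits of the Compliance Operator deployment itself are set through the Subscription object or the Operator deployment, as described below.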
The Compliance Operator will use defaults of 500Mi memory, 100m CPU for the scanner container, and 200Mi memory with 100m CPU for the api-resource-collector container. To set the memory limits of the Operator, modify the Subscription object if installed through OLM or the Operator deployment itself. To increase the default CPU and memory limits of the Compliance Operator, see Increasing Compliance Operator resource limits . Important Increasing the memory limit for the Compliance Operator or the scanner pods is needed if the default limits are not sufficient and the Operator or scanner pods are terminated by the kernel's Out Of Memory (OOM) killer. 5.6.2.5. Configuring the hosted control planes management cluster If you are hosting your own Hosted control plane or Hypershift environment and want to scan a Hosted Cluster from the management cluster, you must set the name and namespace prefix for the target Hosted Cluster. You can achieve this by creating a TailoredProfile . Important This procedure only applies to users managing their own hosted control planes environment. Note Only ocp4-cis and ocp4-pci-dss profiles are supported in hosted control planes management clusters. Prerequisites The Compliance Operator is installed in the management cluster. Procedure Obtain the name and namespace of the hosted cluster to be scanned by running the following command: USD oc get hostedcluster -A Example output NAMESPACE NAME VERSION KUBECONFIG PROGRESS AVAILABLE PROGRESSING MESSAGE local-cluster 79136a1bdb84b3c13217 4.13.5 79136a1bdb84b3c13217-admin-kubeconfig Completed True False The hosted control plane is available In the management cluster, create a TailoredProfile extending the scan Profile and define the name and namespace of the Hosted Cluster to be scanned: Example management-tailoredprofile.yaml apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: hypershift-cisk57aw88gry namespace: openshift-compliance spec: description: This profile tests required rules extends: ocp4-cis 1 title: Management namespace profile setValues: - name: ocp4-hypershift-cluster rationale: This value is used for HyperShift version detection value: 79136a1bdb84b3c13217 2 - name: ocp4-hypershift-namespace-prefix rationale: This value is used for HyperShift control plane namespace detection value: local-cluster 3 1 Variable. Only ocp4-cis and ocp4-pci-dss profiles are supported in hosted control planes management clusters. 2 The value is the NAME from the output in the previous step. 3 The value is the NAMESPACE from the output in the previous step. Create the TailoredProfile : USD oc create -n openshift-compliance -f management-tailoredprofile.yaml 5.6.2.6. Applying resource requests and limits When the kubelet starts a container as part of a Pod, the kubelet passes that container's requests and limits for memory and CPU to the container runtime. In Linux, the container runtime configures the kernel cgroups that apply and enforce the limits you defined. The CPU limit defines how much CPU time the container can use. During each scheduling interval, the Linux kernel checks to see if this limit is exceeded. If so, the kernel waits before allowing the cgroup to resume execution. If several different containers (cgroups) want to run on a contended system, workloads with larger CPU requests are allocated more CPU time than workloads with small requests. The memory request is used during Pod scheduling. On a node that uses cgroups v2, the container runtime might use the memory request as a hint to set memory.min and memory.low values. 
If a container attempts to allocate more memory than this limit, the Linux kernel out-of-memory subsystem activates and intervenes by stopping one of the processes in the container that tried to allocate memory. The memory limit for the Pod or container can also apply to pages in memory-backed volumes, such as an emptyDir. The kubelet tracks tmpfs emptyDir volumes as container memory use, rather than as local ephemeral storage. If a container exceeds its memory request and the node that it runs on becomes short of memory overall, the Pod's container might be evicted. Important A container might not be allowed to exceed its CPU limit for extended periods. Container runtimes do not stop Pods or containers for excessive CPU usage. To determine whether a container cannot be scheduled or is being killed due to resource limits, see Troubleshooting the Compliance Operator . 5.6.2.7. Scheduling Pods with container resource requests When a Pod is created, the scheduler selects a Node for the Pod to run on. Each node has a maximum capacity for each resource type in the amount of CPU and memory it can provide for the Pods. The scheduler ensures that the sum of the resource requests of the scheduled containers is less than the capacity of the node for each resource type. Even if actual memory or CPU resource usage on nodes is very low, the scheduler might still refuse to place a Pod on a node if the capacity check fails, which protects against a resource shortage on a node. For each container, you can specify the following resource limits and requests: spec.containers[].resources.limits.cpu spec.containers[].resources.limits.memory spec.containers[].resources.limits.hugepages-<size> spec.containers[].resources.requests.cpu spec.containers[].resources.requests.memory spec.containers[].resources.requests.hugepages-<size> Although you can specify requests and limits for only individual containers, it is also useful to consider the overall resource requests and limits for a pod. For a particular resource, a container resource request or limit is the sum of the resource requests or limits of that type for each container in the pod. Example container resource requests and limits apiVersion: v1 kind: Pod metadata: name: frontend spec: containers: - name: app image: images.my-company.example/app:v4 resources: requests: 1 memory: "64Mi" cpu: "250m" limits: 2 memory: "128Mi" cpu: "500m" - name: log-aggregator image: images.my-company.example/log-aggregator:v6 resources: requests: memory: "64Mi" cpu: "250m" limits: memory: "128Mi" cpu: "500m" 1 The container is requesting 64 Mi of memory and 250 m CPU. 2 The container's limits are 128 Mi of memory and 500 m CPU. 5.6.3. Tailoring the Compliance Operator While the Compliance Operator comes with ready-to-use profiles, they must be modified to fit the organizations' needs and requirements. The process of modifying a profile is called tailoring . The Compliance Operator provides the TailoredProfile object to help tailor profiles. 5.6.3.1. Creating a new tailored profile You can write a tailored profile from scratch by using the TailoredProfile object. Set an appropriate title and description and leave the extends field empty. Indicate to the Compliance Operator what type of scan this custom profile will generate: Node scan: Scans the Operating System. Platform scan: Scans the OpenShift Container Platform configuration. 
Procedure Set the following annotation on the TailoredProfile object: Example new-profile.yaml apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: new-profile annotations: compliance.openshift.io/product-type: Node 1 spec: extends: ocp4-cis-node 2 description: My custom profile 3 title: Custom profile 4 enableRules: - name: ocp4-etcd-unique-ca rationale: We really need to enable this disableRules: - name: ocp4-file-groupowner-cni-conf rationale: This does not apply to the cluster 1 Set Node or Platform accordingly. 2 The extends field is optional. 3 Use the description field to describe the function of the new TailoredProfile object. 4 Give your TailoredProfile object a title with the title field. Note Adding the -node suffix to the name field of the TailoredProfile object is similar to adding the Node product type annotation and generates an Operating System scan. 5.6.3.2. Using tailored profiles to extend existing ProfileBundles While the TailoredProfile CR enables the most common tailoring operations, the XCCDF standard allows even more flexibility in tailoring OpenSCAP profiles. In addition, if your organization has been using OpenScap previously, you may have an existing XCCDF tailoring file and can reuse it. The ComplianceSuite object contains an optional TailoringConfigMap attribute that you can point to a custom tailoring file. The value of the TailoringConfigMap attribute is a name of a config map, which must contain a key called tailoring.xml and the value of this key is the tailoring contents. Procedure Browse the available rules for the Red Hat Enterprise Linux CoreOS (RHCOS) ProfileBundle : USD oc get rules.compliance -n openshift-compliance -l compliance.openshift.io/profile-bundle=rhcos4 Browse the available variables in the same ProfileBundle : USD oc get variables.compliance -n openshift-compliance -l compliance.openshift.io/profile-bundle=rhcos4 Create a tailored profile named nist-moderate-modified : Choose which rules you want to add to the nist-moderate-modified tailored profile. This example extends the rhcos4-moderate profile by disabling two rules and changing one value. Use the rationale value to describe why these changes were made: Example new-profile-node.yaml apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: nist-moderate-modified spec: extends: rhcos4-moderate description: NIST moderate profile title: My modified NIST moderate profile disableRules: - name: rhcos4-file-permissions-var-log-messages rationale: The file contains logs of error messages in the system - name: rhcos4-account-disable-post-pw-expiration rationale: No need to check this as it comes from the IdP setValues: - name: rhcos4-var-selinux-state rationale: Organizational requirements value: permissive Table 5.9. Attributes for spec variables Attribute Description extends Name of the Profile object upon which this TailoredProfile is built. title Human-readable title of the TailoredProfile . disableRules A list of name and rationale pairs. Each name refers to a name of a rule object that is to be disabled. The rationale value is human-readable text describing why the rule is disabled. manualRules A list of name and rationale pairs. When a manual rule is added, the check result status will always be manual and remediation will not be generated. This attribute is automatic and by default has no values when set as a manual rule. enableRules A list of name and rationale pairs. 
Each name refers to a name of a rule object that is to be enabled. The rationale value is human-readable text describing why the rule is enabled. description Human-readable text describing the TailoredProfile . setValues A list of name, rationale, and value groupings. Each name refers to a name of the value set. The rationale is human-readable text describing the set. The value is the actual setting. Add the tailoredProfile.spec.manualRules attribute: Example tailoredProfile.spec.manualRules.yaml apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: ocp4-manual-scc-check spec: extends: ocp4-cis description: This profile extends ocp4-cis by forcing the SCC check to always return MANUAL title: OCP4 CIS profile with manual SCC check manualRules: - name: ocp4-scc-limit-container-allowed-capabilities rationale: We use third party software that installs its own SCC with extra privileges Create the TailoredProfile object: USD oc create -n openshift-compliance -f new-profile-node.yaml 1 1 The TailoredProfile object is created in the default openshift-compliance namespace. Example output tailoredprofile.compliance.openshift.io/nist-moderate-modified created Define the ScanSettingBinding object to bind the new nist-moderate-modified tailored profile to the default ScanSetting object. Example new-scansettingbinding.yaml apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: nist-moderate-modified profiles: - apiGroup: compliance.openshift.io/v1alpha1 kind: Profile name: ocp4-moderate - apiGroup: compliance.openshift.io/v1alpha1 kind: TailoredProfile name: nist-moderate-modified settingsRef: apiGroup: compliance.openshift.io/v1alpha1 kind: ScanSetting name: default Create the ScanSettingBinding object: USD oc create -n openshift-compliance -f new-scansettingbinding.yaml Example output scansettingbinding.compliance.openshift.io/nist-moderate-modified created 5.6.4. Retrieving Compliance Operator raw results When proving compliance for your OpenShift Container Platform cluster, you might need to provide the scan results for auditing purposes. 5.6.4.1. Obtaining Compliance Operator raw results from a persistent volume Procedure The Compliance Operator generates and stores the raw results in a persistent volume. These results are in Asset Reporting Format (ARF). Explore the ComplianceSuite object: USD oc get compliancesuites nist-moderate-modified \ -o json -n openshift-compliance | jq '.status.scanStatuses[].resultsStorage' Example output { "name": "ocp4-moderate", "namespace": "openshift-compliance" } { "name": "nist-moderate-modified-master", "namespace": "openshift-compliance" } { "name": "nist-moderate-modified-worker", "namespace": "openshift-compliance" } This shows the persistent volume claims where the raw results are accessible. 
Verify the raw data location by using the name and namespace of one of the results: USD oc get pvc -n openshift-compliance rhcos4-moderate-worker Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE rhcos4-moderate-worker Bound pvc-548f6cfe-164b-42fe-ba13-a07cfbc77f3a 1Gi RWO gp2 92m Fetch the raw results by spawning a pod that mounts the volume and copying the results: USD oc create -n openshift-compliance -f pod.yaml Example pod.yaml apiVersion: "v1" kind: Pod metadata: name: pv-extract spec: containers: - name: pv-extract-pod image: registry.access.redhat.com/ubi9/ubi command: ["sleep", "3000"] volumeMounts: - mountPath: "/workers-scan-results" name: workers-scan-vol volumes: - name: workers-scan-vol persistentVolumeClaim: claimName: rhcos4-moderate-worker After the pod is running, download the results: USD oc cp pv-extract:/workers-scan-results -n openshift-compliance . Important Spawning a pod that mounts the persistent volume will keep the claim as Bound . If the volume's storage class in use has permissions set to ReadWriteOnce , the volume is only mountable by one pod at a time. You must delete the pod upon completion, or it will not be possible for the Operator to schedule a pod and continue storing results in this location. After the extraction is complete, the pod can be deleted: USD oc delete pod pv-extract -n openshift-compliance 5.6.5. Managing Compliance Operator result and remediation Each ComplianceCheckResult represents a result of one compliance rule check. If the rule can be remediated automatically, a ComplianceRemediation object with the same name, owned by the ComplianceCheckResult is created. Unless requested, the remediations are not applied automatically, which gives an OpenShift Container Platform administrator the opportunity to review what the remediation does and only apply a remediation once it has been verified. Important Full remediation for Federal Information Processing Standards (FIPS) compliance requires enabling FIPS mode for the cluster. To enable FIPS mode, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . FIPS mode is supported on the following architectures: x86_64 ppc64le s390x 5.6.5.1. Filters for compliance check results By default, the ComplianceCheckResult objects are labeled with several useful labels that allow you to query the checks and decide on the steps after the results are generated. List checks that belong to a specific suite: USD oc get -n openshift-compliance compliancecheckresults \ -l compliance.openshift.io/suite=workers-compliancesuite List checks that belong to a specific scan: USD oc get -n openshift-compliance compliancecheckresults \ -l compliance.openshift.io/scan=workers-scan Not all ComplianceCheckResult objects create ComplianceRemediation objects. Only ComplianceCheckResult objects that can be remediated automatically do. A ComplianceCheckResult object has a related remediation if it is labeled with the compliance.openshift.io/automated-remediation label. The name of the remediation is the same as the name of the check. 
List all failing checks that can be remediated automatically: USD oc get -n openshift-compliance compliancecheckresults \ -l 'compliance.openshift.io/check-status=FAIL,compliance.openshift.io/automated-remediation' List all failing checks sorted by severity: USD oc get compliancecheckresults -n openshift-compliance \ -l 'compliance.openshift.io/check-status=FAIL,compliance.openshift.io/check-severity=high' Example output NAME STATUS SEVERITY nist-moderate-modified-master-configure-crypto-policy FAIL high nist-moderate-modified-master-coreos-pti-kernel-argument FAIL high nist-moderate-modified-master-disable-ctrlaltdel-burstaction FAIL high nist-moderate-modified-master-disable-ctrlaltdel-reboot FAIL high nist-moderate-modified-master-enable-fips-mode FAIL high nist-moderate-modified-master-no-empty-passwords FAIL high nist-moderate-modified-master-selinux-state FAIL high nist-moderate-modified-worker-configure-crypto-policy FAIL high nist-moderate-modified-worker-coreos-pti-kernel-argument FAIL high nist-moderate-modified-worker-disable-ctrlaltdel-burstaction FAIL high nist-moderate-modified-worker-disable-ctrlaltdel-reboot FAIL high nist-moderate-modified-worker-enable-fips-mode FAIL high nist-moderate-modified-worker-no-empty-passwords FAIL high nist-moderate-modified-worker-selinux-state FAIL high ocp4-moderate-configure-network-policies-namespaces FAIL high ocp4-moderate-fips-mode-enabled-on-all-nodes FAIL high List all failing checks that must be remediated manually: USD oc get -n openshift-compliance compliancecheckresults \ -l 'compliance.openshift.io/check-status=FAIL,!compliance.openshift.io/automated-remediation' The manual remediation steps are typically stored in the description attribute in the ComplianceCheckResult object. Table 5.10. ComplianceCheckResult Status ComplianceCheckResult Status Description PASS Compliance check ran to completion and passed. FAIL Compliance check ran to completion and failed. INFO Compliance check ran to completion and found something not severe enough to be considered an error. MANUAL Compliance check does not have a way to automatically assess the success or failure and must be checked manually. INCONSISTENT Compliance check reports different results from different sources, typically cluster nodes. ERROR Compliance check ran, but could not complete properly. NOT-APPLICABLE Compliance check did not run because it is not applicable or not selected. 5.6.5.2. Reviewing a remediation Review both the ComplianceRemediation object and the ComplianceCheckResult object that owns the remediation. The ComplianceCheckResult object contains human-readable descriptions of what the check does and the hardening trying to prevent, as well as other metadata like the severity and the associated security controls. The ComplianceRemediation object represents a way to fix the problem described in the ComplianceCheckResult . After first scan, check for remediations with the state MissingDependencies . Below is an example of a check and a remediation called sysctl-net-ipv4-conf-all-accept-redirects . 
This example is redacted to only show spec and status and omits metadata : spec: apply: false current: object: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/sysctl.d/75-sysctl_net_ipv4_conf_all_accept_redirects.conf mode: 0644 contents: source: data:,net.ipv4.conf.all.accept_redirects%3D0 outdated: {} status: applicationState: NotApplied The remediation payload is stored in the spec.current attribute. The payload can be any Kubernetes object, but because this remediation was produced by a node scan, the remediation payload in the above example is a MachineConfig object. For Platform scans, the remediation payload is often a different kind of an object (for example, a ConfigMap or Secret object), but typically applying that remediation is up to the administrator, because otherwise the Compliance Operator would have required a very broad set of permissions to manipulate any generic Kubernetes object. An example of remediating a Platform check is provided later in the text. To see exactly what the remediation does when applied, the MachineConfig object contents use the Ignition objects for the configuration. See the Ignition specification for further information about the format. In our example, the spec.config.storage.files[0].path attribute specifies the file that is being create by this remediation ( /etc/sysctl.d/75-sysctl_net_ipv4_conf_all_accept_redirects.conf ) and the spec.config.storage.files[0].contents.source attribute specifies the contents of that file. Note The contents of the files are URL-encoded. Use the following Python script to view the contents: USD echo "net.ipv4.conf.all.accept_redirects%3D0" | python3 -c "import sys, urllib.parse; print(urllib.parse.unquote(''.join(sys.stdin.readlines())))" Example output net.ipv4.conf.all.accept_redirects=0 Important The Compliance Operator does not automatically resolve dependency issues that can occur between remediations. Users should perform a rescan after remediations are applied to ensure accurate results. 5.6.5.3. Applying remediation when using customized machine config pools When you create a custom MachineConfigPool , add a label to the MachineConfigPool so that machineConfigPoolSelector present in the KubeletConfig can match the label with MachineConfigPool . Important Do not set protectKernelDefaults: false in the KubeletConfig file, because the MachineConfigPool object might fail to unpause unexpectedly after the Compliance Operator finishes applying remediation. Procedure List the nodes. USD oc get nodes -n openshift-compliance Example output NAME STATUS ROLES AGE VERSION ip-10-0-128-92.us-east-2.compute.internal Ready master 5h21m v1.27.3 ip-10-0-158-32.us-east-2.compute.internal Ready worker 5h17m v1.27.3 ip-10-0-166-81.us-east-2.compute.internal Ready worker 5h17m v1.27.3 ip-10-0-171-170.us-east-2.compute.internal Ready master 5h21m v1.27.3 ip-10-0-197-35.us-east-2.compute.internal Ready master 5h22m v1.27.3 Add a label to nodes. USD oc -n openshift-compliance \ label node ip-10-0-166-81.us-east-2.compute.internal \ node-role.kubernetes.io/<machine_config_pool_name>= Example output node/ip-10-0-166-81.us-east-2.compute.internal labeled Create custom MachineConfigPool CR. 
apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: <machine_config_pool_name> labels: pools.operator.machineconfiguration.openshift.io/<machine_config_pool_name>: '' 1 spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,<machine_config_pool_name>]} nodeSelector: matchLabels: node-role.kubernetes.io/<machine_config_pool_name>: "" 1 The labels field defines label name to add for Machine config pool(MCP). Verify MCP created successfully. USD oc get mcp -w 5.6.5.4. Evaluating KubeletConfig rules against default configuration values OpenShift Container Platform infrastructure might contain incomplete configuration files at run time, and nodes assume default configuration values for missing configuration options. Some configuration options can be passed as command line arguments. As a result, the Compliance Operator cannot verify if the configuration file on the node is complete because it might be missing options used in the rule checks. To prevent false negative results where the default configuration value passes a check, the Compliance Operator uses the Node/Proxy API to fetch the configuration for each node in a node pool, then all configuration options that are consistent across nodes in the node pool are stored in a file that represents the configuration for all nodes within that node pool. This increases the accuracy of the scan results. No additional configuration changes are required to use this feature with default master and worker node pools configurations. 5.6.5.5. Scanning custom node pools The Compliance Operator does not maintain a copy of each node pool configuration. The Compliance Operator aggregates consistent configuration options for all nodes within a single node pool into one copy of the configuration file. The Compliance Operator then uses the configuration file for a particular node pool to evaluate rules against nodes within that pool. Procedure Add the example role to the ScanSetting object that will be stored in the ScanSettingBinding CR: apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: rotation: 3 size: 1Gi roles: - worker - master - example scanTolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists schedule: '0 1 * * *' Create a scan that uses the ScanSettingBinding CR: apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: cis namespace: openshift-compliance profiles: - apiGroup: compliance.openshift.io/v1alpha1 kind: Profile name: ocp4-cis - apiGroup: compliance.openshift.io/v1alpha1 kind: Profile name: ocp4-cis-node settingsRef: apiGroup: compliance.openshift.io/v1alpha1 kind: ScanSetting name: default Verification The Platform KubeletConfig rules are checked through the Node/Proxy object. You can find those rules by running the following command: USD oc get rules -o json | jq '.items[] | select(.checkType == "Platform") | select(.metadata.name | contains("ocp4-kubelet-")) | .metadata.name' 5.6.5.6. Remediating KubeletConfig sub pools KubeletConfig remediation labels can be applied to MachineConfigPool sub-pools. Procedure Add a label to the sub-pool MachineConfigPool CR: USD oc label mcp <sub-pool-name> pools.operator.machineconfiguration.openshift.io/<sub-pool-name>= 5.6.5.7. 
Applying a remediation The boolean attribute spec.apply controls whether the remediation should be applied by the Compliance Operator. You can apply the remediation by setting the attribute to true : USD oc -n openshift-compliance \ patch complianceremediations/<scan-name>-sysctl-net-ipv4-conf-all-accept-redirects \ --patch '{"spec":{"apply":true}}' --type=merge After the Compliance Operator processes the applied remediation, the status.ApplicationState attribute would change to Applied or to Error if incorrect. When a machine config remediation is applied, that remediation along with all other applied remediations are rendered into a MachineConfig object named 75-USDscan-name-USDsuite-name . That MachineConfig object is subsequently rendered by the Machine Config Operator and finally applied to all the nodes in a machine config pool by an instance of the machine control daemon running on each node. Note that when the Machine Config Operator applies a new MachineConfig object to nodes in a pool, all the nodes belonging to the pool are rebooted. This might be inconvenient when applying multiple remediations, each of which re-renders the composite 75-USDscan-name-USDsuite-name MachineConfig object. To prevent applying the remediation immediately, you can pause the machine config pool by setting the .spec.paused attribute of a MachineConfigPool object to true . The Compliance Operator can apply remediations automatically. Set autoApplyRemediations: true in the ScanSetting top-level object. Warning Applying remediations automatically should only be done with careful consideration. Important The Compliance Operator does not automatically resolve dependency issues that can occur between remediations. Users should perform a rescan after remediations are applied to ensure accurate results. 5.6.5.8. Remediating a platform check manually Checks for Platform scans typically have to be remediated manually by the administrator for two reasons: It is not always possible to automatically determine the value that must be set. One of the checks requires that a list of allowed registries is provided, but the scanner has no way of knowing which registries the organization wants to allow. Different checks modify different API objects, requiring automated remediation to possess root or superuser access to modify objects in the cluster, which is not advised. Procedure The example below uses the ocp4-ocp-allowed-registries-for-import rule, which would fail on a default OpenShift Container Platform installation. 
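Before remediating manually, you can confirm that the corresponding check actually failed in your cluster. The following is a minimal sketch, assuming that ComplianceCheckResult objects expose a top-level status field whose value is FAIL for failed checks, as the ComplianceCheckResult listings later in this document suggest; the exact result name depends on the scan name:

# List the failed checks and narrow the output down to the allowed-registries rule
oc get compliancecheckresults -n openshift-compliance -o json |
  jq -r '.items[] | select(.status=="FAIL") | .metadata.name' |
  grep allowed-registries-for-import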
Inspect the rule by running oc get rule.compliance/ocp4-ocp-allowed-registries-for-import -o yaml . The rule limits the registries that users are allowed to import images from by setting the allowedRegistriesForImport attribute. The warning attribute of the rule also shows the API object that is checked, so you can modify that object to remediate the issue: USD oc edit image.config.openshift.io/cluster Example output apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: "true" creationTimestamp: "2020-09-10T10:12:54Z" generation: 2 name: cluster resourceVersion: "363096" selfLink: /apis/config.openshift.io/v1/images/cluster uid: 2dcb614e-2f8a-4a23-ba9a-8e33cd0ff77e spec: allowedRegistriesForImport: - domainName: registry.redhat.io status: externalRegistryHostnames: - default-route-openshift-image-registry.apps.user-cluster-09-10-12-07.devcluster.openshift.com internalRegistryHostname: image-registry.openshift-image-registry.svc:5000 Re-run the scan: USD oc -n openshift-compliance \ annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan= 5.6.5.9. Updating remediations When a new version of compliance content is used, it might deliver a new and different version of a remediation than the previous version. The Compliance Operator will keep the old version of the remediation applied. The OpenShift Container Platform administrator is also notified of the new version to review and apply. A ComplianceRemediation object that was applied earlier, but whose contents were then updated, changes its status to Outdated . The outdated objects are labeled so that they can be searched for easily. The previously applied remediation contents are then stored in the spec.outdated attribute of a ComplianceRemediation object and the new, updated contents are stored in the spec.current attribute. After updating the content to a newer version, the administrator then needs to review the remediation. As long as the spec.outdated attribute exists, it is used to render the resulting MachineConfig object. After the spec.outdated attribute is removed, the Compliance Operator re-renders the resulting MachineConfig object, which causes the Operator to push the configuration to the nodes. Procedure Search for any outdated remediations: USD oc -n openshift-compliance get complianceremediations \ -l complianceoperator.openshift.io/outdated-remediation= Example output NAME STATE workers-scan-no-empty-passwords Outdated The currently applied remediation is stored in the Outdated attribute and the new, unapplied remediation is stored in the Current attribute. If you are satisfied with the new version, remove the Outdated field. If you want to keep the updated content, remove the Current and Outdated attributes. Apply the newer version of the remediation: USD oc -n openshift-compliance patch complianceremediations workers-scan-no-empty-passwords \ --type json -p '[{"op":"remove", "path":"/spec/outdated"}]' The remediation state will switch from Outdated to Applied : USD oc get -n openshift-compliance complianceremediations workers-scan-no-empty-passwords Example output NAME STATE workers-scan-no-empty-passwords Applied The nodes will apply the newer remediation version and reboot. Important The Compliance Operator does not automatically resolve dependency issues that can occur between remediations. Users should perform a rescan after remediations are applied to ensure accurate results. 5.6.5.10. Unapplying a remediation It might be required to unapply a remediation that was previously applied.
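Before unapplying anything, it can help to list which remediations are currently applied. A small sketch using jq and the status.applicationState field shown in the ComplianceRemediation examples above:

# Print the names of remediations whose applicationState is Applied
oc get complianceremediations -n openshift-compliance -o json |
  jq -r '.items[] | select(.status.applicationState=="Applied") | .metadata.name'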
Procedure Set the apply flag to false : USD oc -n openshift-compliance \ patch complianceremediations/rhcos4-moderate-worker-sysctl-net-ipv4-conf-all-accept-redirects \ --patch '{"spec":{"apply":false}}' --type=merge The remediation status will change to NotApplied and the composite MachineConfig object would be re-rendered to not include the remediation. Important All affected nodes with the remediation will be rebooted. Important The Compliance Operator does not automatically resolve dependency issues that can occur between remediations. Users should perform a rescan after remediations are applied to ensure accurate results. 5.6.5.11. Removing a KubeletConfig remediation KubeletConfig remediations are included in node-level profiles. In order to remove a KubeletConfig remediation, you must manually remove it from the KubeletConfig objects. This example demonstrates how to remove the compliance check for the one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available remediation. Procedure Locate the scan-name and compliance check for the one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available remediation: USD oc -n openshift-compliance get remediation \ one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available -o yaml Example output apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceRemediation metadata: annotations: compliance.openshift.io/xccdf-value-used: var-kubelet-evictionhard-imagefs-available creationTimestamp: "2022-01-05T19:52:27Z" generation: 1 labels: compliance.openshift.io/scan-name: one-rule-tp-node-master 1 compliance.openshift.io/suite: one-rule-ssb-node name: one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ComplianceCheckResult name: one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available uid: fe8e1577-9060-4c59-95b2-3e2c51709adc resourceVersion: "84820" uid: 5339d21a-24d7-40cb-84d2-7a2ebb015355 spec: apply: true current: object: apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig spec: kubeletConfig: evictionHard: imagefs.available: 10% 2 outdated: {} type: Configuration status: applicationState: Applied 1 The scan name of the remediation. 2 The remediation that was added to the KubeletConfig objects. Note If the remediation invokes an evictionHard kubelet configuration, you must specify all of the evictionHard parameters: memory.available , nodefs.available , nodefs.inodesFree , imagefs.available , and imagefs.inodesFree . If you do not specify all parameters, only the specified parameters are applied and the remediation will not function properly. 
Remove the remediation: Set apply to false for the remediation object: USD oc -n openshift-compliance patch \ complianceremediations/one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available \ -p '{"spec":{"apply":false}}' --type=merge Using the scan-name , find the KubeletConfig object that the remediation was applied to: USD oc -n openshift-compliance get kubeletconfig \ --selector compliance.openshift.io/scan-name=one-rule-tp-node-master Example output NAME AGE compliance-operator-kubelet-master 2m34s Manually remove the remediation, imagefs.available: 10% , from the KubeletConfig object: USD oc edit -n openshift-compliance KubeletConfig compliance-operator-kubelet-master Important All affected nodes with the remediation will be rebooted. Note You must also exclude the rule from any scheduled scans in your tailored profiles that auto-apply the remediation; otherwise, the remediation will be re-applied during the next scheduled scan. 5.6.5.12. Inconsistent ComplianceScan The ScanSetting object lists the node roles that the compliance scans generated from the ScanSetting or ScanSettingBinding objects would scan. Each node role usually maps to a machine config pool. Important It is expected that all machines in a machine config pool are identical and all scan results from the nodes in a pool should be identical. If some of the results are different from others, the Compliance Operator flags a ComplianceCheckResult object where some of the nodes will report as INCONSISTENT . These ComplianceCheckResult objects are also labeled with compliance.openshift.io/inconsistent-check . Because the number of machines in a pool might be quite large, the Compliance Operator attempts to find the most common state and list the nodes that differ from the common state. The most common state is stored in the compliance.openshift.io/most-common-status annotation and the annotation compliance.openshift.io/inconsistent-source contains pairs of hostname:status of check statuses that differ from the most common status. If no common state can be found, all the hostname:status pairs are listed in the compliance.openshift.io/inconsistent-source annotation . If possible, a remediation is still created so that the cluster can converge to a compliant status. However, this might not always be possible and correcting the difference between nodes must be done manually. To get a consistent result, the compliance scan must be re-run by annotating the scan with the compliance.openshift.io/rescan= option: USD oc -n openshift-compliance \ annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan= 5.6.5.13. Additional resources Modifying nodes . 5.6.6. Performing advanced Compliance Operator tasks The Compliance Operator includes options for advanced users for the purpose of debugging or integration with existing tooling. 5.6.6.1. Using the ComplianceSuite and ComplianceScan objects directly While it is recommended that users take advantage of the ScanSetting and ScanSettingBinding objects to define the suites and scans, there are valid use cases to define the ComplianceSuite objects directly: Specifying only a single rule to scan. This can be useful for debugging together with the debug: true attribute, which increases the OpenSCAP scanner verbosity, as the debug mode tends to get quite verbose otherwise. Limiting the test to one rule helps to lower the amount of debug information. Providing a custom nodeSelector. In order for a remediation to be applicable, the nodeSelector must match a pool.
Pointing the Scan to a bespoke config map with a tailoring file. For testing or development when the overhead of parsing profiles from bundles is not required. The following example shows a ComplianceSuite that scans the worker machines with only a single rule: apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceSuite metadata: name: workers-compliancesuite spec: scans: - name: workers-scan profile: xccdf_org.ssgproject.content_profile_moderate content: ssg-rhcos4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc... debug: true rule: xccdf_org.ssgproject.content_rule_no_direct_root_logins nodeSelector: node-role.kubernetes.io/worker: "" The ComplianceSuite object and the ComplianceScan objects referred to above specify several attributes in a format that OpenSCAP expects. To find out the profile, content, or rule values, you can start by creating a similar Suite from ScanSetting and ScanSettingBinding or inspect the objects parsed from the ProfileBundle objects like rules or profiles. Those objects contain the xccdf_org identifiers you can use to refer to them from a ComplianceSuite . 5.6.6.2. Setting PriorityClass for ScanSetting scans In large scale environments, the default PriorityClass object can be too low to guarantee Pods execute scans on time. For clusters that must maintain compliance or guarantee automated scanning, it is recommended to set the PriorityClass variable to ensure the Compliance Operator is always given priority in resource constrained situations. Procedure Set the PriorityClass variable: apiVersion: compliance.openshift.io/v1alpha1 strictNodeScan: true metadata: name: default namespace: openshift-compliance priorityClass: compliance-high-priority 1 kind: ScanSetting showNotApplicable: false rawResultStorage: nodeSelector: node-role.kubernetes.io/master: '' pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists schedule: 0 1 * * * roles: - master - worker scanTolerations: - operator: Exists 1 If the PriorityClass referenced in the ScanSetting cannot be found, the Operator will leave the PriorityClass empty, issue a warning, and continue scheduling scans without a PriorityClass . 5.6.6.3. Using raw tailored profiles While the TailoredProfile CR enables the most common tailoring operations, the XCCDF standard allows even more flexibility in tailoring OpenSCAP profiles. In addition, if your organization has been using OpenScap previously, you may have an existing XCCDF tailoring file and can reuse it. The ComplianceSuite object contains an optional TailoringConfigMap attribute that you can point to a custom tailoring file. The value of the TailoringConfigMap attribute is a name of a config map which must contain a key called tailoring.xml and the value of this key is the tailoring contents. 
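If you already have such a config map, you can confirm that it exposes the required tailoring.xml key before referencing it from a scan. A minimal check using jq; the config map name matches the one created in the following procedure:

# Print the data keys of the config map; the output must include tailoring.xml
oc -n openshift-compliance get configmap nist-moderate-modified -o json |
  jq -r '.data | keys[]'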
Procedure Create the ConfigMap object from a file: USD oc -n openshift-compliance \ create configmap nist-moderate-modified \ --from-file=tailoring.xml=/path/to/the/tailoringFile.xml Reference the tailoring file in a scan that belongs to a suite: apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceSuite metadata: name: workers-compliancesuite spec: debug: true scans: - name: workers-scan profile: xccdf_org.ssgproject.content_profile_moderate content: ssg-rhcos4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc... debug: true tailoringConfigMap: name: nist-moderate-modified nodeSelector: node-role.kubernetes.io/worker: "" 5.6.6.4. Performing a rescan Typically, you will want to re-run a scan on a defined schedule, like every Monday or daily. It can also be useful to re-run a scan once after fixing a problem on a node. To perform a single scan, annotate the scan with the compliance.openshift.io/rescan= option: USD oc -n openshift-compliance \ annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan= A rescan generates four additional MachineConfig ( mc ) objects for the rhcos-moderate profile: USD oc get mc Example output 75-worker-scan-chronyd-or-ntpd-specify-remote-server 75-worker-scan-configure-usbguard-auditbackend 75-worker-scan-service-usbguard-enabled 75-worker-scan-usbguard-allow-hid-and-hub Important When the scan setting default-auto-apply label is applied, remediations are applied automatically and outdated remediations automatically update. If there are remediations that were not applied due to dependencies, or remediations that had been outdated, rescanning applies the remediations and might trigger a reboot. Only remediations that use MachineConfig objects trigger reboots. If there are no updates or dependencies to be applied, no reboot occurs. 5.6.6.5. Setting custom storage size for results While custom resources such as ComplianceCheckResult represent an aggregated result of one check across all scanned nodes, it can be useful to review the raw results as produced by the scanner. The raw results are produced in the ARF format and can be large (tens of megabytes per node), so it is impractical to store them in a Kubernetes resource backed by the etcd key-value store. Instead, every scan creates a persistent volume (PV), which defaults to 1GB in size. Depending on your environment, you may want to increase the PV size accordingly. This is done using the rawResultStorage.size attribute that is exposed in both the ScanSetting and ComplianceScan resources. A related parameter is rawResultStorage.rotation , which controls how many scans are retained in the PV before the older scans are rotated. The default value is 3; setting the rotation policy to 0 disables the rotation. Given the default rotation policy and an estimate of 100MB per raw ARF scan report, you can calculate the right PV size for your environment. 5.6.6.5.1. Using custom result storage values Because OpenShift Container Platform can be deployed in a variety of public clouds or on bare metal, the Compliance Operator cannot determine available storage configurations. By default, the Compliance Operator will try to create the PV for storing results using the default storage class of the cluster, but a custom storage class can be configured using the rawResultStorage.StorageClassName attribute. Important If your cluster does not specify a default storage class, this attribute must be set.
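To decide whether you need to set this attribute, you can check which storage classes exist in the cluster and whether one of them is marked as the default; for example:

# The default storage class, if any, is shown with (default) next to its name
oc get storageclass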
Configure the ScanSetting custom resource to use a standard storage class and create persistent volumes that are 10GB in size and keep the last 10 results: Example ScanSetting CR apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: storageClassName: standard rotation: 10 size: 10Gi roles: - worker - master scanTolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists schedule: '0 1 * * *' 5.6.6.6. Applying remediations generated by suite scans Although you can use the autoApplyRemediations boolean parameter in a ComplianceSuite object, you can alternatively annotate the object with compliance.openshift.io/apply-remediations . This allows the Operator to apply all of the created remediations. Procedure Apply the compliance.openshift.io/apply-remediations annotation by running: USD oc -n openshift-compliance \ annotate compliancesuites/workers-compliancesuite compliance.openshift.io/apply-remediations= 5.6.6.7. Automatically update remediations In some cases, a scan with newer content might mark remediations as OUTDATED . As an administrator, you can apply the compliance.openshift.io/remove-outdated annotation to apply new remediations and remove the outdated ones. Procedure Apply the compliance.openshift.io/remove-outdated annotation: USD oc -n openshift-compliance \ annotate compliancesuites/workers-compliancesuite compliance.openshift.io/remove-outdated= Alternatively, set the autoUpdateRemediations flag in a ScanSetting or ComplianceSuite object to update the remediations automatically. 5.6.6.8. Creating a custom SCC for the Compliance Operator In some environments, you must create a custom Security Context Constraints (SCC) file to ensure the correct permissions are available to the Compliance Operator api-resource-collector . Prerequisites You must have admin privileges. Procedure Define the SCC in a YAML file named restricted-adjusted-compliance.yaml : SecurityContextConstraints object definition allowHostDirVolumePlugin: false allowHostIPC: false allowHostNetwork: false allowHostPID: false allowHostPorts: false allowPrivilegeEscalation: true allowPrivilegedContainer: false allowedCapabilities: null apiVersion: security.openshift.io/v1 defaultAddCapabilities: null fsGroup: type: MustRunAs kind: SecurityContextConstraints metadata: name: restricted-adjusted-compliance priority: 30 1 readOnlyRootFilesystem: false requiredDropCapabilities: - KILL - SETUID - SETGID - MKNOD runAsUser: type: MustRunAsRange seLinuxContext: type: MustRunAs supplementalGroups: type: RunAsAny users: - system:serviceaccount:openshift-compliance:api-resource-collector 2 volumes: - configMap - downwardAPI - emptyDir - persistentVolumeClaim - projected - secret 1 The priority of this SCC must be higher than any other SCC that applies to the system:authenticated group. 2 Service Account used by Compliance Operator Scanner pod. 
Create the SCC: USD oc create -n openshift-compliance -f restricted-adjusted-compliance.yaml Example output securitycontextconstraints.security.openshift.io/restricted-adjusted-compliance created Verification Verify the SCC was created: USD oc get -n openshift-compliance scc restricted-adjusted-compliance Example output NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES restricted-adjusted-compliance false <no value> MustRunAs MustRunAsRange MustRunAs RunAsAny 30 false ["configMap","downwardAPI","emptyDir","persistentVolumeClaim","projected","secret"] 5.6.6.9. Additional resources Managing security context constraints 5.6.7. Troubleshooting Compliance Operator scans This section describes how to troubleshoot the Compliance Operator. The information can be useful either to diagnose a problem or provide information in a bug report. Some general tips: The Compliance Operator emits Kubernetes events when something important happens. You can either view all events in the cluster using the command: USD oc get events -n openshift-compliance Or view events for an object like a scan using the command: USD oc describe -n openshift-compliance compliancescan/cis-compliance The Compliance Operator consists of several controllers, approximately one per API object. It could be useful to filter only those controllers that correspond to the API object having issues. If a ComplianceRemediation cannot be applied, view the messages from the remediationctrl controller. You can filter the messages from a single controller by parsing with jq : USD oc -n openshift-compliance logs compliance-operator-775d7bddbd-gj58f \ | jq -c 'select(.logger == "profilebundlectrl")' The timestamps are logged as seconds since UNIX epoch in UTC. To convert them to a human-readable date, use date -d @timestamp --utc , for example: USD date -d @1596184628.955853 --utc Many custom resources, most importantly ComplianceSuite and ScanSetting , allow the debug option to be set. Enabling this option increases verbosity of the OpenSCAP scanner pods, as well as some other helper pods. If a single rule is passing or failing unexpectedly, it could be helpful to run a single scan or a suite with only that rule to find the rule ID from the corresponding ComplianceCheckResult object and use it as the rule attribute value in a Scan CR. Then, together with the debug option enabled, the scanner container logs in the scanner pod would show the raw OpenSCAP logs. 5.6.7.1. Anatomy of a scan The following sections outline the components and stages of Compliance Operator scans. 5.6.7.1.1. Compliance sources The compliance content is stored in Profile objects that are generated from a ProfileBundle object. The Compliance Operator creates a ProfileBundle object for the cluster and another for the cluster nodes. USD oc get -n openshift-compliance profilebundle.compliance USD oc get -n openshift-compliance profile.compliance The ProfileBundle objects are processed by deployments labeled with the Bundle name. To troubleshoot an issue with the Bundle , you can find the deployment and view logs of the pods in a deployment: USD oc logs -n openshift-compliance -lprofile-bundle=ocp4 -c profileparser USD oc get -n openshift-compliance deployments,pods -lprofile-bundle=ocp4 USD oc logs -n openshift-compliance pods/<pod-name> USD oc describe -n openshift-compliance pod/<pod-name> -c profileparser 5.6.7.1.2. 
The ScanSetting and ScanSettingBinding objects lifecycle and debugging With valid compliance content sources, the high-level ScanSetting and ScanSettingBinding objects can be used to generate ComplianceSuite and ComplianceScan objects: apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: my-companys-constraints debug: true # For each role, a separate scan will be created pointing # to a node-role specified in roles roles: - worker --- apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: my-companys-compliance-requirements profiles: # Node checks - name: rhcos4-e8 kind: Profile apiGroup: compliance.openshift.io/v1alpha1 # Cluster checks - name: ocp4-e8 kind: Profile apiGroup: compliance.openshift.io/v1alpha1 settingsRef: name: my-companys-constraints kind: ScanSetting apiGroup: compliance.openshift.io/v1alpha1 Both ScanSetting and ScanSettingBinding objects are handled by the same controller tagged with logger=scansettingbindingctrl . These objects have no status. Any issues are communicated in the form of events: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuiteCreated 9m52s scansettingbindingctrl ComplianceSuite openshift-compliance/my-companys-compliance-requirements created Now a ComplianceSuite object is created. The flow continues to reconcile the newly created ComplianceSuite . 5.6.7.1.3. ComplianceSuite custom resource lifecycle and debugging The ComplianceSuite CR is a wrapper around ComplianceScan CRs. The ComplianceSuite CR is handled by a controller tagged with logger=suitectrl . This controller handles creating scans from a suite, and reconciling and aggregating individual Scan statuses into a single Suite status. If a suite is set to execute periodically, the suitectrl also handles creating a CronJob CR that re-runs the scans in the suite after the initial run is done: USD oc get cronjobs Example output NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE <cron_name> 0 1 * * * False 0 <none> 151m For the most important issues, events are emitted. View them with oc describe compliancesuites/<name> . The Suite objects also have a Status subresource that is updated when any of the Scan objects that belong to this suite update their Status subresource. After all expected scans are created, control is passed to the scan controller. 5.6.7.1.4. ComplianceScan custom resource lifecycle and debugging The ComplianceScan CRs are handled by the scanctrl controller. This is also where the actual scans happen and the scan results are created. Each scan goes through several phases: 5.6.7.1.4.1. Pending phase The scan is validated for correctness in this phase. If some parameters like storage size are invalid, the scan transitions to DONE with an ERROR result; otherwise, it proceeds to the Launching phase. 5.6.7.1.4.2. Launching phase In this phase, several config maps are created that contain either the environment for the scanner pods or directly the script that the scanner pods will be evaluating. List the config maps: USD oc -n openshift-compliance get cm \ -l compliance.openshift.io/scan-name=rhcos4-e8-worker,complianceoperator.openshift.io/scan-script= These config maps will be used by the scanner pods. If you ever need to modify the scanner behavior, change the scanner debug level, or print the raw results, modifying the config maps is the way to go.
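For example, to review the entrypoint script that the scanner pods will execute, you can dump the matching config maps with the same label selector; the scan name follows the example used throughout this section:

# Show the scan-script config maps, including the embedded entrypoint script
oc -n openshift-compliance get cm \
  -l compliance.openshift.io/scan-name=rhcos4-e8-worker,complianceoperator.openshift.io/scan-script= \
  -o yaml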
Afterwards, a persistent volume claim is created per scan to store the raw ARF results: USD oc get pvc -n openshift-compliance -lcompliance.openshift.io/scan-name=rhcos4-e8-worker The PVCs are mounted by a per-scan ResultServer deployment. A ResultServer is a simple HTTP server that the individual scanner pods upload the full ARF results to. Each server can run on a different node. The full ARF results might be very large and you cannot presume that it would be possible to create a volume that could be mounted from multiple nodes at the same time. After the scan is finished, the ResultServer deployment is scaled down. The PVC with the raw results can be mounted from another custom pod and the results can be fetched or inspected. The traffic between the scanner pods and the ResultServer is protected by mutual TLS protocols. Finally, the scanner pods are launched in this phase; one scanner pod for a Platform scan instance and one scanner pod per matching node for a node scan instance. The per-node pods are labeled with the node name. Each pod is always labeled with the ComplianceScan name: USD oc get pods -lcompliance.openshift.io/scan-name=rhcos4-e8-worker,workload=scanner --show-labels Example output NAME READY STATUS RESTARTS AGE LABELS rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod 0/2 Completed 0 39m compliance.openshift.io/scan-name=rhcos4-e8-worker,targetNode=ip-10-0-169-90.eu-north-1.compute.internal,workload=scanner The scan then proceeds to the Running phase. 5.6.7.1.4.3. Running phase The running phase waits until the scanner pods finish. The following terms and processes are in use in the running phase: init container : There is one init container called content-container . It runs the contentImage container and executes a single command that copies the contentFile to the /content directory shared with the other containers in this pod. scanner : This container runs the scan. For node scans, the container mounts the node filesystem as /host and mounts the content delivered by the init container. The container also mounts the entrypoint ConfigMap created in the Launching phase and executes it. The default script in the entrypoint ConfigMap executes OpenSCAP and stores the result files in the /results directory shared between the pod's containers. Logs from this pod can be viewed to determine what the OpenSCAP scanner checked. More verbose output can be viewed with the debug flag. logcollector : The logcollector container waits until the scanner container finishes. Then, it uploads the full ARF results to the ResultServer and separately uploads the XCCDF results along with the scan result and the OpenSCAP result code as a ConfigMap. These result config maps are labeled with the scan name ( compliance.openshift.io/scan-name=rhcos4-e8-worker ): USD oc describe cm/rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod Example output Name: rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod Namespace: openshift-compliance Labels: compliance.openshift.io/scan-name-scan=rhcos4-e8-worker complianceoperator.openshift.io/scan-result= Annotations: compliance-remediations/processed: compliance.openshift.io/scan-error-msg: compliance.openshift.io/scan-result: NON-COMPLIANT OpenSCAP-scan-result/node: ip-10-0-169-90.eu-north-1.compute.internal Data ==== exit-code: ---- 2 results: ---- <?xml version="1.0" encoding="UTF-8"?> ...
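Because each result config map carries the compliance.openshift.io/scan-result annotation shown above, you can print a quick per-node summary without opening the XCCDF results. A minimal sketch using jq and the empty complianceoperator.openshift.io/scan-result label from the example output; note that this lists the result config maps of all scans in the namespace:

# Print "<config map name>: <scan result>" for every result config map
oc -n openshift-compliance get cm -l complianceoperator.openshift.io/scan-result= -o json |
  jq -r '.items[] | .metadata.name + ": " + (.metadata.annotations."compliance.openshift.io/scan-result" // "unknown")'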
Scanner pods for Platform scans are similar, except: There is one extra init container called api-resource-collector that reads the OpenSCAP content provided by the content-container init container, figures out which API resources the content needs to examine, and stores those API resources in a shared directory from which the scanner container reads them. The scanner container does not need to mount the host file system. When the scanner pods are done, the scans move on to the Aggregating phase. 5.6.7.1.4.4. Aggregating phase In the aggregating phase, the scan controller spawns yet another pod called the aggregator pod. Its purpose is to take the result ConfigMap objects, read the results, and create the corresponding Kubernetes object for each check result. If the check failure can be automatically remediated, a ComplianceRemediation object is created. To provide human-readable metadata for the checks and remediations, the aggregator pod also mounts the OpenSCAP content using an init container. When a config map is processed by an aggregator pod, it is labeled with the compliance-remediations/processed label. The results of this phase are ComplianceCheckResult objects: USD oc get compliancecheckresults -lcompliance.openshift.io/scan-name=rhcos4-e8-worker Example output NAME STATUS SEVERITY rhcos4-e8-worker-accounts-no-uid-except-zero PASS high rhcos4-e8-worker-audit-rules-dac-modification-chmod FAIL medium and ComplianceRemediation objects: USD oc get complianceremediations -lcompliance.openshift.io/scan-name=rhcos4-e8-worker Example output NAME STATE rhcos4-e8-worker-audit-rules-dac-modification-chmod NotApplied rhcos4-e8-worker-audit-rules-dac-modification-chown NotApplied rhcos4-e8-worker-audit-rules-execution-chcon NotApplied rhcos4-e8-worker-audit-rules-execution-restorecon NotApplied rhcos4-e8-worker-audit-rules-execution-semanage NotApplied rhcos4-e8-worker-audit-rules-execution-setfiles NotApplied After these CRs are created, the aggregator pod exits and the scan moves on to the Done phase. 5.6.7.1.4.5. Done phase In the final scan phase, the scan resources are cleaned up if needed and the ResultServer deployment is either scaled down (if the scan was one-time) or deleted if the scan is continuous; the scan instance would then recreate the deployment again. It is also possible to trigger a re-run of a scan in the Done phase by annotating it: USD oc -n openshift-compliance \ annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan= After the scan reaches the Done phase, nothing else happens on its own unless the remediations are set to be applied automatically with autoApplyRemediations: true . The OpenShift Container Platform administrator would now review the remediations and apply them as needed. If the remediations are set to be applied automatically, the ComplianceSuite controller takes over in the Done phase, pauses the machine config pool to which the scan maps, and applies all the remediations in one go. If a remediation is applied, the ComplianceRemediation controller takes over. 5.6.7.1.5. ComplianceRemediation controller lifecycle and debugging The example scan has reported some findings. One of the remediations can be enabled by toggling its apply attribute to true : USD oc patch complianceremediations/rhcos4-e8-worker-audit-rules-dac-modification-chmod --patch '{"spec":{"apply":true}}' --type=merge The ComplianceRemediation controller ( logger=remediationctrl ) reconciles the modified object.
The result of the reconciliation is a change in the status of the remediation object that is reconciled, but also a change to the rendered per-suite MachineConfig object that contains all the applied remediations. The MachineConfig object always begins with 75- and is named after the scan and the suite: USD oc get mc | grep 75- Example output 75-rhcos4-e8-worker-my-companys-compliance-requirements 3.2.0 2m46s The remediations that the machine config currently consists of are listed in its annotations: USD oc describe mc/75-rhcos4-e8-worker-my-companys-compliance-requirements Example output Name: 75-rhcos4-e8-worker-my-companys-compliance-requirements Labels: machineconfiguration.openshift.io/role=worker Annotations: remediation/rhcos4-e8-worker-audit-rules-dac-modification-chmod: The ComplianceRemediation controller's algorithm works like this: All currently applied remediations are read into an initial remediation set. If the reconciled remediation is supposed to be applied, it is added to the set. A MachineConfig object is rendered from the set and annotated with the names of the remediations in the set. If the set is empty (the last remediation was unapplied), the rendered MachineConfig object is removed. If and only if the rendered machine config is different from the one already applied in the cluster, the applied MC is updated (or created, or deleted). Creating or modifying a MachineConfig object triggers a reboot of nodes that match the machineconfiguration.openshift.io/role label - see the Machine Config Operator documentation for more details. The remediation loop ends once the rendered machine config is updated, if needed, and the reconciled remediation object status is updated. In our case, applying the remediation would trigger a reboot. After the reboot, annotate the scan to re-run it: USD oc -n openshift-compliance \ annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan= The scan will run and finish. Check for the remediation to pass: USD oc -n openshift-compliance \ get compliancecheckresults/rhcos4-e8-worker-audit-rules-dac-modification-chmod Example output NAME STATUS SEVERITY rhcos4-e8-worker-audit-rules-dac-modification-chmod PASS medium 5.6.7.1.6. Useful labels Each pod that is spawned by the Compliance Operator is labeled specifically with the scan it belongs to and the work it does. The scan identifier is labeled with the compliance.openshift.io/scan-name label. The workload identifier is labeled with the workload label. The Compliance Operator schedules the following workloads: scanner : Performs the compliance scan. resultserver : Stores the raw results for the compliance scan. aggregator : Aggregates the results, detects inconsistencies, and outputs result objects (checkresults and remediations). suitererunner : Tags a suite to be re-run (when a schedule is set). profileparser : Parses a datastream and creates the appropriate profiles, rules and variables. When debugging information and logs are required for a certain workload, run: USD oc logs -l workload=<workload_name> -c <container_name> 5.6.7.2. Increasing Compliance Operator resource limits In some cases, the Compliance Operator might require more memory than the default limits allow. The best way to mitigate this issue is to set custom resource limits. To increase the default memory and CPU limits of scanner pods, see `ScanSetting` Custom resource .
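Before raising the limit, you can check which resource requests and limits the Operator deployment currently has; a small sketch, assuming the deployment is named compliance-operator as the pod name shown earlier suggests:

# Show the resource requests and limits of the first container in the Operator deployment
oc -n openshift-compliance get deployment compliance-operator \
  -o jsonpath='{.spec.template.spec.containers[0].resources}'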
Procedure To increase the Operator's memory limits to 500 Mi, create the following patch file named co-memlimit-patch.yaml : spec: config: resources: limits: memory: 500Mi Apply the patch file: USD oc patch sub compliance-operator -nopenshift-compliance --patch-file co-memlimit-patch.yaml --type=merge 5.6.7.3. Configuring Operator resource constraints The resources field defines Resource Constraints for all the containers in the Pod created by the Operator Lifecycle Manager (OLM). Note Resource Constraints applied in this process overwrites the existing resource constraints. Procedure Inject a request of 0.25 cpu and 64 Mi of memory, and a limit of 0.5 cpu and 128 Mi of memory in each container by editing the Subscription object: kind: Subscription metadata: name: compliance-operator namespace: openshift-compliance spec: package: package-name channel: stable config: resources: requests: memory: "64Mi" cpu: "250m" limits: memory: "128Mi" cpu: "500m" 5.6.7.4. Configuring ScanSetting resources When using the Compliance Operator in a cluster that contains more than 500 MachineConfigs, the ocp4-pci-dss-api-checks-pod pod may pause in the init phase when performing a Platform scan. Note Resource constraints applied in this process overwrites the existing resource constraints. Procedure Confirm the ocp4-pci-dss-api-checks-pod pod is stuck in the Init:OOMKilled status: USD oc get pod ocp4-pci-dss-api-checks-pod -w Example output NAME READY STATUS RESTARTS AGE ocp4-pci-dss-api-checks-pod 0/2 Init:1/2 8 (5m56s ago) 25m ocp4-pci-dss-api-checks-pod 0/2 Init:OOMKilled 8 (6m19s ago) 26m Edit the scanLimits attribute in the ScanSetting CR to increase the available memory for the ocp4-pci-dss-api-checks-pod pod: timeout: 30m strictNodeScan: true metadata: name: default namespace: openshift-compliance kind: ScanSetting showNotApplicable: false rawResultStorage: nodeSelector: node-role.kubernetes.io/master: '' pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists schedule: 0 1 * * * roles: - master - worker apiVersion: compliance.openshift.io/v1alpha1 maxRetryOnTimeout: 3 scanTolerations: - operator: Exists scanLimits: memory: 1024Mi 1 1 The default setting is 500Mi . Apply the ScanSetting CR to your cluster: USD oc apply -f scansetting.yaml 5.6.7.5. Configuring ScanSetting timeout The ScanSetting object has a timeout option that can be specified in the ComplianceScanSetting object as a duration string, such as 1h30m . If the scan does not finish within the specified timeout, the scan reattempts until the maxRetryOnTimeout limit is reached. Procedure To set a timeout and maxRetryOnTimeout in ScanSetting, modify an existing ScanSetting object: apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: rotation: 3 size: 1Gi roles: - worker - master scanTolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists schedule: '0 1 * * *' timeout: '10m0s' 1 maxRetryOnTimeout: 3 2 1 The timeout variable is defined as a duration string, such as 1h30m . The default value is 30m . To disable the timeout, set the value to 0s . 
2 The maxRetryOnTimeout variable defines how many times a retry is attempted. The default value is 3 . 5.6.7.6. Getting support If you experience difficulty with a procedure described in this documentation, or with OpenShift Container Platform in general, visit the Red Hat Customer Portal . From the Customer Portal, you can: Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products. Submit a support case to Red Hat Support. Access other product documentation. To identify issues with your cluster, you can use Insights in OpenShift Cluster Manager . Insights provides details about issues and, if available, information on how to solve a problem. If you have a suggestion for improving this documentation or have found an error, submit a Jira issue for the most relevant documentation component. Please provide specific details, such as the section name and OpenShift Container Platform version. 5.6.8. Using the oc-compliance plugin Although the Compliance Operator automates many of the checks and remediations for the cluster, the full process of bringing a cluster into compliance often requires administrator interaction with the Compliance Operator API and other components. The oc-compliance plugin makes the process easier. 5.6.8.1. Installing the oc-compliance plugin Procedure Extract the oc-compliance image to get the oc-compliance binary: USD podman run --rm -v ~/.local/bin:/mnt/out:Z registry.redhat.io/compliance/oc-compliance-rhel8:stable /bin/cp /usr/bin/oc-compliance /mnt/out/ Example output W0611 20:35:46.486903 11354 manifest.go:440] Chose linux/amd64 manifest from the manifest list. You can now run oc-compliance . 5.6.8.2. Fetching raw results When a compliance scan finishes, the results of the individual checks are listed in the resulting ComplianceCheckResult custom resource (CR). However, an administrator or auditor might require the complete details of the scan. The OpenSCAP tool creates an Advanced Recording Format (ARF) formatted file with the detailed results. This ARF file is too large to store in a config map or other standard Kubernetes resource, so a persistent volume (PV) is created to contain it. Procedure Fetching the results from the PV with the Compliance Operator is a four-step process. However, with the oc-compliance plugin, you can use a single command: USD oc compliance fetch-raw <object-type> <object-name> -o <output-path> <object-type> can be either scansettingbinding , compliancescan or compliancesuite , depending on which of these objects the scans were launched with. <object-name> is the name of the binding, suite, or scan object to gather the ARF file for, and <output-path> is the local directory to place the results. For example: USD oc compliance fetch-raw scansettingbindings my-binding -o /tmp/ Example output Fetching results for my-binding scans: ocp4-cis, ocp4-cis-node-worker, ocp4-cis-node-master Fetching raw compliance results for scan 'ocp4-cis'....... The raw compliance results are available in the following directory: /tmp/ocp4-cis Fetching raw compliance results for scan 'ocp4-cis-node-worker'........... The raw compliance results are available in the following directory: /tmp/ocp4-cis-node-worker Fetching raw compliance results for scan 'ocp4-cis-node-master'...... 
The raw compliance results are available in the following directory: /tmp/ocp4-cis-node-master View the list of files in the directory: USD ls /tmp/ocp4-cis-node-master/ Example output ocp4-cis-node-master-ip-10-0-128-89.ec2.internal-pod.xml.bzip2 ocp4-cis-node-master-ip-10-0-150-5.ec2.internal-pod.xml.bzip2 ocp4-cis-node-master-ip-10-0-163-32.ec2.internal-pod.xml.bzip2 Extract the results: USD bunzip2 -c resultsdir/worker-scan/worker-scan-stage-459-tqkg7-compute-0-pod.xml.bzip2 > resultsdir/worker-scan/worker-scan-ip-10-0-170-231.us-east-2.compute.internal-pod.xml View the results: USD ls resultsdir/worker-scan/ Example output worker-scan-ip-10-0-170-231.us-east-2.compute.internal-pod.xml worker-scan-stage-459-tqkg7-compute-0-pod.xml.bzip2 worker-scan-stage-459-tqkg7-compute-1-pod.xml.bzip2 5.6.8.3. Re-running scans Although it is possible to run scans as scheduled jobs, you must often re-run a scan on demand, particularly after remediations are applied or when other changes to the cluster are made. Procedure Rerunning a scan with the Compliance Operator requires use of an annotation on the scan object. However, with the oc-compliance plugin you can rerun a scan with a single command. Enter the following command to rerun the scans for the ScanSettingBinding object named my-binding : USD oc compliance rerun-now scansettingbindings my-binding Example output Rerunning scans from 'my-binding': ocp4-cis Re-running scan 'openshift-compliance/ocp4-cis' 5.6.8.4. Using ScanSettingBinding custom resources When using the ScanSetting and ScanSettingBinding custom resources (CRs) that the Compliance Operator provides, it is possible to run scans for multiple profiles while using a common set of scan options, such as schedule , machine roles , tolerations , and so on. While that is easier than working with multiple ComplianceSuite or ComplianceScan objects, it can confuse new users. The oc compliance bind subcommand helps you create a ScanSettingBinding CR. Procedure Run: USD oc compliance bind [--dry-run] -N <binding name> [-S <scansetting name>] <objtype/objname> [..<objtype/objname>] If you omit the -S flag, the default scan setting provided by the Compliance Operator is used. The object type is the Kubernetes object type, which can be profile or tailoredprofile . More than one object can be provided. The object name is the name of the Kubernetes resource, such as .metadata.name . Add the --dry-run option to display the YAML file of the objects that are created. 
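To preview the ScanSettingBinding that would be created without actually creating it, you can combine the --dry-run option with the default scan setting; the profile names are the same ones used in the example that follows:

# Print the YAML of the ScanSettingBinding instead of creating it
oc compliance bind --dry-run -N my-binding profile/ocp4-cis profile/ocp4-cis-node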
For example, given the following profiles and scan settings: USD oc get profile.compliance -n openshift-compliance Example output NAME AGE VERSION ocp4-cis 3h49m 1.5.0 ocp4-cis-1-4 3h49m 1.4.0 ocp4-cis-1-5 3h49m 1.5.0 ocp4-cis-node 3h49m 1.5.0 ocp4-cis-node-1-4 3h49m 1.4.0 ocp4-cis-node-1-5 3h49m 1.5.0 ocp4-e8 3h49m ocp4-high 3h49m Revision 4 ocp4-high-node 3h49m Revision 4 ocp4-high-node-rev-4 3h49m Revision 4 ocp4-high-rev-4 3h49m Revision 4 ocp4-moderate 3h49m Revision 4 ocp4-moderate-node 3h49m Revision 4 ocp4-moderate-node-rev-4 3h49m Revision 4 ocp4-moderate-rev-4 3h49m Revision 4 ocp4-nerc-cip 3h49m ocp4-nerc-cip-node 3h49m ocp4-pci-dss 3h49m 3.2.1 ocp4-pci-dss-3-2 3h49m 3.2.1 ocp4-pci-dss-4-0 3h49m 4.0.0 ocp4-pci-dss-node 3h49m 3.2.1 ocp4-pci-dss-node-3-2 3h49m 3.2.1 ocp4-pci-dss-node-4-0 3h49m 4.0.0 ocp4-stig 3h49m V2R1 ocp4-stig-node 3h49m V2R1 ocp4-stig-node-v1r1 3h49m V1R1 ocp4-stig-node-v2r1 3h49m V2R1 ocp4-stig-v1r1 3h49m V1R1 ocp4-stig-v2r1 3h49m V2R1 rhcos4-e8 3h49m rhcos4-high 3h49m Revision 4 rhcos4-high-rev-4 3h49m Revision 4 rhcos4-moderate 3h49m Revision 4 rhcos4-moderate-rev-4 3h49m Revision 4 rhcos4-nerc-cip 3h49m rhcos4-stig 3h49m V2R1 rhcos4-stig-v1r1 3h49m V1R1 rhcos4-stig-v2r1 3h49m V2R1 USD oc get scansettings -n openshift-compliance Example output NAME AGE default 10m default-auto-apply 10m To apply the default settings to the ocp4-cis and ocp4-cis-node profiles, run: USD oc compliance bind -N my-binding profile/ocp4-cis profile/ocp4-cis-node Example output Creating ScanSettingBinding my-binding After the ScanSettingBinding CR is created, the bound profile begins scanning for both profiles with the related settings. Overall, this is the fastest way to begin scanning with the Compliance Operator. 5.6.8.5. Printing controls Compliance standards are generally organized into a hierarchy as follows: A benchmark is the top-level definition of a set of controls for a particular standard. For example, FedRAMP Moderate or Center for Internet Security (CIS) v.1.6.0. A control describes a family of requirements that must be met in order to be in compliance with the benchmark. For example, FedRAMP AC-01 (access control policy and procedures). A rule is a single check that is specific for the system being brought into compliance, and one or more of these rules map to a control. The Compliance Operator handles the grouping of rules into a profile for a single benchmark. It can be difficult to determine which controls that the set of rules in a profile satisfy. Procedure The oc compliance controls subcommand provides a report of the standards and controls that a given profile satisfies: USD oc compliance controls profile ocp4-cis-node Example output +-----------+----------+ | FRAMEWORK | CONTROLS | +-----------+----------+ | CIS-OCP | 1.1.1 | + +----------+ | | 1.1.10 | + +----------+ | | 1.1.11 | + +----------+ ... 5.6.8.6. Fetching compliance remediation details The Compliance Operator provides remediation objects that are used to automate the changes required to make the cluster compliant. The fetch-fixes subcommand can help you understand exactly which configuration remediations are used. Use the fetch-fixes subcommand to extract the remediation objects from a profile, rule, or ComplianceRemediation object into a directory to inspect. 
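The same subcommand can also be pointed at a single rule when you only care about one check. A sketch, assuming that a rule object is addressed the same way as a profile and using a rule name that appears in the procedure below:

# Extract the remediation, if any, that a single rule would apply
oc compliance fetch-fixes rule ocp4-api-server-audit-log-maxsize -o /tmp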
Procedure View the remediations for a profile: USD oc compliance fetch-fixes profile ocp4-cis -o /tmp Example output No fixes to persist for rule 'ocp4-api-server-api-priority-flowschema-catch-all' 1 No fixes to persist for rule 'ocp4-api-server-api-priority-gate-enabled' No fixes to persist for rule 'ocp4-api-server-audit-log-maxbackup' Persisted rule fix to /tmp/ocp4-api-server-audit-log-maxsize.yaml No fixes to persist for rule 'ocp4-api-server-audit-log-path' No fixes to persist for rule 'ocp4-api-server-auth-mode-no-aa' No fixes to persist for rule 'ocp4-api-server-auth-mode-node' No fixes to persist for rule 'ocp4-api-server-auth-mode-rbac' No fixes to persist for rule 'ocp4-api-server-basic-auth' No fixes to persist for rule 'ocp4-api-server-bind-address' No fixes to persist for rule 'ocp4-api-server-client-ca' Persisted rule fix to /tmp/ocp4-api-server-encryption-provider-cipher.yaml Persisted rule fix to /tmp/ocp4-api-server-encryption-provider-config.yaml 1 The No fixes to persist warning is expected whenever there are rules in a profile that do not have a corresponding remediation, because either the rule cannot be remediated automatically or a remediation was not provided. You can view a sample of the YAML file. The head command will show you the first 10 lines: USD head /tmp/ocp4-api-server-audit-log-maxsize.yaml Example output apiVersion: config.openshift.io/v1 kind: APIServer metadata: name: cluster spec: maximumFileSizeMegabytes: 100 View the remediation from a ComplianceRemediation object created after a scan: USD oc get complianceremediations -n openshift-compliance Example output NAME STATE ocp4-cis-api-server-encryption-provider-cipher NotApplied ocp4-cis-api-server-encryption-provider-config NotApplied USD oc compliance fetch-fixes complianceremediations ocp4-cis-api-server-encryption-provider-cipher -o /tmp Example output Persisted compliance remediation fix to /tmp/ocp4-cis-api-server-encryption-provider-cipher.yaml You can view a sample of the YAML file. The head command will show you the first 10 lines: USD head /tmp/ocp4-cis-api-server-encryption-provider-cipher.yaml Example output apiVersion: config.openshift.io/v1 kind: APIServer metadata: name: cluster spec: encryption: type: aescbc Warning Use caution before applying remediations directly. Some remediations might not be applicable in bulk, such as the usbguard rules in the moderate profile. In these cases, allow the Compliance Operator to apply the rules because it addresses the dependencies and ensures that the cluster remains in a good state. 5.6.8.7. Viewing ComplianceCheckResult object details When scans are finished running, ComplianceCheckResult objects are created for the individual scan rules. The view-result subcommand provides a human-readable output of the ComplianceCheckResult object details. Procedure Run: USD oc compliance view-result ocp4-cis-scheduler-no-bind-address
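To find check result names to pass to view-result , you can list the ComplianceCheckResult objects first and pick the one you are interested in; for example:

# List available check results; any of these names can be passed to view-result
oc get compliancecheckresults -n openshift-compliance -o name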
operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists roles: 5 - master - worker scanTolerations: - operator: Exists schedule: 0 1 * * * 6 showNotApplicable: false strictNodeScan: true timeout: 30m", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: <name of the scan> profiles: 1 # Node checks - name: rhcos4-with-usb kind: TailoredProfile apiGroup: compliance.openshift.io/v1alpha1 # Cluster checks - name: ocp4-moderate kind: Profile apiGroup: compliance.openshift.io/v1alpha1 settingsRef: 2 name: my-companys-constraints kind: ScanSetting apiGroup: compliance.openshift.io/v1alpha1", "oc get compliancesuites", "apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceSuite metadata: name: <name of the scan> spec: autoApplyRemediations: false 1 schedule: \"0 1 * * *\" 2 scans: 3 - name: workers-scan scanType: Node profile: xccdf_org.ssgproject.content_profile_moderate content: ssg-rhcos4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc rule: \"xccdf_org.ssgproject.content_rule_no_netrc_files\" nodeSelector: node-role.kubernetes.io/worker: \"\" status: Phase: DONE 4 Result: NON-COMPLIANT 5 scanStatuses: - name: workers-scan phase: DONE result: NON-COMPLIANT", "oc get events --field-selector involvedObject.kind=ComplianceSuite,involvedObject.name=<name of the suite>", "apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceScan metadata: name: <name of the scan> spec: scanType: Node 1 profile: xccdf_org.ssgproject.content_profile_moderate 2 content: ssg-ocp4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc... 3 rule: \"xccdf_org.ssgproject.content_rule_no_netrc_files\" 4 nodeSelector: 5 node-role.kubernetes.io/worker: \"\" status: phase: DONE 6 result: NON-COMPLIANT 7", "get events --field-selector involvedObject.kind=ComplianceScan,involvedObject.name=<name of the suite>", "apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceCheckResult metadata: labels: compliance.openshift.io/check-severity: medium compliance.openshift.io/check-status: FAIL compliance.openshift.io/suite: example-compliancesuite compliance.openshift.io/scan-name: workers-scan name: workers-scan-no-direct-root-logins namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ComplianceScan name: workers-scan description: <description of scan check> instructions: <manual instructions for the scan> id: xccdf_org.ssgproject.content_rule_no_direct_root_logins severity: medium 1 status: FAIL 2", "get compliancecheckresults -l compliance.openshift.io/suite=workers-compliancesuite", "apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceRemediation metadata: labels: compliance.openshift.io/suite: example-compliancesuite compliance.openshift.io/scan-name: workers-scan machineconfiguration.openshift.io/role: worker name: workers-scan-disable-users-coredumps namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ComplianceCheckResult name: workers-scan-disable-users-coredumps uid: <UID> spec: apply: false 1 object: current: 2 apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:,%2A%20%20%20%20%20hard%20%20%20core%20%20%20%200 filesystem: root mode: 
420 path: /etc/security/limits.d/75-disable_users_coredumps.conf outdated: {} 3", "get complianceremediations -l compliance.openshift.io/suite=workers-compliancesuite", "get compliancecheckresults -l 'compliance.openshift.io/check-status in (FAIL),compliance.openshift.io/automated-remediation'", "get compliancecheckresults -l 'compliance.openshift.io/check-status in (FAIL),!compliance.openshift.io/automated-remediation'", "apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: \"true\" pod-security.kubernetes.io/enforce: privileged 1 name: openshift-compliance", "oc create -f namespace-object.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: compliance-operator namespace: openshift-compliance spec: targetNamespaces: - openshift-compliance", "oc create -f operator-group-object.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: compliance-operator-sub namespace: openshift-compliance spec: channel: \"stable\" installPlanApproval: Automatic name: compliance-operator source: redhat-operators sourceNamespace: openshift-marketplace", "oc create -f subscription-object.yaml", "oc get csv -n openshift-compliance", "oc get deploy -n openshift-compliance", "apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: \"true\" pod-security.kubernetes.io/enforce: privileged 1 name: openshift-compliance", "oc create -f namespace-object.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: compliance-operator namespace: openshift-compliance spec: targetNamespaces: - openshift-compliance", "oc create -f operator-group-object.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: compliance-operator-sub namespace: openshift-compliance spec: channel: \"stable\" installPlanApproval: Automatic name: compliance-operator source: redhat-operators sourceNamespace: openshift-marketplace config: nodeSelector: node-role.kubernetes.io/worker: \"\" 1", "oc create -f subscription-object.yaml", "oc get csv -n openshift-compliance", "oc get deploy -n openshift-compliance", "apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: \"true\" pod-security.kubernetes.io/enforce: privileged 1 name: openshift-compliance", "oc create -f namespace-object.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: compliance-operator namespace: openshift-compliance spec: targetNamespaces: - openshift-compliance", "oc create -f operator-group-object.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: compliance-operator-sub namespace: openshift-compliance spec: channel: \"stable\" installPlanApproval: Automatic name: compliance-operator source: redhat-operators sourceNamespace: openshift-marketplace config: nodeSelector: node-role.kubernetes.io/worker: \"\" env: - name: PLATFORM value: \"HyperShift\"", "oc create -f subscription-object.yaml", "oc get csv -n openshift-compliance", "oc get deploy -n openshift-compliance", "apiVersion: compliance.openshift.io/v1alpha1 kind: ProfileBundle metadata: creationTimestamp: \"2022-10-19T12:06:30Z\" finalizers: - profilebundle.finalizers.compliance.openshift.io generation: 1 name: rhcos4 namespace: openshift-compliance resourceVersion: \"46741\" uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d spec: contentFile: ssg-rhcos4-ds.xml 1 contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:900e... 
2 status: conditions: - lastTransitionTime: \"2022-10-19T12:07:51Z\" message: Profile bundle successfully parsed reason: Valid status: \"True\" type: Ready dataStreamStatus: VALID", "oc -n openshift-compliance get profilebundles rhcos4 -oyaml", "apiVersion: compliance.openshift.io/v1alpha1 kind: ProfileBundle metadata: creationTimestamp: \"2022-10-19T12:06:30Z\" finalizers: - profilebundle.finalizers.compliance.openshift.io generation: 1 name: rhcos4 namespace: openshift-compliance resourceVersion: \"46741\" uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d spec: contentFile: ssg-rhcos4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:900e... 1 status: conditions: - lastTransitionTime: \"2022-10-19T12:07:51Z\" message: Profile bundle successfully parsed reason: Valid status: \"True\" type: Ready dataStreamStatus: VALID", "oc delete ssb --all -n openshift-compliance", "oc delete ss --all -n openshift-compliance", "oc delete suite --all -n openshift-compliance", "oc delete scan --all -n openshift-compliance", "oc delete profilebundle.compliance --all -n openshift-compliance", "oc delete sub --all -n openshift-compliance", "oc delete csv --all -n openshift-compliance", "oc delete project openshift-compliance", "project.project.openshift.io \"openshift-compliance\" deleted", "oc get project/openshift-compliance", "Error from server (NotFound): namespaces \"openshift-compliance\" not found", "oc explain scansettings", "oc explain scansettingbindings", "oc describe scansettings default -n openshift-compliance", "Name: default Namespace: openshift-compliance Labels: <none> Annotations: <none> API Version: compliance.openshift.io/v1alpha1 Kind: ScanSetting Max Retry On Timeout: 3 Metadata: Creation Timestamp: 2024-07-16T14:56:42Z Generation: 2 Resource Version: 91655682 UID: 50358cf1-57a8-4f69-ac50-5c7a5938e402 Raw Result Storage: Node Selector: node-role.kubernetes.io/master: Pv Access Modes: ReadWriteOnce 1 Rotation: 3 2 Size: 1Gi 3 Storage Class Name: standard 4 Tolerations: Effect: NoSchedule Key: node-role.kubernetes.io/master Operator: Exists Effect: NoExecute Key: node.kubernetes.io/not-ready Operator: Exists Toleration Seconds: 300 Effect: NoExecute Key: node.kubernetes.io/unreachable Operator: Exists Toleration Seconds: 300 Effect: NoSchedule Key: node.kubernetes.io/memory-pressure Operator: Exists Roles: master 5 worker 6 Scan Tolerations: 7 Operator: Exists Schedule: 0 1 * * * 8 Show Not Applicable: false Strict Node Scan: true Suspend: false Timeout: 30m Events: <none>", "Name: default-auto-apply Namespace: openshift-compliance Labels: <none> Annotations: <none> API Version: compliance.openshift.io/v1alpha1 Auto Apply Remediations: true 1 Auto Update Remediations: true 2 Kind: ScanSetting Metadata: Creation Timestamp: 2022-10-18T20:21:00Z Generation: 1 Managed Fields: API Version: compliance.openshift.io/v1alpha1 Fields Type: FieldsV1 fieldsV1: f:autoApplyRemediations: f:autoUpdateRemediations: f:rawResultStorage: .: f:nodeSelector: .: f:node-role.kubernetes.io/master: f:pvAccessModes: f:rotation: f:size: f:tolerations: f:roles: f:scanTolerations: f:schedule: f:showNotApplicable: f:strictNodeScan: Manager: compliance-operator Operation: Update Time: 2022-10-18T20:21:00Z Resource Version: 38840 UID: 8cb0967d-05e0-4d7a-ac1c-08a7f7e89e84 Raw Result Storage: Node Selector: node-role.kubernetes.io/master: Pv Access Modes: ReadWriteOnce Rotation: 3 Size: 1Gi Tolerations: Effect: NoSchedule Key: node-role.kubernetes.io/master Operator: Exists Effect: 
NoExecute Key: node.kubernetes.io/not-ready Operator: Exists Toleration Seconds: 300 Effect: NoExecute Key: node.kubernetes.io/unreachable Operator: Exists Toleration Seconds: 300 Effect: NoSchedule Key: node.kubernetes.io/memory-pressure Operator: Exists Roles: master worker Scan Tolerations: Operator: Exists Schedule: 0 1 * * * Show Not Applicable: false Strict Node Scan: true Events: <none>", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: cis-compliance namespace: openshift-compliance profiles: - name: ocp4-cis-node kind: Profile apiGroup: compliance.openshift.io/v1alpha1 - name: ocp4-cis kind: Profile apiGroup: compliance.openshift.io/v1alpha1 settingsRef: name: default kind: ScanSetting apiGroup: compliance.openshift.io/v1alpha1", "oc create -f <file-name>.yaml -n openshift-compliance", "oc get compliancescan -w -n openshift-compliance", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: storageClassName: standard rotation: 10 size: 10Gi roles: - worker - master scanTolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists schedule: '0 1 * * *'", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: rs-on-workers namespace: openshift-compliance rawResultStorage: nodeSelector: node-role.kubernetes.io/worker: \"\" 1 pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - operator: Exists 2 roles: - worker - master scanTolerations: - operator: Exists schedule: 0 1 * * *", "oc create -f rs-workers.yaml", "oc get scansettings rs-on-workers -n openshift-compliance -o yaml", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: creationTimestamp: \"2021-11-19T19:36:36Z\" generation: 1 name: rs-on-workers namespace: openshift-compliance resourceVersion: \"48305\" uid: 43fdfc5f-15a7-445a-8bbc-0e4a160cd46e rawResultStorage: nodeSelector: node-role.kubernetes.io/worker: \"\" pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - operator: Exists roles: - worker - master scanTolerations: - operator: Exists schedule: 0 1 * * * strictNodeScan: true", "oc get hostedcluster -A", "NAMESPACE NAME VERSION KUBECONFIG PROGRESS AVAILABLE PROGRESSING MESSAGE local-cluster 79136a1bdb84b3c13217 4.13.5 79136a1bdb84b3c13217-admin-kubeconfig Completed True False The hosted control plane is available", "apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: hypershift-cisk57aw88gry namespace: openshift-compliance spec: description: This profile test required rules extends: ocp4-cis 1 title: Management namespace profile setValues: - name: ocp4-hypershift-cluster rationale: This value is used for HyperShift version detection value: 79136a1bdb84b3c13217 2 - name: ocp4-hypershift-namespace-prefix rationale: This value is used for HyperShift control plane namespace detection value: local-cluster 3", "oc create -n openshift-compliance -f mgmt-tp.yaml", "spec.containers[].resources.limits.cpu spec.containers[].resources.limits.memory spec.containers[].resources.limits.hugepages-<size> spec.containers[].resources.requests.cpu spec.containers[].resources.requests.memory spec.containers[].resources.requests.hugepages-<size>", "apiVersion: v1 kind: Pod metadata: name: frontend spec: containers: - name: app image: images.my-company.example/app:v4 resources: requests: 1 memory: \"64Mi\" cpu: \"250m\" limits: 2 memory: \"128Mi\" cpu: \"500m\" - name: log-aggregator image: 
images.my-company.example/log-aggregator:v6 resources: requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\"", "apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: new-profile annotations: compliance.openshift.io/product-type: Node 1 spec: extends: ocp4-cis-node 2 description: My custom profile 3 title: Custom profile 4 enableRules: - name: ocp4-etcd-unique-ca rationale: We really need to enable this disableRules: - name: ocp4-file-groupowner-cni-conf rationale: This does not apply to the cluster", "oc get rules.compliance -n openshift-compliance -l compliance.openshift.io/profile-bundle=rhcos4", "oc get variables.compliance -n openshift-compliance -l compliance.openshift.io/profile-bundle=rhcos4", "apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: nist-moderate-modified spec: extends: rhcos4-moderate description: NIST moderate profile title: My modified NIST moderate profile disableRules: - name: rhcos4-file-permissions-var-log-messages rationale: The file contains logs of error messages in the system - name: rhcos4-account-disable-post-pw-expiration rationale: No need to check this as it comes from the IdP setValues: - name: rhcos4-var-selinux-state rationale: Organizational requirements value: permissive", "apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: ocp4-manual-scc-check spec: extends: ocp4-cis description: This profile extends ocp4-cis by forcing the SCC check to always return MANUAL title: OCP4 CIS profile with manual SCC check manualRules: - name: ocp4-scc-limit-container-allowed-capabilities rationale: We use third party software that installs its own SCC with extra privileges", "oc create -n openshift-compliance -f new-profile-node.yaml 1", "tailoredprofile.compliance.openshift.io/nist-moderate-modified created", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: nist-moderate-modified profiles: - apiGroup: compliance.openshift.io/v1alpha1 kind: Profile name: ocp4-moderate - apiGroup: compliance.openshift.io/v1alpha1 kind: TailoredProfile name: nist-moderate-modified settingsRef: apiGroup: compliance.openshift.io/v1alpha1 kind: ScanSetting name: default", "oc create -n openshift-compliance -f new-scansettingbinding.yaml", "scansettingbinding.compliance.openshift.io/nist-moderate-modified created", "oc get compliancesuites nist-moderate-modified -o json -n openshift-compliance | jq '.status.scanStatuses[].resultsStorage'", "{ \"name\": \"ocp4-moderate\", \"namespace\": \"openshift-compliance\" } { \"name\": \"nist-moderate-modified-master\", \"namespace\": \"openshift-compliance\" } { \"name\": \"nist-moderate-modified-worker\", \"namespace\": \"openshift-compliance\" }", "oc get pvc -n openshift-compliance rhcos4-moderate-worker", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE rhcos4-moderate-worker Bound pvc-548f6cfe-164b-42fe-ba13-a07cfbc77f3a 1Gi RWO gp2 92m", "oc create -n openshift-compliance -f pod.yaml", "apiVersion: \"v1\" kind: Pod metadata: name: pv-extract spec: containers: - name: pv-extract-pod image: registry.access.redhat.com/ubi9/ubi command: [\"sleep\", \"3000\"] volumeMounts: - mountPath: \"/workers-scan-results\" name: workers-scan-vol volumes: - name: workers-scan-vol persistentVolumeClaim: claimName: rhcos4-moderate-worker", "oc cp pv-extract:/workers-scan-results -n openshift-compliance .", "oc delete pod pv-extract -n openshift-compliance", "oc get -n openshift-compliance 
compliancecheckresults -l compliance.openshift.io/suite=workers-compliancesuite", "oc get -n openshift-compliance compliancecheckresults -l compliance.openshift.io/scan=workers-scan", "oc get -n openshift-compliance compliancecheckresults -l 'compliance.openshift.io/check-status=FAIL,compliance.openshift.io/automated-remediation'", "oc get compliancecheckresults -n openshift-compliance -l 'compliance.openshift.io/check-status=FAIL,compliance.openshift.io/check-severity=high'", "NAME STATUS SEVERITY nist-moderate-modified-master-configure-crypto-policy FAIL high nist-moderate-modified-master-coreos-pti-kernel-argument FAIL high nist-moderate-modified-master-disable-ctrlaltdel-burstaction FAIL high nist-moderate-modified-master-disable-ctrlaltdel-reboot FAIL high nist-moderate-modified-master-enable-fips-mode FAIL high nist-moderate-modified-master-no-empty-passwords FAIL high nist-moderate-modified-master-selinux-state FAIL high nist-moderate-modified-worker-configure-crypto-policy FAIL high nist-moderate-modified-worker-coreos-pti-kernel-argument FAIL high nist-moderate-modified-worker-disable-ctrlaltdel-burstaction FAIL high nist-moderate-modified-worker-disable-ctrlaltdel-reboot FAIL high nist-moderate-modified-worker-enable-fips-mode FAIL high nist-moderate-modified-worker-no-empty-passwords FAIL high nist-moderate-modified-worker-selinux-state FAIL high ocp4-moderate-configure-network-policies-namespaces FAIL high ocp4-moderate-fips-mode-enabled-on-all-nodes FAIL high", "oc get -n openshift-compliance compliancecheckresults -l 'compliance.openshift.io/check-status=FAIL,!compliance.openshift.io/automated-remediation'", "spec: apply: false current: object: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/sysctl.d/75-sysctl_net_ipv4_conf_all_accept_redirects.conf mode: 0644 contents: source: data:,net.ipv4.conf.all.accept_redirects%3D0 outdated: {} status: applicationState: NotApplied", "echo \"net.ipv4.conf.all.accept_redirects%3D0\" | python3 -c \"import sys, urllib.parse; print(urllib.parse.unquote(''.join(sys.stdin.readlines())))\"", "net.ipv4.conf.all.accept_redirects=0", "oc get nodes -n openshift-compliance", "NAME STATUS ROLES AGE VERSION ip-10-0-128-92.us-east-2.compute.internal Ready master 5h21m v1.27.3 ip-10-0-158-32.us-east-2.compute.internal Ready worker 5h17m v1.27.3 ip-10-0-166-81.us-east-2.compute.internal Ready worker 5h17m v1.27.3 ip-10-0-171-170.us-east-2.compute.internal Ready master 5h21m v1.27.3 ip-10-0-197-35.us-east-2.compute.internal Ready master 5h22m v1.27.3", "oc -n openshift-compliance label node ip-10-0-166-81.us-east-2.compute.internal node-role.kubernetes.io/<machine_config_pool_name>=", "node/ip-10-0-166-81.us-east-2.compute.internal labeled", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: <machine_config_pool_name> labels: pools.operator.machineconfiguration.openshift.io/<machine_config_pool_name>: '' 1 spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,<machine_config_pool_name>]} nodeSelector: matchLabels: node-role.kubernetes.io/<machine_config_pool_name>: \"\"", "oc get mcp -w", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: rotation: 3 size: 1Gi roles: - worker - master - example scanTolerations: - effect: NoSchedule key: node-role.kubernetes.io/master 
operator: Exists schedule: '0 1 * * *'", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: cis namespace: openshift-compliance profiles: - apiGroup: compliance.openshift.io/v1alpha1 kind: Profile name: ocp4-cis - apiGroup: compliance.openshift.io/v1alpha1 kind: Profile name: ocp4-cis-node settingsRef: apiGroup: compliance.openshift.io/v1alpha1 kind: ScanSetting name: default", "oc get rules -o json | jq '.items[] | select(.checkType == \"Platform\") | select(.metadata.name | contains(\"ocp4-kubelet-\")) | .metadata.name'", "oc label mcp <sub-pool-name> pools.operator.machineconfiguration.openshift.io/<sub-pool-name>=", "oc -n openshift-compliance patch complianceremediations/<scan-name>-sysctl-net-ipv4-conf-all-accept-redirects --patch '{\"spec\":{\"apply\":true}}' --type=merge", "oc edit image.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2020-09-10T10:12:54Z\" generation: 2 name: cluster resourceVersion: \"363096\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: 2dcb614e-2f8a-4a23-ba9a-8e33cd0ff77e spec: allowedRegistriesForImport: - domainName: registry.redhat.io status: externalRegistryHostnames: - default-route-openshift-image-registry.apps.user-cluster-09-10-12-07.devcluster.openshift.com internalRegistryHostname: image-registry.openshift-image-registry.svc:5000", "oc -n openshift-compliance annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=", "oc -n openshift-compliance get complianceremediations -l complianceoperator.openshift.io/outdated-remediation=", "NAME STATE workers-scan-no-empty-passwords Outdated", "oc -n openshift-compliance patch complianceremediations workers-scan-no-empty-passwords --type json -p '[{\"op\":\"remove\", \"path\":/spec/outdated}]'", "oc get -n openshift-compliance complianceremediations workers-scan-no-empty-passwords", "NAME STATE workers-scan-no-empty-passwords Applied", "oc -n openshift-compliance patch complianceremediations/rhcos4-moderate-worker-sysctl-net-ipv4-conf-all-accept-redirects --patch '{\"spec\":{\"apply\":false}}' --type=merge", "oc -n openshift-compliance get remediation \\ one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available -o yaml", "apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceRemediation metadata: annotations: compliance.openshift.io/xccdf-value-used: var-kubelet-evictionhard-imagefs-available creationTimestamp: \"2022-01-05T19:52:27Z\" generation: 1 labels: compliance.openshift.io/scan-name: one-rule-tp-node-master 1 compliance.openshift.io/suite: one-rule-ssb-node name: one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ComplianceCheckResult name: one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available uid: fe8e1577-9060-4c59-95b2-3e2c51709adc resourceVersion: \"84820\" uid: 5339d21a-24d7-40cb-84d2-7a2ebb015355 spec: apply: true current: object: apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig spec: kubeletConfig: evictionHard: imagefs.available: 10% 2 outdated: {} type: Configuration status: applicationState: Applied", "oc -n openshift-compliance patch complianceremediations/one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available -p '{\"spec\":{\"apply\":false}}' 
--type=merge", "oc -n openshift-compliance get kubeletconfig --selector compliance.openshift.io/scan-name=one-rule-tp-node-master", "NAME AGE compliance-operator-kubelet-master 2m34s", "oc edit -n openshift-compliance KubeletConfig compliance-operator-kubelet-master", "oc -n openshift-compliance annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=", "apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceSuite metadata: name: workers-compliancesuite spec: scans: - name: workers-scan profile: xccdf_org.ssgproject.content_profile_moderate content: ssg-rhcos4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc debug: true rule: xccdf_org.ssgproject.content_rule_no_direct_root_logins nodeSelector: node-role.kubernetes.io/worker: \"\"", "apiVersion: compliance.openshift.io/v1alpha1 strictNodeScan: true metadata: name: default namespace: openshift-compliance priorityClass: compliance-high-priority 1 kind: ScanSetting showNotApplicable: false rawResultStorage: nodeSelector: node-role.kubernetes.io/master: '' pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists schedule: 0 1 * * * roles: - master - worker scanTolerations: - operator: Exists", "oc -n openshift-compliance create configmap nist-moderate-modified --from-file=tailoring.xml=/path/to/the/tailoringFile.xml", "apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceSuite metadata: name: workers-compliancesuite spec: debug: true scans: - name: workers-scan profile: xccdf_org.ssgproject.content_profile_moderate content: ssg-rhcos4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc debug: true tailoringConfigMap: name: nist-moderate-modified nodeSelector: node-role.kubernetes.io/worker: \"\"", "oc -n openshift-compliance annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=", "oc get mc", "75-worker-scan-chronyd-or-ntpd-specify-remote-server 75-worker-scan-configure-usbguard-auditbackend 75-worker-scan-service-usbguard-enabled 75-worker-scan-usbguard-allow-hid-and-hub", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: storageClassName: standard rotation: 10 size: 10Gi roles: - worker - master scanTolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists schedule: '0 1 * * *'", "oc -n openshift-compliance annotate compliancesuites/workers-compliancesuite compliance.openshift.io/apply-remediations=", "oc -n openshift-compliance annotate compliancesuites/workers-compliancesuite compliance.openshift.io/remove-outdated=", "allowHostDirVolumePlugin: false allowHostIPC: false allowHostNetwork: false allowHostPID: false allowHostPorts: false allowPrivilegeEscalation: true allowPrivilegedContainer: false allowedCapabilities: null apiVersion: security.openshift.io/v1 defaultAddCapabilities: null fsGroup: type: MustRunAs kind: SecurityContextConstraints metadata: name: restricted-adjusted-compliance priority: 30 1 readOnlyRootFilesystem: false requiredDropCapabilities: - KILL - SETUID - SETGID - MKNOD runAsUser: type: MustRunAsRange 
seLinuxContext: type: MustRunAs supplementalGroups: type: RunAsAny users: - system:serviceaccount:openshift-compliance:api-resource-collector 2 volumes: - configMap - downwardAPI - emptyDir - persistentVolumeClaim - projected - secret", "oc create -n openshift-compliance -f restricted-adjusted-compliance.yaml", "securitycontextconstraints.security.openshift.io/restricted-adjusted-compliance created", "oc get -n openshift-compliance scc restricted-adjusted-compliance", "NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES restricted-adjusted-compliance false <no value> MustRunAs MustRunAsRange MustRunAs RunAsAny 30 false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"]", "oc get events -n openshift-compliance", "oc describe -n openshift-compliance compliancescan/cis-compliance", "oc -n openshift-compliance logs compliance-operator-775d7bddbd-gj58f | jq -c 'select(.logger == \"profilebundlectrl\")'", "date -d @1596184628.955853 --utc", "oc get -n openshift-compliance profilebundle.compliance", "oc get -n openshift-compliance profile.compliance", "oc logs -n openshift-compliance -lprofile-bundle=ocp4 -c profileparser", "oc get -n openshift-compliance deployments,pods -lprofile-bundle=ocp4", "oc logs -n openshift-compliance pods/<pod-name>", "oc describe -n openshift-compliance pod/<pod-name> -c profileparser", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: my-companys-constraints debug: true For each role, a separate scan will be created pointing to a node-role specified in roles roles: - worker --- apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: my-companys-compliance-requirements profiles: # Node checks - name: rhcos4-e8 kind: Profile apiGroup: compliance.openshift.io/v1alpha1 # Cluster checks - name: ocp4-e8 kind: Profile apiGroup: compliance.openshift.io/v1alpha1 settingsRef: name: my-companys-constraints kind: ScanSetting apiGroup: compliance.openshift.io/v1alpha1", "Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuiteCreated 9m52s scansettingbindingctrl ComplianceSuite openshift-compliance/my-companys-compliance-requirements created", "oc get cronjobs", "NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE <cron_name> 0 1 * * * False 0 <none> 151m", "oc -n openshift-compliance get cm -l compliance.openshift.io/scan-name=rhcos4-e8-worker,complianceoperator.openshift.io/scan-script=", "oc get pvc -n openshift-compliance -lcompliance.openshift.io/scan-name=rhcos4-e8-worker", "oc get pods -lcompliance.openshift.io/scan-name=rhcos4-e8-worker,workload=scanner --show-labels", "NAME READY STATUS RESTARTS AGE LABELS rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod 0/2 Completed 0 39m compliance.openshift.io/scan-name=rhcos4-e8-worker,targetNode=ip-10-0-169-90.eu-north-1.compute.internal,workload=scanner", "oc describe cm/rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod", "Name: rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod Namespace: openshift-compliance Labels: compliance.openshift.io/scan-name-scan=rhcos4-e8-worker complianceoperator.openshift.io/scan-result= Annotations: compliance-remediations/processed: compliance.openshift.io/scan-error-msg: compliance.openshift.io/scan-result: NON-COMPLIANT OpenSCAP-scan-result/node: ip-10-0-169-90.eu-north-1.compute.internal Data ==== exit-code: ---- 2 results: ---- <?xml version=\"1.0\" encoding=\"UTF-8\"?>", "oc get compliancecheckresults 
-lcompliance.openshift.io/scan-name=rhcos4-e8-worker", "NAME STATUS SEVERITY rhcos4-e8-worker-accounts-no-uid-except-zero PASS high rhcos4-e8-worker-audit-rules-dac-modification-chmod FAIL medium", "oc get complianceremediations -lcompliance.openshift.io/scan-name=rhcos4-e8-worker", "NAME STATE rhcos4-e8-worker-audit-rules-dac-modification-chmod NotApplied rhcos4-e8-worker-audit-rules-dac-modification-chown NotApplied rhcos4-e8-worker-audit-rules-execution-chcon NotApplied rhcos4-e8-worker-audit-rules-execution-restorecon NotApplied rhcos4-e8-worker-audit-rules-execution-semanage NotApplied rhcos4-e8-worker-audit-rules-execution-setfiles NotApplied", "oc -n openshift-compliance annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=", "oc patch complianceremediations/rhcos4-e8-worker-audit-rules-dac-modification-chmod --patch '{\"spec\":{\"apply\":true}}' --type=merge", "oc get mc | grep 75-", "75-rhcos4-e8-worker-my-companys-compliance-requirements 3.2.0 2m46s", "oc describe mc/75-rhcos4-e8-worker-my-companys-compliance-requirements", "Name: 75-rhcos4-e8-worker-my-companys-compliance-requirements Labels: machineconfiguration.openshift.io/role=worker Annotations: remediation/rhcos4-e8-worker-audit-rules-dac-modification-chmod:", "oc -n openshift-compliance annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=", "oc -n openshift-compliance get compliancecheckresults/rhcos4-e8-worker-audit-rules-dac-modification-chmod", "NAME STATUS SEVERITY rhcos4-e8-worker-audit-rules-dac-modification-chmod PASS medium", "oc logs -l workload=<workload_name> -c <container_name>", "spec: config: resources: limits: memory: 500Mi", "oc patch sub compliance-operator -nopenshift-compliance --patch-file co-memlimit-patch.yaml --type=merge", "kind: Subscription metadata: name: compliance-operator namespace: openshift-compliance spec: package: package-name channel: stable config: resources: requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\"", "oc get pod ocp4-pci-dss-api-checks-pod -w", "NAME READY STATUS RESTARTS AGE ocp4-pci-dss-api-checks-pod 0/2 Init:1/2 8 (5m56s ago) 25m ocp4-pci-dss-api-checks-pod 0/2 Init:OOMKilled 8 (6m19s ago) 26m", "timeout: 30m strictNodeScan: true metadata: name: default namespace: openshift-compliance kind: ScanSetting showNotApplicable: false rawResultStorage: nodeSelector: node-role.kubernetes.io/master: '' pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists schedule: 0 1 * * * roles: - master - worker apiVersion: compliance.openshift.io/v1alpha1 maxRetryOnTimeout: 3 scanTolerations: - operator: Exists scanLimits: memory: 1024Mi 1", "oc apply -f scansetting.yaml", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: rotation: 3 size: 1Gi roles: - worker - master scanTolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists schedule: '0 1 * * *' timeout: '10m0s' 1 maxRetryOnTimeout: 3 2", "podman run --rm -v ~/.local/bin:/mnt/out:Z registry.redhat.io/compliance/oc-compliance-rhel8:stable /bin/cp /usr/bin/oc-compliance /mnt/out/", "W0611 20:35:46.486903 
11354 manifest.go:440] Chose linux/amd64 manifest from the manifest list.", "oc compliance fetch-raw <object-type> <object-name> -o <output-path>", "oc compliance fetch-raw scansettingbindings my-binding -o /tmp/", "Fetching results for my-binding scans: ocp4-cis, ocp4-cis-node-worker, ocp4-cis-node-master Fetching raw compliance results for scan 'ocp4-cis'.... The raw compliance results are available in the following directory: /tmp/ocp4-cis Fetching raw compliance results for scan 'ocp4-cis-node-worker'........ The raw compliance results are available in the following directory: /tmp/ocp4-cis-node-worker Fetching raw compliance results for scan 'ocp4-cis-node-master'... The raw compliance results are available in the following directory: /tmp/ocp4-cis-node-master", "ls /tmp/ocp4-cis-node-master/", "ocp4-cis-node-master-ip-10-0-128-89.ec2.internal-pod.xml.bzip2 ocp4-cis-node-master-ip-10-0-150-5.ec2.internal-pod.xml.bzip2 ocp4-cis-node-master-ip-10-0-163-32.ec2.internal-pod.xml.bzip2", "bunzip2 -c resultsdir/worker-scan/worker-scan-stage-459-tqkg7-compute-0-pod.xml.bzip2 > resultsdir/worker-scan/worker-scan-ip-10-0-170-231.us-east-2.compute.internal-pod.xml", "ls resultsdir/worker-scan/", "worker-scan-ip-10-0-170-231.us-east-2.compute.internal-pod.xml worker-scan-stage-459-tqkg7-compute-0-pod.xml.bzip2 worker-scan-stage-459-tqkg7-compute-1-pod.xml.bzip2", "oc compliance rerun-now scansettingbindings my-binding", "Rerunning scans from 'my-binding': ocp4-cis Re-running scan 'openshift-compliance/ocp4-cis'", "oc compliance bind [--dry-run] -N <binding name> [-S <scansetting name>] <objtype/objname> [..<objtype/objname>]", "oc get profile.compliance -n openshift-compliance", "NAME AGE VERSION ocp4-cis 3h49m 1.5.0 ocp4-cis-1-4 3h49m 1.4.0 ocp4-cis-1-5 3h49m 1.5.0 ocp4-cis-node 3h49m 1.5.0 ocp4-cis-node-1-4 3h49m 1.4.0 ocp4-cis-node-1-5 3h49m 1.5.0 ocp4-e8 3h49m ocp4-high 3h49m Revision 4 ocp4-high-node 3h49m Revision 4 ocp4-high-node-rev-4 3h49m Revision 4 ocp4-high-rev-4 3h49m Revision 4 ocp4-moderate 3h49m Revision 4 ocp4-moderate-node 3h49m Revision 4 ocp4-moderate-node-rev-4 3h49m Revision 4 ocp4-moderate-rev-4 3h49m Revision 4 ocp4-nerc-cip 3h49m ocp4-nerc-cip-node 3h49m ocp4-pci-dss 3h49m 3.2.1 ocp4-pci-dss-3-2 3h49m 3.2.1 ocp4-pci-dss-4-0 3h49m 4.0.0 ocp4-pci-dss-node 3h49m 3.2.1 ocp4-pci-dss-node-3-2 3h49m 3.2.1 ocp4-pci-dss-node-4-0 3h49m 4.0.0 ocp4-stig 3h49m V2R1 ocp4-stig-node 3h49m V2R1 ocp4-stig-node-v1r1 3h49m V1R1 ocp4-stig-node-v2r1 3h49m V2R1 ocp4-stig-v1r1 3h49m V1R1 ocp4-stig-v2r1 3h49m V2R1 rhcos4-e8 3h49m rhcos4-high 3h49m Revision 4 rhcos4-high-rev-4 3h49m Revision 4 rhcos4-moderate 3h49m Revision 4 rhcos4-moderate-rev-4 3h49m Revision 4 rhcos4-nerc-cip 3h49m rhcos4-stig 3h49m V2R1 rhcos4-stig-v1r1 3h49m V1R1 rhcos4-stig-v2r1 3h49m V2R1", "oc get scansettings -n openshift-compliance", "NAME AGE default 10m default-auto-apply 10m", "oc compliance bind -N my-binding profile/ocp4-cis profile/ocp4-cis-node", "Creating ScanSettingBinding my-binding", "oc compliance controls profile ocp4-cis-node", "+-----------+----------+ | FRAMEWORK | CONTROLS | +-----------+----------+ | CIS-OCP | 1.1.1 | + +----------+ | | 1.1.10 | + +----------+ | | 1.1.11 | + +----------+", "oc compliance fetch-fixes profile ocp4-cis -o /tmp", "No fixes to persist for rule 'ocp4-api-server-api-priority-flowschema-catch-all' 1 No fixes to persist for rule 'ocp4-api-server-api-priority-gate-enabled' No fixes to persist for rule 'ocp4-api-server-audit-log-maxbackup' Persisted rule fix to 
/tmp/ocp4-api-server-audit-log-maxsize.yaml No fixes to persist for rule 'ocp4-api-server-audit-log-path' No fixes to persist for rule 'ocp4-api-server-auth-mode-no-aa' No fixes to persist for rule 'ocp4-api-server-auth-mode-node' No fixes to persist for rule 'ocp4-api-server-auth-mode-rbac' No fixes to persist for rule 'ocp4-api-server-basic-auth' No fixes to persist for rule 'ocp4-api-server-bind-address' No fixes to persist for rule 'ocp4-api-server-client-ca' Persisted rule fix to /tmp/ocp4-api-server-encryption-provider-cipher.yaml Persisted rule fix to /tmp/ocp4-api-server-encryption-provider-config.yaml", "head /tmp/ocp4-api-server-audit-log-maxsize.yaml", "apiVersion: config.openshift.io/v1 kind: APIServer metadata: name: cluster spec: maximumFileSizeMegabytes: 100", "oc get complianceremediations -n openshift-compliance", "NAME STATE ocp4-cis-api-server-encryption-provider-cipher NotApplied ocp4-cis-api-server-encryption-provider-config NotApplied", "oc compliance fetch-fixes complianceremediations ocp4-cis-api-server-encryption-provider-cipher -o /tmp", "Persisted compliance remediation fix to /tmp/ocp4-cis-api-server-encryption-provider-cipher.yaml", "head /tmp/ocp4-cis-api-server-encryption-provider-cipher.yaml", "apiVersion: config.openshift.io/v1 kind: APIServer metadata: name: cluster spec: encryption: type: aescbc", "oc compliance view-result ocp4-cis-scheduler-no-bind-address" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/security_and_compliance/compliance-operator
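Rather than applying a persisted fix file by hand, the remediation shown in the example output above can be handed back to the Compliance Operator, which is the approach the warning in this section recommends. A minimal sketch using only command patterns that already appear in this section (the remediation name is taken from the example output; substitute your own):
# Let the Compliance Operator apply the remediation instead of applying the fetched YAML manually
$ oc -n openshift-compliance patch complianceremediations/ocp4-cis-api-server-encryption-provider-cipher --patch '{"spec":{"apply":true}}' --type=merge
# Confirm that the remediation state changes from NotApplied to Applied
$ oc get complianceremediations -n openshift-compliance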
Chapter 20. Virtualization
Chapter 20. Virtualization virt-v2v converts virtual machine CPU topology With this update, the virt-v2v utility preserves the CPU topology of the converted virtual machines (VMs). This ensures that the VM CPU works the same way after the conversion as it did before the conversion, which avoids potential runtime problems. (BZ# 1541908 ) virt-v2v can import virtual machines directly to RHV The virt-v2v utility is now able to output a converted virtual machine (VM) directly to a Red Hat Virtualization (RHV) client. As a result, importing VMs converted by virt-v2v using the Red Hat Virtualization Manager (RHVM) is now easier, faster, and more reliable. Note that this feature requires RHV version 4.2 or later to work properly. (BZ# 1557273 ) The i6300esb watchdog is now supported by libvirt With this update, the libvirt API supports the i6300esb watchdog device. As a result, KVM virtual machines can use this device to automatically trigger a specified action, such as saving a core dump of the guest if the guest OS becomes unresponsive or terminates unexpectedly. (BZ# 1447169 ) Paravirtualized clock added to Red Hat Enterprise Linux VMs With this update, the paravirtualized sched_clock() function has been integrated in the Red Hat Enterprise Linux kernel. This improves the performance of Red Hat Enterprise Linux virtual machines (VMs) running on VMware hypervisors. Note that the function is enabled by default. To disable it, add the no-vmw-sched-clock option to the kernel command line. (BZ# 1507027 ) VNC console is supported on IBM Z This update enables the virtio-gpu kernel configuration in guests running on the IBM Z architecture. As a result, KVM guests on an IBM Z host are now able to use the VNC console to display their graphical output. (BZ#1570090) QEMU Guest Agent diagnostics enhanced To maintain the qemu-guest-agent's compatibility with the latest version of VDSM, a number of features have been backported from the most recent upstream version. These include the addition of qemu-get-host-name , qemu-get-users , qemu-get-osinfo , and qemu-get-timezone commands, which improve the diagnostic capabilities of QEMU Guest Agent. (BZ# 1569013 )
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.6_release_notes/new_features_virtualization
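As a rough illustration of the sched_clock note above (this is not part of the original release note, and it assumes the guest uses grubby to manage its kernel arguments), the option could be added like this:
# Disable the paravirtualized sched_clock() on a RHEL guest running on a VMware hypervisor
grubby --update-kernel=ALL --args="no-vmw-sched-clock"
# Reboot the guest so the new kernel command line takes effect
reboot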
22.4.2. Installing from a Hard Drive
22.4.2. Installing from a Hard Drive The Select Partition screen applies only if you are installing from a disk partition (that is, you selected Hard Drive in the Installation Method dialog). This dialog allows you to name the disk partition and directory from which you are installing Red Hat Enterprise Linux. If you used the repo=hd boot option, you already specified a partition. Figure 22.5. Selecting Partition Dialog for Hard Drive Installation Select the partition containing the ISO files from the list of available partitions. DASD names begin with /dev/dasd . Each individual drive has its own letter, for example /dev/dasda or /dev/sda . Each partition on a drive is numbered, for example /dev/dasda1 or /dev/sda1 . For an FCP LUN, you would have to either boot (IPL) from the same FCP LUN or use the rescue shell provided by the linuxrc menus to manually activate the FCP LUN holding the ISOs as described in Section 25.2.1, "Dynamically Activating an FCP LUN" . Also specify the Directory holding images . Enter the full directory path from the drive that contains the ISO image files. The following table shows some examples of how to enter this information: Table 22.1. Location of ISO images for different partition types File system Mount point Original path to files Directory to use ext2, ext3, ext4 /home /home/user1/RHEL6.9 /user1/RHEL6.9 If the ISO images are in the root (top-level) directory of a partition, enter a / . If the ISO images are located in a subdirectory of a mounted partition, enter the name of the directory holding the ISO images within that partition. For example, if the partition on which the ISO images is normally mounted as /home/ , and the images are in /home/new/ , you would enter /new/ . Important An entry without a leading slash may cause the installation to fail. Select OK to continue. Proceed with Chapter 23, Installation Phase 3: Installing Using Anaconda .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s1-begininstall-hd-s390
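To make the directory mapping concrete, here is a hedged sketch with hypothetical device and path names that mirror the ext4 row of the table above; it is not part of the original procedure:
# From a shell, mount the partition that holds the ISO images and confirm where they sit
mount /dev/dasda1 /mnt
ls /mnt/user1/RHEL6.9/*.iso
# In the installer dialog you would then select /dev/dasda1 and enter /user1/RHEL6.9 as the directory holding images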
Release Notes for AMQ Streams 1.8 on OpenShift
Release Notes for AMQ Streams 1.8 on OpenShift Red Hat AMQ 2021.q3 For use with AMQ Streams on OpenShift Container Platform
null
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/release_notes_for_amq_streams_1.8_on_openshift/index
17.4. Changing the Names of Subsystem Certificates
17.4. Changing the Names of Subsystem Certificates One alternative to renewing certificates is replacing them with new certificates, meaning that a new certificate is generated with new keys. Generally, a new certificate can be added to the database and the old one deleted, a simple one-to-one swap. This is possible because the individual subsystem servers identify certificates based on their nickname; as long as the certificate nickname remains the same, the server can find the required certificate even if other factors - like the subject name, serial number, or key - are different. However, in some situations, the new certificate may have a new certificate nickname, as well. In that case, the certificate nickname needs to be updated in all of the required settings in the subsystem's CS.cfg configuration file. Important Always restart a subsystem after editing the CS.cfg file. These tables list all of the configuration parameters for each of the subsystem's certificates: Table 17.3, "CA Certificate Nickname Parameters" Table 17.4, "KRA Certificate Nickname Parameters" Table 17.5, "OCSP Certificate Nickname Parameters" Table 17.6, "TKS Certificate Nickname Parameters" Table 17.7, "TPS Nickname Parameters in CS.cfg" Table 17.3. CA Certificate Nickname Parameters CA Signing Certificate ca.cert.signing.nickname ca.signing.cacertnickname ca.signing.certnickname ca.signing.nickname cloning.signing.nickname OCSP Signing Certificate ca.ocsp_signing.cacertnickname ca.ocsp_signing.certnickname ca.cert.ocsp_signing.nickname ca.ocsp_signing.nickname cloning.ocsp_signing.nickname Subsystem Certificate ca.cert.subsystem.nickname ca.subsystem.nickname cloning.subsystem.nickname pkiremove.cert.subsystem.nickname Server Certificate ca.sslserver.nickname ca.cert.sslserver.nickname Audit Signing Certificate ca.audit_signing.nickname ca.cert.audit_signing.nickname cloning.audit_signing.nickname Table 17.4. KRA Certificate Nickname Parameters Transport Certificate cloning.transport.nickname kra.cert.transport.nickname kra.transport.nickname tks.kra_transport_cert_nickname Note that this parameter is in the TKS configuration file. This needs changed in the TKS configuration if the KRA transport certificate nickname changes, even if the TKS certificates all stay the same. Storage Certificate cloning.storage.nickname kra.storage.nickname kra.cert.storage.nickname Server Certificate kra.cert.sslserver.nickname kra.sslserver.nickname Subsystem Certificate cloning.subsystem.nickname kra.cert.subsystem.nickname kra.subsystem.nickname pkiremove.cert.subsystem.nickname Audit Log Signing Certificate cloning.audit_signing.nickname kra.cert.audit_signing.nickname kra.audit_signing.nickname Table 17.5. OCSP Certificate Nickname Parameters OCSP Signing Certificate cloning.signing.nickname ocsp.signing.certnickname ocsp.signing.cacertnickname ocsp.signing.nickname Server Certificate ocsp.cert.sslserver.nickname ocsp.sslserver.nickname Subsystem Certificate cloning.subsystem.nickname ocsp.subsystem.nickname ocsp.cert.subsystem.nickname pkiremove.cert.subsystem Audit Log Signing Certificate cloning.audit_signing.nickname ocsp.audit_signing.nickname ocsp.cert.audit_signing.nickname Table 17.6. 
TKS Certificate Nickname Parameters KRA Transport Certificate [a] tks.kra_transport_cert_nickname Server Certificate tks.cert.sslserver.nickname tks.sslserver.nickname Subsystem Certificate cloning.subsystem.nickname tks.cert.subsystem.nickname tks.subsystem.nickname pkiremove.cert.subsystem.nickname Audit Log Signing Certificate cloning.audit_signing.nickname tks.audit_signing.nickname tks.cert.audit_signing.nickname [a] This needs changed in the TKS configuration if the KRA transport certificate nickname changes, even if the TKS certificates all stay the same. Table 17.7. TPS Nickname Parameters in CS.cfg Server Certificate tps.cert.sslserver.nickname Subsystem Certificate tps.cert.subsystem.nickname selftests.plugin.TPSValidity.nickname selftests.plugin.TPSPresence.nickname pkiremove.cert.subsystem.nickname Audit Log Signing Certificate tps.cert.audit_signing.nickname
null
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/replacing-certs
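As a rough sketch only (the instance name, file path, and nickname below are assumptions, not values from this section), the affected parameters can be located before editing and the subsystem restarted afterwards:
# List every CS.cfg parameter that still references the old certificate nickname (path and nickname are hypothetical)
grep -n "oldCertNickname" /var/lib/pki/pki-tomcat/ca/conf/CS.cfg
# After updating the listed parameters to the new nickname, restart the subsystem instance so the change is picked up
systemctl restart pki-tomcatd@pki-tomcat.service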
Configuring networking services
Configuring networking services Red Hat OpenStack Services on OpenShift 18.0 Configuring the Networking service (neutron) for managing networking traffic in a Red Hat OpenStack Services on OpenShift environment OpenStack Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/configuring_networking_services/index
Chapter 1. Support policy for Red Hat build of OpenJDK
Chapter 1. Support policy for Red Hat build of OpenJDK Red Hat will support select major versions of Red Hat build of OpenJDK in its products. For consistency, these are the same versions that Oracle designates as long-term support (LTS) for the Oracle JDK. A major version of Red Hat build of OpenJDK will be supported for a minimum of six years from the time that version is first introduced. For more information, see the OpenJDK Life Cycle and Support Policy . Note RHEL 6 reached the end of its life in November 2020. Because of this, RHEL 6 is no longer a supported configuration for Red Hat build of OpenJDK.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.15/rn-openjdk-support-policy
17.4. Configuration Examples
17.4. Configuration Examples 17.4.1. Dynamic DNS BIND allows hosts to update their records in DNS and zone files dynamically. This is used when a host computer's IP address changes frequently and the DNS record requires real-time modification. Use the /var/named/dynamic/ directory for zone files that you want updated by dynamic DNS. Files created in or copied into this directory inherit Linux permissions that allow named to write to them, and because such files are labeled with the named_cache_t type, SELinux also allows named to write to them. If a zone file in /var/named/dynamic/ is instead labeled with the named_zone_t type, dynamic DNS updates may fail for a certain period of time, because each update must first be written to a journal before it is merged into the zone file. If the zone file is labeled with the named_zone_t type when named attempts to merge the journal, an error such as the following is logged: Also, the following SELinux denial message is logged: To resolve this labeling issue, use the restorecon utility as root:
[ "named[PID]: dumping master file: rename: /var/named/dynamic/zone-name: permission denied", "setroubleshoot: SELinux is preventing named (named_t) \"unlink\" to zone-name (named_zone_t)", "~]# restorecon -R -v /var/named/dynamic" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/sect-managing_confined_services-bind-configuration_examples
Chapter 6. Working with nodes
Chapter 6. Working with nodes 6.1. Viewing and listing the nodes in your OpenShift Container Platform cluster You can list all the nodes in your cluster to obtain information such as status, age, memory usage, and details about the nodes. When you perform node management operations, the CLI interacts with node objects that are representations of actual node hosts. The master uses the information from node objects to validate nodes with health checks. 6.1.1. About listing all the nodes in a cluster You can get detailed information on the nodes in the cluster. The following command lists all nodes: USD oc get nodes The following example is a cluster with healthy nodes: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master.example.com Ready master 7h v1.23.0 node1.example.com Ready worker 7h v1.23.0 node2.example.com Ready worker 7h v1.23.0 The following example is a cluster with one unhealthy node: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master.example.com Ready master 7h v1.23.0 node1.example.com NotReady,SchedulingDisabled worker 7h v1.23.0 node2.example.com Ready worker 7h v1.23.0 The conditions that trigger a NotReady status are shown later in this section. The -o wide option provides additional information on nodes. USD oc get nodes -o wide Example output NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME master.example.com Ready master 171m v1.23.0 10.0.129.108 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.23.0-30.rhaos4.10.gitf2f339d.el8-dev node1.example.com Ready worker 72m v1.23.0 10.0.129.222 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.23.0-30.rhaos4.10.gitf2f339d.el8-dev node2.example.com Ready worker 164m v1.23.0 10.0.142.150 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.23.0-30.rhaos4.10.gitf2f339d.el8-dev The following command lists information about a single node: USD oc get node <node> For example: USD oc get node node1.example.com Example output NAME STATUS ROLES AGE VERSION node1.example.com Ready worker 7h v1.23.0 The following command provides more detailed information about a specific node, including the reason for the current condition: USD oc describe node <node> For example: USD oc describe node node1.example.com Example output Name: node1.example.com 1 Roles: worker 2 Labels: beta.kubernetes.io/arch=amd64 3 beta.kubernetes.io/instance-type=m4.large beta.kubernetes.io/os=linux failure-domain.beta.kubernetes.io/region=us-east-2 failure-domain.beta.kubernetes.io/zone=us-east-2a kubernetes.io/hostname=ip-10-0-140-16 node-role.kubernetes.io/worker= Annotations: cluster.k8s.io/machine: openshift-machine-api/ahardin-worker-us-east-2a-q5dzc 4 machineconfiguration.openshift.io/currentConfig: worker-309c228e8b3a92e2235edd544c62fea8 machineconfiguration.openshift.io/desiredConfig: worker-309c228e8b3a92e2235edd544c62fea8 machineconfiguration.openshift.io/state: Done volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Wed, 13 Feb 2019 11:05:57 -0500 Taints: <none> 5 Unschedulable: false Conditions: 6 Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- OutOfDisk False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientDisk kubelet has sufficient disk space available MemoryPressure 
False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:07:09 -0500 KubeletReady kubelet is posting ready status Addresses: 7 InternalIP: 10.0.140.16 InternalDNS: ip-10-0-140-16.us-east-2.compute.internal Hostname: ip-10-0-140-16.us-east-2.compute.internal Capacity: 8 attachable-volumes-aws-ebs: 39 cpu: 2 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8172516Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7558116Ki pods: 250 System Info: 9 Machine ID: 63787c9534c24fde9a0cde35c13f1f66 System UUID: EC22BF97-A006-4A58-6AF8-0A38DEEA122A Boot ID: f24ad37d-2594-46b4-8830-7f7555918325 Kernel Version: 3.10.0-957.5.1.el7.x86_64 OS Image: Red Hat Enterprise Linux CoreOS 410.8.20190520.0 (Ootpa) Operating System: linux Architecture: amd64 Container Runtime Version: cri-o://1.16.0-0.6.dev.rhaos4.3.git9ad059b.el8-rc2 Kubelet Version: v1.23.0 Kube-Proxy Version: v1.23.0 PodCIDR: 10.128.4.0/24 ProviderID: aws:///us-east-2a/i-04e87b31dc6b3e171 Non-terminated Pods: (13 in total) 10 Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits --------- ---- ------------ ---------- --------------- ------------- openshift-cluster-node-tuning-operator tuned-hdl5q 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-dns dns-default-l69zr 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-image-registry node-ca-9hmcg 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-ingress router-default-76455c45c-c5ptv 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-machine-config-operator machine-config-daemon-cvqw9 20m (1%) 0 (0%) 50Mi (0%) 0 (0%) openshift-marketplace community-operators-f67fh 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-monitoring alertmanager-main-0 50m (3%) 50m (3%) 210Mi (2%) 10Mi (0%) openshift-monitoring grafana-78765ddcc7-hnjmm 100m (6%) 200m (13%) 100Mi (1%) 200Mi (2%) openshift-monitoring node-exporter-l7q8d 10m (0%) 20m (1%) 20Mi (0%) 40Mi (0%) openshift-monitoring prometheus-adapter-75d769c874-hvb85 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-multus multus-kw8w5 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-sdn ovs-t4dsn 100m (6%) 0 (0%) 300Mi (4%) 0 (0%) openshift-sdn sdn-g79hg 100m (6%) 0 (0%) 200Mi (2%) 0 (0%) Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) 
Resource Requests Limits -------- -------- ------ cpu 380m (25%) 270m (18%) memory 880Mi (11%) 250Mi (3%) attachable-volumes-aws-ebs 0 0 Events: 11 Type Reason Age From Message ---- ------ ---- ---- ------- Normal NodeHasSufficientPID 6d (x5 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 6d kubelet, m01.example.com Updated Node Allocatable limit across pods Normal NodeHasSufficientMemory 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasNoDiskPressure Normal NodeHasSufficientDisk 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientDisk Normal NodeHasSufficientPID 6d kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientPID Normal Starting 6d kubelet, m01.example.com Starting kubelet. #... 1 The name of the node. 2 The role of the node, either master or worker . 3 The labels applied to the node. 4 The annotations applied to the node. 5 The taints applied to the node. 6 The node conditions and status. The conditions stanza lists the Ready , PIDPressure , PIDPressure , MemoryPressure , DiskPressure and OutOfDisk status. These condition are described later in this section. 7 The IP address and hostname of the node. 8 The pod resources and allocatable resources. 9 Information about the node host. 10 The pods on the node. 11 The events reported by the node. Among the information shown for nodes, the following node conditions appear in the output of the commands shown in this section: Table 6.1. Node Conditions Condition Description Ready If true , the node is healthy and ready to accept pods. If false , the node is not healthy and is not accepting pods. If unknown , the node controller has not received a heartbeat from the node for the node-monitor-grace-period (the default is 40 seconds). DiskPressure If true , the disk capacity is low. MemoryPressure If true , the node memory is low. PIDPressure If true , there are too many processes on the node. OutOfDisk If true , the node has insufficient free space on the node for adding new pods. NetworkUnavailable If true , the network for the node is not correctly configured. NotReady If true , one of the underlying components, such as the container runtime or network, is experiencing issues or is not yet configured. SchedulingDisabled Pods cannot be scheduled for placement on the node. 6.1.2. Listing pods on a node in your cluster You can list all the pods on a specific node. Procedure To list all or selected pods on one or more nodes: USD oc describe node <node1> <node2> For example: USD oc describe node ip-10-0-128-218.ec2.internal To list all or selected pods on selected nodes: USD oc describe --selector=<node_selector> USD oc describe node --selector=kubernetes.io/os Or: USD oc describe -l=<pod_selector> USD oc describe node -l node-role.kubernetes.io/worker To list all pods on a specific node, including terminated pods: USD oc get pod --all-namespaces --field-selector=spec.nodeName=<nodename> 6.1.3. Viewing memory and CPU usage statistics on your nodes You can display usage statistics about nodes, which provide the runtime environments for containers. These usage statistics include CPU, memory, and storage consumption. Prerequisites You must have cluster-reader permission to view the usage statistics. 
Metrics must be installed to view the usage statistics. Procedure To view the usage statistics: USD oc adm top nodes Example output NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% ip-10-0-12-143.ec2.compute.internal 1503m 100% 4533Mi 61% ip-10-0-132-16.ec2.compute.internal 76m 5% 1391Mi 18% ip-10-0-140-137.ec2.compute.internal 398m 26% 2473Mi 33% ip-10-0-142-44.ec2.compute.internal 656m 43% 6119Mi 82% ip-10-0-146-165.ec2.compute.internal 188m 12% 3367Mi 45% ip-10-0-19-62.ec2.compute.internal 896m 59% 5754Mi 77% ip-10-0-44-193.ec2.compute.internal 632m 42% 5349Mi 72% To view the usage statistics for nodes with labels: USD oc adm top node --selector='' You must choose the selector (label query) to filter on. Supports = , == , and != . 6.2. Working with nodes As an administrator, you can perform a number of tasks to make your clusters more efficient. 6.2.1. Understanding how to evacuate pods on nodes Evacuating pods allows you to migrate all or selected pods from a given node or nodes. You can only evacuate pods backed by a replication controller. The replication controller creates new pods on other nodes and removes the existing pods from the specified node(s). Bare pods, meaning those not backed by a replication controller, are unaffected by default. You can evacuate a subset of pods by specifying a pod-selector. Pod selectors are based on labels, so all the pods with the specified label will be evacuated. Procedure Mark the nodes unschedulable before performing the pod evacuation. Mark the node as unschedulable: USD oc adm cordon <node1> Example output node/<node1> cordoned Check that the node status is Ready,SchedulingDisabled : USD oc get node <node1> Example output NAME STATUS ROLES AGE VERSION <node1> Ready,SchedulingDisabled worker 1d v1.24.0 Evacuate the pods using one of the following methods: Evacuate all or selected pods on one or more nodes: USD oc adm drain <node1> <node2> [--pod-selector=<pod_selector>] Force the deletion of bare pods using the --force option. When set to true , deletion continues even if there are pods not managed by a replication controller, replica set, job, daemon set, or stateful set: USD oc adm drain <node1> <node2> --force=true Set a period of time in seconds for each pod to terminate gracefully, use --grace-period . If negative, the default value specified in the pod will be used: USD oc adm drain <node1> <node2> --grace-period=-1 Ignore pods managed by daemon sets using the --ignore-daemonsets flag set to true : USD oc adm drain <node1> <node2> --ignore-daemonsets=true Set the length of time to wait before giving up using the --timeout flag. A value of 0 sets an infinite length of time: USD oc adm drain <node1> <node2> --timeout=5s Delete pods even if there are pods using emptyDir volumes by setting the --delete-emptydir-data flag to true . Local data is deleted when the node is drained: USD oc adm drain <node1> <node2> --delete-emptydir-data=true List objects that will be migrated without actually performing the evacuation, using the --dry-run option set to true : USD oc adm drain <node1> <node2> --dry-run=true Instead of specifying specific node names (for example, <node1> <node2> ), you can use the --selector=<node_selector> option to evacuate pods on selected nodes. Mark the node as schedulable when done. USD oc adm uncordon <node1> 6.2.2. Understanding how to update labels on nodes You can update any label on a node. Node labels are not persisted after a node is deleted even if the node is backed up by a Machine. 
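Before the individual commands are shown below, a minimal end-to-end sketch may help: apply a label to a node and then steer a pod to that node with a nodeSelector. The label key disktype, the pod name, and the container image are illustrative assumptions, not values from this document:
# Label the node (the node name is a placeholder)
oc label node node1.example.com disktype=ssd
# Create a pod that can only be scheduled onto nodes carrying that label
cat <<'EOF' | oc apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: ssd-test
spec:
  nodeSelector:
    disktype: ssd
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.9
EOF
# Confirm which node the pod landed on
oc get pod ssd-test -o wide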
Note Any change to a MachineSet object is not applied to existing machines owned by the machine set. For example, labels edited or added to an existing MachineSet object are not propagated to existing machines and nodes associated with the machine set. The following command adds or updates labels on a node: USD oc label node <node> <key_1>=<value_1> ... <key_n>=<value_n> For example: USD oc label nodes webconsole-7f7f6 unhealthy=true Tip You can alternatively apply the following YAML to apply the label: kind: Node apiVersion: v1 metadata: name: webconsole-7f7f6 labels: unhealthy: 'true' #... The following command updates all pods in the namespace: USD oc label pods --all <key_1>=<value_1> For example: USD oc label pods --all status=unhealthy 6.2.3. Understanding how to mark nodes as unschedulable or schedulable By default, healthy nodes with a Ready status are marked as schedulable, which means that you can place new pods on the node. Manually marking a node as unschedulable blocks any new pods from being scheduled on the node. Existing pods on the node are not affected. The following command marks a node or nodes as unschedulable: Example output USD oc adm cordon <node> For example: USD oc adm cordon node1.example.com Example output node/node1.example.com cordoned NAME LABELS STATUS node1.example.com kubernetes.io/hostname=node1.example.com Ready,SchedulingDisabled The following command marks a currently unschedulable node or nodes as schedulable: USD oc adm uncordon <node1> Alternatively, instead of specifying specific node names (for example, <node> ), you can use the --selector=<node_selector> option to mark selected nodes as schedulable or unschedulable. 6.2.4. Deleting nodes 6.2.4.1. Deleting nodes from a cluster When you delete a node using the CLI, the node object is deleted in Kubernetes, but the pods that exist on the node are not deleted. Any bare pods not backed by a replication controller become inaccessible to OpenShift Container Platform. Pods backed by replication controllers are rescheduled to other available nodes. You must delete local manifest pods. Procedure To delete a node from the OpenShift Container Platform cluster, edit the appropriate MachineSet object: Note If you are running cluster on bare metal, you cannot delete a node by editing MachineSet objects. Machine sets are only available when a cluster is integrated with a cloud provider. Instead you must unschedule and drain the node before manually deleting it. View the machine sets that are in the cluster: USD oc get machinesets -n openshift-machine-api The machine sets are listed in the form of <clusterid>-worker-<aws-region-az>. Scale the machine set: USD oc scale --replicas=2 machineset <machineset> -n openshift-machine-api Or: USD oc edit machineset <machineset> -n openshift-machine-api Tip You can alternatively apply the following YAML to scale the machine set: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 2 #... Additional resources For more information on scaling your cluster using a MachineSet, see Manually scaling a MachineSet . 6.2.4.2. Deleting nodes from a bare metal cluster When you delete a node using the CLI, the node object is deleted in Kubernetes, but the pods that exist on the node are not deleted. Any bare pods not backed by a replication controller become inaccessible to OpenShift Container Platform. Pods backed by replication controllers are rescheduled to other available nodes. 
You must delete local manifest pods. Procedure Delete a node from an OpenShift Container Platform cluster running on bare metal by completing the following steps: Mark the node as unschedulable: USD oc adm cordon <node_name> Drain all pods on the node: USD oc adm drain <node_name> --force=true This step might fail if the node is offline or unresponsive. Even if the node does not respond, it might still be running a workload that writes to shared storage. To avoid data corruption, power down the physical hardware before you proceed. Delete the node from the cluster: USD oc delete node <node_name> Although the node object is now deleted from the cluster, it can still rejoin the cluster after reboot or if the kubelet service is restarted. To permanently delete the node and all its data, you must decommission the node . If you powered down the physical hardware, turn it back on so that the node can rejoin the cluster. 6.3. Managing nodes OpenShift Container Platform uses a KubeletConfig custom resource (CR) to manage the configuration of nodes. By creating an instance of a KubeletConfig object, a managed machine config is created to override setting on the node. Note Logging in to remote machines for the purpose of changing their configuration is not supported. 6.3.1. Modifying nodes To make configuration changes to a cluster, or machine pool, you must create a custom resource definition (CRD), or kubeletConfig object. OpenShift Container Platform uses the Machine Config Controller to watch for changes introduced through the CRD to apply the changes to the cluster. Note Because the fields in a kubeletConfig object are passed directly to the kubelet from upstream Kubernetes, the validation of those fields is handled directly by the kubelet itself. Please refer to the relevant Kubernetes documentation for the valid values for these fields. Invalid values in the kubeletConfig object can render cluster nodes unusable. Procedure Obtain the label associated with the static CRD, Machine Config Pool, for the type of node you want to configure. Perform one of the following steps: Check current labels of the desired machine config pool. For example: USD oc get machineconfigpool --show-labels Example output NAME CONFIG UPDATED UPDATING DEGRADED LABELS master rendered-master-e05b81f5ca4db1d249a1bf32f9ec24fd True False False operator.machineconfiguration.openshift.io/required-for-upgrade= worker rendered-worker-f50e78e1bc06d8e82327763145bfcf62 True False False Add a custom label to the desired machine config pool. For example: USD oc label machineconfigpool worker custom-kubelet=enabled Create a kubeletconfig custom resource (CR) for your configuration change. For example: Sample configuration for a custom-config CR apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: custom-config 1 spec: machineConfigPoolSelector: matchLabels: custom-kubelet: enabled 2 kubeletConfig: 3 podsPerCore: 10 maxPods: 250 systemReserved: cpu: 2000m memory: 1Gi #... 1 Assign a name to CR. 2 Specify the label to apply the configuration change, this is the label you added to the machine config pool. 3 Specify the new value(s) you want to change. Create the CR object. USD oc create -f <file-name> For example: USD oc create -f master-kube-config.yaml Most Kubelet Configuration options can be set by the user. 
The following options are not allowed to be overwritten: CgroupDriver ClusterDNS ClusterDomain StaticPodPath Note If a single node contains more than 50 images, pod scheduling might be imbalanced across nodes. This is because the list of images on a node is shortened to 50 by default. You can disable the image limit by editing the KubeletConfig object and setting the value of nodeStatusMaxImages to -1 . 6.3.2. Configuring control plane nodes as schedulable You can configure control plane nodes to be schedulable, meaning that new pods are allowed for placement on the master nodes. By default, control plane nodes are not schedulable. You can set the masters to be schedulable, but must retain the worker nodes. Note You can deploy OpenShift Container Platform with no worker nodes on a bare metal cluster. In this case, the control plane nodes are marked schedulable by default. You can allow or disallow control plane nodes to be schedulable by configuring the mastersSchedulable field. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become worker nodes. Procedure Edit the schedulers.config.openshift.io resource. USD oc edit schedulers.config.openshift.io cluster Configure the mastersSchedulable field. apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: "2019-09-10T03:04:05Z" generation: 1 name: cluster resourceVersion: "433" selfLink: /apis/config.openshift.io/v1/schedulers/cluster uid: a636d30a-d377-11e9-88d4-0a60097bee62 spec: mastersSchedulable: false 1 status: {} #... 1 Set to true to allow control plane nodes to be schedulable, or false to disallow control plane nodes to be schedulable. Save the file to apply the changes. 6.3.3. Setting SELinux booleans OpenShift Container Platform allows you to enable and disable an SELinux boolean on a Red Hat Enterprise Linux CoreOS (RHCOS) node. The following procedure explains how to modify SELinux booleans on nodes using the Machine Config Operator (MCO). This procedure uses container_manage_cgroup as the example boolean. You can modify this value to whichever boolean you need. Prerequisites You have installed the OpenShift CLI (oc). Procedure Create a new YAML file with a MachineConfig object, displayed in the following example: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-worker-setsebool spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Set SELinux booleans Before=kubelet.service [Service] Type=oneshot ExecStart=/sbin/setsebool container_manage_cgroup=on RemainAfterExit=true [Install] WantedBy=multi-user.target graphical.target enabled: true name: setsebool.service #... Create the new MachineConfig object by running the following command: USD oc create -f 99-worker-setsebool.yaml Note Applying any changes to the MachineConfig object causes all affected nodes to gracefully reboot after the change is applied. 6.3.4. Adding kernel arguments to nodes In some special cases, you might want to add kernel arguments to a set of nodes in your cluster. This should only be done with caution and clear understanding of the implications of the arguments you set. Warning Improper use of kernel arguments can result in your systems becoming unbootable. 
Examples of kernel arguments you could set include: enforcing=0 : Configures Security Enhanced Linux (SELinux) to run in permissive mode. In permissive mode, the system acts as if SELinux is enforcing the loaded security policy, including labeling objects and emitting access denial entries in the logs, but it does not actually deny any operations. While not supported for production systems, permissive mode can be helpful for debugging. nosmt : Disables symmetric multithreading (SMT) in the kernel. Multithreading allows multiple logical threads for each CPU. You could consider nosmt in multi-tenant environments to reduce risks from potential cross-thread attacks. By disabling SMT, you essentially choose security over performance. systemd.unified_cgroup_hierarchy : Enables Linux control group version 2 (cgroup v2). cgroup v2 is the version of the kernel control group and offers multiple improvements. Important The OpenShift Container Platform cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time. See Kernel.org kernel parameters for a list and descriptions of kernel arguments. In the following procedure, you create a MachineConfig object that identifies: A set of machines to which you want to add the kernel argument. In this case, machines with a worker role. Kernel arguments that are appended to the end of the existing kernel arguments. A label that indicates where in the list of machine configs the change is applied. Prerequisites Have administrative privilege to a working OpenShift Container Platform cluster. Procedure List existing MachineConfig objects for your OpenShift Container Platform cluster to determine how to label your machine config: USD oc get MachineConfig Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m Create a MachineConfig object file that identifies the kernel argument (for example, 05-worker-kernelarg-selinuxpermissive.yaml ) apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 05-worker-kernelarg-selinuxpermissive 2 spec: kernelArguments: - enforcing=0 3 1 Applies the new kernel argument only to worker nodes. 2 Named to identify where it fits among the machine configs (05) and what it does (adds a kernel argument to configure SELinux permissive mode). 3 Identifies the exact kernel argument as enforcing=0 . 
Create the new machine config: USD oc create -f 05-worker-kernelarg-selinuxpermissive.yaml Check the machine configs to see that the new one was added: USD oc get MachineConfig Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 05-worker-kernelarg-selinuxpermissive 3.2.0 105s 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m Check the nodes: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ip-10-0-136-161.ec2.internal Ready worker 28m v1.23.0 ip-10-0-136-243.ec2.internal Ready master 34m v1.23.0 ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.23.0 ip-10-0-142-249.ec2.internal Ready master 34m v1.23.0 ip-10-0-153-11.ec2.internal Ready worker 28m v1.23.0 ip-10-0-153-150.ec2.internal Ready master 34m v1.23.0 You can see that scheduling on each worker node is disabled as the change is being applied. Check that the kernel argument worked by going to one of the worker nodes and listing the kernel command line arguments (in /proc/cmdline on the host): USD oc debug node/ip-10-0-141-105.ec2.internal Example output Starting pod/ip-10-0-141-105ec2internal-debug ... To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline BOOT_IMAGE=/ostree/rhcos-... console=tty0 console=ttyS0,115200n8 rootflags=defaults,prjquota rw root=UUID=fd0... ostree=/ostree/boot.0/rhcos/16... coreos.oem.id=qemu coreos.oem.id=ec2 ignition.platform.id=ec2 enforcing=0 sh-4.2# exit You should see the enforcing=0 argument added to the other kernel arguments. 6.3.5. Enabling swap memory use on nodes Important Enabling swap memory use on nodes is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can enable swap memory use for OpenShift Container Platform workloads on a per-node basis. Warning Enabling swap memory can negatively impact workload performance and out-of-resource handling. Do not enable swap memory on control plane nodes. To enable swap memory, create a kubeletconfig custom resource (CR) to set the swapbehavior parameter. You can set limited or unlimited swap memory: Limited: Use the LimitedSwap value to limit how much swap memory workloads can use. Any workloads on the node that are not managed by OpenShift Container Platform can still use swap memory. 
The LimitedSwap behavior depends on whether the node is running with Linux control groups version 1 (cgroups v1) or version 2 (cgroups v2) : cgroups v1: OpenShift Container Platform workloads can use any combination of memory and swap, up to the pod's memory limit, if set. cgroups v2: OpenShift Container Platform workloads cannot use swap memory. Unlimited: Use the UnlimitedSwap value to allow workloads to use as much swap memory as they request, up to the system limit. Because the kubelet will not start in the presence of swap memory without this configuration, you must enable swap memory in OpenShift Container Platform before enabling swap memory on the nodes. If there is no swap memory present on a node, enabling swap memory in OpenShift Container Platform has no effect. Prerequisites You have a running OpenShift Container Platform cluster that uses version 4.10 or later. You are logged in to the cluster as a user with administrative privileges. You have enabled the TechPreviewNoUpgrade feature set on the cluster (see Nodes Working with clusters Enabling features using feature gates ). Note Enabling the TechPreviewNoUpgrade feature set cannot be undone and prevents minor version updates. These feature sets are not recommended on production clusters. If cgroups v2 is enabled on a node, you must enable swap accounting on the node, by setting the swapaccount=1 kernel argument. Procedure Apply a custom label to the machine config pool where you want to allow swap memory. USD oc label machineconfigpool worker kubelet-swap=enabled Create a custom resource (CR) to enable and configure swap settings. apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: swap-config spec: machineConfigPoolSelector: matchLabels: kubelet-swap: enabled kubeletConfig: failSwapOn: false 1 memorySwap: swapBehavior: LimitedSwap 2 #... 1 Set to false to enable swap memory use on the associated nodes. Set to true to disable swap memory use. 2 Specify the swap memory behavior. If unspecified, the default is LimitedSwap . Enable swap memory on the machines. 6.3.6. Migrating control plane nodes from one RHOSP host to another You can run a script that moves a control plane node from one Red Hat OpenStack Platform (RHOSP) node to another. Prerequisites The environment variable OS_CLOUD refers to a clouds entry that has administrative credentials in a clouds.yaml file. The environment variable KUBECONFIG refers to a configuration that contains administrative OpenShift Container Platform credentials. Procedure From a command line, run the following script: #!/usr/bin/env bash set -Eeuo pipefail if [ USD# -lt 1 ]; then echo "Usage: 'USD0 node_name'" exit 64 fi # Check for admin OpenStack credentials openstack server list --all-projects >/dev/null || { >&2 echo "The script needs OpenStack admin credentials. Exiting"; exit 77; } # Check for admin OpenShift credentials oc adm top node >/dev/null || { >&2 echo "The script needs OpenShift admin credentials. 
Exiting"; exit 77; } set -x declare -r node_name="USD1" declare server_id server_id="USD(openstack server list --all-projects -f value -c ID -c Name | grep "USDnode_name" | cut -d' ' -f1)" readonly server_id # Drain the node oc adm cordon "USDnode_name" oc adm drain "USDnode_name" --delete-emptydir-data --ignore-daemonsets --force # Power off the server oc debug "node/USD{node_name}" -- chroot /host shutdown -h 1 # Verify the server is shut off until openstack server show "USDserver_id" -f value -c status | grep -q 'SHUTOFF'; do sleep 5; done # Migrate the node openstack server migrate --wait "USDserver_id" # Resize the VM openstack server resize confirm "USDserver_id" # Wait for the resize confirm to finish until openstack server show "USDserver_id" -f value -c status | grep -q 'SHUTOFF'; do sleep 5; done # Restart the VM openstack server start "USDserver_id" # Wait for the node to show up as Ready: until oc get node "USDnode_name" | grep -q "^USD{node_name}[[:space:]]\+Ready"; do sleep 5; done # Uncordon the node oc adm uncordon "USDnode_name" # Wait for cluster operators to stabilize until oc get co -o go-template='statuses: {{ range .items }}{{ range .status.conditions }}{{ if eq .type "Degraded" }}{{ if ne .status "False" }}DEGRADED{{ end }}{{ else if eq .type "Progressing"}}{{ if ne .status "False" }}PROGRESSING{{ end }}{{ else if eq .type "Available"}}{{ if ne .status "True" }}NOTAVAILABLE{{ end }}{{ end }}{{ end }}{{ end }}' | grep -qv '\(DEGRADED\|PROGRESSING\|NOTAVAILABLE\)'; do sleep 5; done If the script completes, the control plane machine is migrated to a new RHOSP node. 6.4. Managing the maximum number of pods per node In OpenShift Container Platform, you can configure the number of pods that can run on a node based on the number of processor cores on the node, a hard limit or both. If you use both options, the lower of the two limits the number of pods on a node. Exceeding these values can result in: Increased CPU utilization by OpenShift Container Platform. Slow pod scheduling. Potential out-of-memory scenarios, depending on the amount of memory in the node. Exhausting the IP address pool. Resource overcommitting, leading to poor user application performance. Note A pod that is holding a single container actually uses two containers. The second container sets up networking prior to the actual container starting. As a result, a node running 10 pods actually has 20 containers running. The podsPerCore parameter limits the number of pods the node can run based on the number of processor cores on the node. For example, if podsPerCore is set to 10 on a node with 4 processor cores, the maximum number of pods allowed on the node is 40. The maxPods parameter limits the number of pods the node can run to a fixed value, regardless of the properties of the node. 6.4.1. Configuring the maximum number of pods per node Two parameters control the maximum number of pods that can be scheduled to a node: podsPerCore and maxPods . If you use both options, the lower of the two limits the number of pods on a node. For example, if podsPerCore is set to 10 on a node with 4 processor cores, the maximum number of pods allowed on the node will be 40. 
Prerequisites Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure by entering the following command: USD oc edit machineconfigpool <name> For example: USD oc edit machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: "2022-11-16T15:34:25Z" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: "" 1 name: worker #... 1 The label appears under Labels. Tip If the label is not present, add a key/value pair such as: Procedure Create a custom resource (CR) for your configuration change. Sample configuration for a max-pods CR apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 2 kubeletConfig: podsPerCore: 10 3 maxPods: 250 4 #... 1 Assign a name to CR. 2 Specify the label from the machine config pool. 3 Specify the number of pods the node can run based on the number of processor cores on the node. 4 Specify the number of pods the node can run to a fixed value, regardless of the properties of the node. Note Setting podsPerCore to 0 disables this limit. In the above example, the default value for podsPerCore is 10 and the default value for maxPods is 250 . This means that unless the node has 25 cores or more, by default, podsPerCore will be the limiting factor. Run the following command to create the CR: USD oc create -f <file_name>.yaml Verification List the MachineConfigPool CRDs to see if the change is applied. The UPDATING column reports True if the change is picked up by the Machine Config Controller: USD oc get machineconfigpools Example output NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False False False worker worker-8cecd1236b33ee3f8a5e False True False Once the change is complete, the UPDATED column reports True . USD oc get machineconfigpools Example output NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False True False worker worker-8cecd1236b33ee3f8a5e True False False 6.5. Using the Node Tuning Operator Learn about the Node Tuning Operator and how you can use it to manage node-level tuning by orchestrating the tuned daemon. The Node Tuning Operator helps you manage node-level tuning by orchestrating the TuneD daemon. The majority of high-performance applications require some level of kernel tuning. The Node Tuning Operator provides a unified management interface to users of node-level sysctls and more flexibility to add custom tuning specified by user needs. The Operator manages the containerized TuneD daemon for OpenShift Container Platform as a Kubernetes daemon set. It ensures the custom tuning specification is passed to all containerized TuneD daemons running in the cluster in the format that the daemons understand. The daemons run on all nodes in the cluster, one per node. Node-level settings applied by the containerized TuneD daemon are rolled back on an event that triggers a profile change or when the containerized TuneD daemon is terminated gracefully by receiving and handling a termination signal. The Node Tuning Operator is part of a standard OpenShift Container Platform installation in version 4.1 and later. 6.5.1. Accessing an example Node Tuning Operator specification Use this process to access an example Node Tuning Operator specification. 
Procedure Run: USD oc get Tuned/default -o yaml -n openshift-cluster-node-tuning-operator The default CR is meant for delivering standard node-level tuning for the OpenShift Container Platform platform and it can only be modified to set the Operator Management state. Any other custom changes to the default CR will be overwritten by the Operator. For custom tuning, create your own Tuned CRs. Newly created CRs will be combined with the default CR and custom tuning applied to OpenShift Container Platform nodes based on node or pod labels and profile priorities. Warning While in certain situations the support for pod labels can be a convenient way of automatically delivering required tuning, this practice is discouraged and strongly advised against, especially in large-scale clusters. The default Tuned CR ships without pod label matching. If a custom profile is created with pod label matching, then the functionality will be enabled at that time. The pod label functionality might be deprecated in future versions of the Node Tuning Operator. 6.5.2. Custom tuning specification The custom resource (CR) for the Operator has two major sections. The first section, profile: , is a list of TuneD profiles and their names. The second, recommend: , defines the profile selection logic. Multiple custom tuning specifications can co-exist as multiple CRs in the Operator's namespace. The existence of new CRs or the deletion of old CRs is detected by the Operator. All existing custom tuning specifications are merged and appropriate objects for the containerized TuneD daemons are updated. Management state The Operator Management state is set by adjusting the default Tuned CR. By default, the Operator is in the Managed state and the spec.managementState field is not present in the default Tuned CR. Valid values for the Operator Management state are as follows: Managed: the Operator will update its operands as configuration resources are updated Unmanaged: the Operator will ignore changes to the configuration resources Removed: the Operator will remove its operands and resources the Operator provisioned Profile data The profile: section lists TuneD profiles and their names. profile: - name: tuned_profile_1 data: | # TuneD profile specification [main] summary=Description of tuned_profile_1 profile [sysctl] net.ipv4.ip_forward=1 # ... other sysctl's or other TuneD daemon plugins supported by the containerized TuneD # ... - name: tuned_profile_n data: | # TuneD profile specification [main] summary=Description of tuned_profile_n profile # tuned_profile_n profile settings Recommended profiles The profile: selection logic is defined by the recommend: section of the CR. The recommend: section is a list of items to recommend the profiles based on a selection criteria. recommend: <recommend-item-1> # ... <recommend-item-n> The individual items of the list: - machineConfigLabels: 1 <mcLabels> 2 match: 3 <match> 4 priority: <priority> 5 profile: <tuned_profile_name> 6 operand: 7 debug: <bool> 8 1 Optional. 2 A dictionary of key/value MachineConfig labels. The keys must be unique. 3 If omitted, profile match is assumed unless a profile with a higher priority matches first or machineConfigLabels is set. 4 An optional list. 5 Profile ordering priority. Lower numbers mean higher priority ( 0 is the highest priority). 6 A TuneD profile to apply on a match. For example tuned_profile_1 . 7 Optional operand configuration. 8 Turn debugging on or off for the TuneD daemon. Options are true for on or false for off. 
The default is false . <match> is an optional list recursively defined as follows: - label: <label_name> 1 value: <label_value> 2 type: <label_type> 3 <match> 4 1 Node or pod label name. 2 Optional node or pod label value. If omitted, the presence of <label_name> is enough to match. 3 Optional object type ( node or pod ). If omitted, node is assumed. 4 An optional <match> list. If <match> is not omitted, all nested <match> sections must also evaluate to true . Otherwise, false is assumed and the profile with the respective <match> section will not be applied or recommended. Therefore, the nesting (child <match> sections) works as logical AND operator. Conversely, if any item of the <match> list matches, the entire <match> list evaluates to true . Therefore, the list acts as logical OR operator. If machineConfigLabels is defined, machine config pool based matching is turned on for the given recommend: list item. <mcLabels> specifies the labels for a machine config. The machine config is created automatically to apply host settings, such as kernel boot parameters, for the profile <tuned_profile_name> . This involves finding all machine config pools with machine config selector matching <mcLabels> and setting the profile <tuned_profile_name> on all nodes that are assigned the found machine config pools. To target nodes that have both master and worker roles, you must use the master role. The list items match and machineConfigLabels are connected by the logical OR operator. The match item is evaluated first in a short-circuit manner. Therefore, if it evaluates to true , the machineConfigLabels item is not considered. Important When using machine config pool based matching, it is advised to group nodes with the same hardware configuration into the same machine config pool. Not following this practice might result in TuneD operands calculating conflicting kernel parameters for two or more nodes sharing the same machine config pool. Example: node or pod label based matching - match: - label: tuned.openshift.io/elasticsearch match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra type: pod priority: 10 profile: openshift-control-plane-es - match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra priority: 20 profile: openshift-control-plane - priority: 30 profile: openshift-node The CR above is translated for the containerized TuneD daemon into its recommend.conf file based on the profile priorities. The profile with the highest priority ( 10 ) is openshift-control-plane-es and, therefore, it is considered first. The containerized TuneD daemon running on a given node looks to see if there is a pod running on the same node with the tuned.openshift.io/elasticsearch label set. If not, the entire <match> section evaluates as false . If there is such a pod with the label, in order for the <match> section to evaluate to true , the node label also needs to be node-role.kubernetes.io/master or node-role.kubernetes.io/infra . If the labels for the profile with priority 10 matched, openshift-control-plane-es profile is applied and no other profile is considered. If the node/pod label combination did not match, the second highest priority profile ( openshift-control-plane ) is considered. This profile is applied if the containerized TuneD pod runs on a node with labels node-role.kubernetes.io/master or node-role.kubernetes.io/infra . Finally, the profile openshift-node has the lowest priority of 30 . 
It lacks the <match> section and, therefore, will always match. It acts as a profile catch-all to set openshift-node profile, if no other profile with higher priority matches on a given node. Example: machine config pool based matching apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-node-custom namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift node profile with an additional kernel parameter include=openshift-node [bootloader] cmdline_openshift_node_custom=+skew_tick=1 name: openshift-node-custom recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: "worker-custom" priority: 20 profile: openshift-node-custom To minimize node reboots, label the target nodes with a label the machine config pool's node selector will match, then create the Tuned CR above and finally create the custom machine config pool itself. 6.5.3. Default profiles set on a cluster The following are the default profiles set on a cluster. apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: default namespace: openshift-cluster-node-tuning-operator spec: recommend: - profile: "openshift-control-plane" priority: 30 match: - label: "node-role.kubernetes.io/master" - label: "node-role.kubernetes.io/infra" - profile: "openshift-node" priority: 40 Starting with OpenShift Container Platform 4.9, all OpenShift TuneD profiles are shipped with the TuneD package. You can use the oc exec command to view the contents of these profiles: USD oc exec USDtuned_pod -n openshift-cluster-node-tuning-operator -- find /usr/lib/tuned/openshift{,-control-plane,-node} -name tuned.conf -exec grep -H ^ {} \; 6.5.4. Supported TuneD daemon plugins Excluding the [main] section, the following TuneD plugins are supported when using custom profiles defined in the profile: section of the Tuned CR: audio cpu disk eeepc_she modules mounts net scheduler scsi_host selinux sysctl sysfs usb video vm bootloader There is some dynamic tuning functionality provided by some of these plugins that is not supported. The following TuneD plugins are currently not supported: script systemd Note The TuneD bootloader plugin only supports Red Hat Enterprise Linux CoreOS (RHCOS) worker nodes. Additional references Available TuneD Plugins Getting Started with TuneD 6.6. Remediating nodes with the Poison Pill Operator You can use the Poison Pill Operator to automatically reboot unhealthy nodes. This remediation strategy minimizes downtime for stateful applications and ReadWriteOnce (RWO) volumes, and restores compute capacity if transient failures occur. 6.6.1. About the Poison Pill Operator The Poison Pill Operator runs on the cluster nodes and reboots nodes that are identified as unhealthy. The Operator uses the MachineHealthCheck controller to detect the health of a node in the cluster. When a node is identified as unhealthy, the MachineHealthCheck resource creates the PoisonPillRemediation custom resource (CR), which triggers the Poison Pill Operator. The Poison Pill Operator minimizes downtime for stateful applications and restores compute capacity if transient failures occur. You can use this Operator regardless of the management interface, such as IPMI or an API to provision a node, and regardless of the cluster installation type, such as installer-provisioned infrastructure or user-provisioned infrastructure. 6.6.1.1. 
Understanding the Poison Pill Operator configuration The Poison Pill Operator creates the PoisonPillConfig CR with the name poison-pill-config in the Poison Pill Operator's namespace. You can edit this CR. However, you cannot create a new CR for the Poison Pill Operator. A change in the PoisonPillConfig CR re-creates the Poison Pill daemon set. The PoisonPillConfig CR resembles the following YAML file: apiVersion: poison-pill.medik8s.io/v1alpha1 kind: PoisonPillConfig metadata: name: poison-pill-config namespace: openshift-operators spec: safeTimeToAssumeNodeRebootedSeconds: 180 1 watchdogFilePath: /test/watchdog1 2 isSoftwareRebootEnabled: true 3 apiServerTimeout: 15s 4 apiCheckInterval: 5s 5 maxApiErrorThreshold: 3 6 peerApiServerTimeout: 5s 7 peerDialTimeout: 5s 8 peerRequestTimeout: 5s 9 peerUpdateInterval: 15m 10 1 Specify the timeout duration for the surviving peer, after which the Operator can assume that an unhealthy node has been rebooted. The Operator automatically calculates the lower limit for this value. However, if different nodes have different watchdog timeouts, you must change this value to a higher value. 2 Specify the file path of the watchdog device in the nodes. If you enter an incorrect path to the watchdog device, the Poison Pill Operator automatically detects the softdog device path. If a watchdog device is unavailable, the PoisonPillConfig CR uses a software reboot. 3 Specify if you want to enable software reboot of the unhealthy nodes. By default, the value of isSoftwareRebootEnabled is set to true . To disable the software reboot, set the parameter value to false . 4 Specify the timeout duration to check connectivity with each API server. When this duration elapses, the Operator starts remediation. 5 Specify the frequency to check connectivity with each API server. 6 Specify a threshold value. After reaching this threshold, the node starts contacting its peers. 7 Specify the timeout duration for the peer to connect the API server. 8 Specify the timeout duration for establishing connection with the peer. 9 Specify the timeout duration to get a response from the peer. 10 Specify the frequency to update peer information, such as IP address. 6.6.1.2. Understanding the Poison Pill Remediation Template configuration The Poison Pill Operator also creates the PoisonPillRemediationTemplate CR with the name poison-pill-default-template in the Poison Pill Operator's namespace. This CR defines the remediation strategy for the nodes. The default remediation strategy is NodeDeletion that removes the node object. In OpenShift Container Platform 4.10, the Poison Pill Operator introduces a new remediation strategy called ResourceDeletion . The ResourceDeletion remediation strategy removes the pods and associated volume attachments on the node rather than the node object. This strategy helps to recover workloads faster. The PoisonPillRemediationTemplate CR resembles the following YAML file: apiVersion: poison-pill.medik8s.io/v1alpha1 kind: PoisonPillRemediationTemplate metadata: creationTimestamp: "2022-03-02T08:02:40Z" generation: 1 name: poison-pill-default-template namespace: openshift-operators resourceVersion: "596469" uid: 5d29e437-c485-48fa-ba9e-0354649afd31 spec: template: spec: remediationStrategy: NodeDeletion 1 1 Specifies the remediation strategy. The default remediation strategy is NodeDeletion . 6.6.1.3. 
About watchdog devices Watchdog devices can be any of the following: Independently powered hardware devices Hardware devices that share power with the hosts they control Virtual devices implemented in software, or softdog Hardware watchdog and softdog devices have electronic or software timers, respectively. These watchdog devices are used to ensure that the machine enters a safe state when an error condition is detected. The cluster is required to repeatedly reset the watchdog timer to prove that it is in a healthy state. This timer might elapse due to fault conditions, such as deadlocks, CPU starvation, and loss of network or disk access. If the timer expires, the watchdog device assumes that a fault has occurred and the device triggers a forced reset of the node. Hardware watchdog devices are more reliable than softdog devices. 6.6.1.3.1. Understanding Poison Pill Operator behavior with watchdog devices The Poison Pill Operator determines the remediation strategy based on the watchdog devices that are present. If a hardware watchdog device is configured and available, the Operator uses it for remediation. If a hardware watchdog device is not configured, the Operator enables and uses a softdog device for remediation. If neither watchdog devices are supported, either by the system or by the configuration, the Operator remediates nodes by using software reboot. Additional resources Configuring a watchdog 6.6.2. Installing the Poison Pill Operator by using the web console You can use the OpenShift Container Platform web console to install the Poison Pill Operator. Prerequisites Log in as a user with cluster-admin privileges. Procedure In the OpenShift Container Platform web console, navigate to Operators OperatorHub . Search for the Poison Pill Operator from the list of available Operators, and then click Install . Keep the default selection of Installation mode and namespace to ensure that the Operator is installed to the openshift-operators namespace. Click Install . Verification To confirm that the installation is successful: Navigate to the Operators Installed Operators page. Check that the Operator is installed in the openshift-operators namespace and its status is Succeeded . If the Operator is not installed successfully: Navigate to the Operators Installed Operators page and inspect the Status column for any errors or failures. Navigate to the Workloads Pods page and check the logs in any pods in the poison-pill-controller-manager project that are reporting issues. 6.6.3. Installing the Poison Pill Operator by using the CLI You can use the OpenShift CLI ( oc ) to install the Poison Pill Operator. You can install the Poison Pill Operator in your own namespace or in the openshift-operators namespace. To install the Operator in your own namespace, follow the steps in the procedure. To install the Operator in the openshift-operators namespace, skip to step 3 of the procedure because the steps to create a new Namespace custom resource (CR) and an OperatorGroup CR are not required. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. 
Procedure Create a Namespace custom resource (CR) for the Poison Pill Operator: Define the Namespace CR and save the YAML file, for example, poison-pill-namespace.yaml : apiVersion: v1 kind: Namespace metadata: name: poison-pill To create the Namespace CR, run the following command: USD oc create -f poison-pill-namespace.yaml Create an OperatorGroup CR: Define the OperatorGroup CR and save the YAML file, for example, poison-pill-operator-group.yaml : apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: poison-pill-manager namespace: poison-pill To create the OperatorGroup CR, run the following command: USD oc create -f poison-pill-operator-group.yaml Create a Subscription CR: Define the Subscription CR and save the YAML file, for example, poison-pill-subscription.yaml : apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: poison-pill-manager namespace: poison-pill 1 spec: channel: stable installPlanApproval: Manual 2 name: poison-pill-manager source: redhat-operators sourceNamespace: openshift-marketplace package: poison-pill-manager 1 Specify the Namespace where you want to install the Poison Pill Operator. To install the Poison Pill Operator in the openshift-operators namespace, specify openshift-operators in the Subscription CR. 2 Set the approval strategy to Manual in case your specified version is superseded by a later version in the catalog. This plan prevents an automatic upgrade to a later version and requires manual approval before the starting CSV can complete the installation. To create the Subscription CR, run the following command: USD oc create -f poison-pill-subscription.yaml Verification Verify that the installation succeeded by inspecting the CSV resource: USD oc get csv -n poison-pill Example output NAME DISPLAY VERSION REPLACES PHASE poison-pill.v.0.2.0 Poison Pill Operator 0.2.0 Succeeded Verify that the Poison Pill Operator is up and running: USD oc get deploy -n poison-pill Example output NAME READY UP-TO-DATE AVAILABLE AGE poison-pill-controller-manager 1/1 1 1 10d Verify that the Poison Pill Operator created the PoisonPillConfig CR: USD oc get PoisonPillConfig -n poison-pill Example output NAME AGE poison-pill-config 10d Verify that each poison pill pod is scheduled and running on each worker node: USD oc get daemonset -n poison-pill Example output NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE poison-pill-ds 2 2 2 2 2 <none> 10d Note This command is unsupported for the control plane nodes. 6.6.4. Configuring machine health checks to use the Poison Pill Operator Use the following procedure to configure the machine health checks to use the Poison Pill Operator as a remediation provider. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. 
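Before creating a custom template, you can optionally confirm that the default template described earlier exists. This check is a suggested sketch only; it assumes that the Operator was installed in the openshift-operators namespace as described above: USD oc get pprt -n openshift-operators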
Procedure Create a PoisonPillRemediationTemplate CR: Define the PoisonPillRemediationTemplate CR: apiVersion: poison-pill.medik8s.io/v1alpha1 kind: PoisonPillRemediationTemplate metadata: namespace: openshift-machine-api name: poisonpillremediationtemplate-sample spec: template: spec: {} To create the PoisonPillRemediationTemplate CR, run the following command: USD oc create -f <ppr-name>.yaml Create or update the MachineHealthCheck CR to point to the PoisonPillRemediationTemplate CR: Define or update the MachineHealthCheck CR: apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: machine-health-check namespace: openshift-machine-api spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: "worker" machine.openshift.io/cluster-api-machine-type: "worker" unhealthyConditions: - type: "Ready" timeout: "300s" status: "False" - type: "Ready" timeout: "300s" status: "Unknown" maxUnhealthy: "40%" nodeStartupTimeout: "10m" remediationTemplate: 1 kind: PoisonPillRemediationTemplate apiVersion: poison-pill.medik8s.io/v1alpha1 name: poisonpillremediationtemplate-sample 1 Specify the details for the remediation template. To create a MachineHealthCheck CR, run the following command: USD oc create -f <file-name>.yaml To update a MachineHealthCheck CR, run the following command: USD oc apply -f <file-name>.yaml 6.6.5. Troubleshooting the Poison Pill Operator 6.6.5.1. General troubleshooting Issue You want to troubleshoot issues with the Poison Pill Operator. Resolution Check the Operator logs. 6.6.5.2. Checking the daemon set Issue The Poison Pill Operator is installed but the daemon set is not available. Resolution Check the Operator logs for errors or warnings. 6.6.5.3. Unsuccessful remediation Issue An unhealthy node was not remediated. Resolution Verify that the PoisonPillRemediation CR was created by running the following command: USD oc get ppr -A If the MachineHealthCheck controller did not create the PoisonPillRemediation CR when the node turned unhealthy, check the logs of the MachineHealthCheck controller. Additionally, ensure that the MachineHealthCheck CR includes the required specification to use the remediation template. If the PoisonPillRemediation CR was created, ensure that its name matches the unhealthy node or the machine object. 6.6.5.4. Daemon set and other Poison Pill Operator resources exist even after uninstalling the Poison Pill Operator Issue The Poison Pill Operator resources, such as the daemon set, configuration CR, and the remediation template CR, exist even after uninstalling the Operator. Resolution To remove the Poison Pill Operator resources, delete the resources by running the following commands for each resource type: USD oc delete ds <poison-pill-ds> -n <namespace> USD oc delete ppc <poison-pill-config> -n <namespace> USD oc delete pprt <poison-pill-remediation-template> -n <namespace> 6.6.6. Gathering data about the Poison Pill Operator To collect debugging information about the Poison Pill Operator, use the must-gather tool. For information about the must-gather image for the Poison Pill Operator, see Gathering data about specific features . 6.6.7. Additional resources The Poison Pill Operator is supported in a restricted network environment. For more information, see Using Operator Lifecycle Manager on restricted networks . Deleting Operators from a cluster 6.7. Deploying node health checks by using the Node Health Check Operator Use the Node Health Check Operator to deploy the NodeHealthCheck controller.
The controller identifies unhealthy nodes and uses the Poison Pill Operator to remediate the unhealthy nodes. Additional resources Remediating nodes with the Poison Pill Operator Important The Node Health Check Operator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 6.7.1. About the Node Health Check Operator The Node Health Check Operator deploys the NodeHealthCheck controller to detect the health of a node in the cluster. The NodeHealthCheck controller creates the NodeHealthCheck custom resource (CR), which defines a set of criteria and thresholds to determine the node's health. The Node Health Check Operator also installs the Poison Pill Operator as a default remediation provider. When the Node Health Check Operator detects an unhealthy node, it creates a remediation CR that triggers the remediation provider. For example, the controller creates the PoisonPillRemediation CR, which triggers the Poison Pill Operator to remediate the unhealthy node. The NodeHealthCheck CR resembles the following YAML file: apiVersion: remediation.medik8s.io/v1alpha1 kind: NodeHealthCheck metadata: name: nodehealthcheck-sample spec: minHealthy: 51% 1 pauseRequests: 2 - <pause-test-cluster> remediationTemplate: 3 apiVersion: poison-pill.medik8s.io/v1alpha1 name: group-x namespace: openshift-operators kind: PoisonPillRemediationTemplate selector: 4 matchExpressions: - key: node-role.kubernetes.io/worker operator: Exists unhealthyConditions: 5 - type: Ready status: "False" duration: 300s 6 - type: Ready status: Unknown duration: 300s 7 1 Specifies the minimum number or percentage of healthy nodes required for a remediation provider to concurrently remediate nodes in the targeted pool. If the number of healthy nodes equals or exceeds the limit set by minHealthy , remediation occurs. The default value is 51%. 2 Prevents any new remediation from starting, while allowing any ongoing remediations to persist. The default value is empty. However, you can enter an array of strings that identify the cause of pausing the remediation. For example, pause-test-cluster . Note During the upgrade process, nodes in the cluster might become temporarily unavailable and get identified as unhealthy. In the case of worker nodes, when the Operator detects that the cluster is upgrading, it stops remediating new unhealthy nodes to prevent such nodes from rebooting. 3 Specifies a remediation template from the remediation provider. For example, from the Poison Pill Operator. 4 Specifies a selector that matches labels or expressions that you want to check. The default value is empty, which selects all nodes. 5 Specifies a list of the conditions that determine whether a node is considered unhealthy. 6 7 Specifies the timeout duration for a node condition. If a condition is met for the duration of the timeout, the node will be remediated. Long timeouts can result in long periods of downtime for a workload on an unhealthy node. 6.7.1.1.
Understanding the Node Health Check Operator workflow When a node is identified as unhealthy, the Node Health Check Operator checks how many other nodes are unhealthy. If the number of healthy nodes exceeds the amount that is specified in the minHealthy field of the NodeHealthCheck CR, the controller creates a remediation CR from the details that are provided in the external remediation template by the remediation provider. After remediation, the kubelet updates the node's health status. When the node turns healthy, the controller deletes the external remediation template. 6.7.1.2. About how node health checks prevent conflicts with machine health checks When both, node health checks and machine health checks are deployed, the node health check avoids conflict with the machine health check. Note OpenShift Container Platform deploys machine-api-termination-handler as the default MachineHealthCheck resource. The following list summarizes the system behavior when node health checks and machine health checks are deployed: If only the default machine health check exists, the node health check continues to identify unhealthy nodes. However, the node health check ignores unhealthy nodes in a Terminating state. The default machine health check handles the unhealthy nodes with a Terminating state. Example log message INFO MHCChecker ignoring unhealthy Node, it is terminating and will be handled by MHC {"NodeName": "node-1.example.com"} If the default machine health check is modified (for example, the unhealthyConditions is Ready ), or if additional machine health checks are created, the node health check is disabled. Example log message When, again, only the default machine health check exists, the node health check is re-enabled. Example log message 6.7.2. Installing the Node Health Check Operator by using the web console You can use the OpenShift Container Platform web console to install the Node Health Check Operator. Prerequisites Log in as a user with cluster-admin privileges. Procedure In the OpenShift Container Platform web console, navigate to Operators OperatorHub . Search for the Node Health Check Operator, then click Install . Keep the default selection of Installation mode and namespace to ensure that the Operator will be installed to the openshift-operators namespace. Click Install . Verification To confirm that the installation is successful: Navigate to the Operators Installed Operators page. Check that the Operator is installed in the openshift-operators namespace and that its status is Succeeded . If the Operator is not installed successfully: Navigate to the Operators Installed Operators page and inspect the Status column for any errors or failures. Navigate to the Workloads Pods page and check the logs in any pods in the openshift-operators project that are reporting issues. 6.7.3. Installing the Node Health Check Operator by using the CLI You can use the OpenShift CLI ( oc ) to install the Node Health Check Operator. To install the Operator in your own namespace, follow the steps in the procedure. To install the Operator in the openshift-operators namespace, skip to step 3 of the procedure because the steps to create a new Namespace custom resource (CR) and an OperatorGroup CR are not required. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. 
Procedure Create a Namespace custom resource (CR) for the Node Health Check Operator: Define the Namespace CR and save the YAML file, for example, node-health-check-namespace.yaml : apiVersion: v1 kind: Namespace metadata: name: node-health-check To create the Namespace CR, run the following command: USD oc create -f node-health-check-namespace.yaml Create an OperatorGroup CR: Define the OperatorGroup CR and save the YAML file, for example, node-health-check-operator-group.yaml : apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: node-health-check-operator namespace: node-health-check To create the OperatorGroup CR, run the following command: USD oc create -f node-health-check-operator-group.yaml Create a Subscription CR: Define the Subscription CR and save the YAML file, for example, node-health-check-subscription.yaml : apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: node-health-check-operator namespace: node-health-check 1 spec: channel: candidate 2 installPlanApproval: Manual 3 name: node-healthcheck-operator source: redhat-operators sourceNamespace: openshift-marketplace package: node-healthcheck-operator 1 Specify the Namespace where you want to install the Node Health Check Operator. To install the Node Health Check Operator in the openshift-operators namespace, specify openshift-operators in the Subscription CR. 2 Specify the channel name for your subscription. To upgrade to the latest version of the Node Health Check Operator, you must manually change the channel name for your subscription from alpha to candidate . 3 Set the approval strategy to Manual in case your specified version is superseded by a later version in the catalog. This plan prevents an automatic upgrade to a later version and requires manual approval before the starting CSV can complete the installation. To create the Subscription CR, run the following command: USD oc create -f node-health-check-subscription.yaml Verification Verify that the installation succeeded by inspecting the CSV resource: USD oc get csv -n openshift-operators Example output NAME DISPLAY VERSION REPLACES PHASE node-healthcheck-operator.v0.2.0. Node Health Check Operator 0.2.0 Succeeded Verify that the Node Health Check Operator is up and running: USD oc get deploy -n openshift-operators Example output NAME READY UP-TO-DATE AVAILABLE AGE node-health-check-operator-controller-manager 1/1 1 1 10d 6.7.4. Gathering data about the Node Health Check Operator To collect debugging information about the Node Health Check Operator, use the must-gather tool. For information about the must-gather image for the Node Health Check Operator, see Gathering data about specific features . 6.7.5. Additional resources Changing the update channel for an Operator The Node Health Check Operator is supported in a restricted network environment. For more information, see Using Operator Lifecycle Manager on restricted networks . 6.8. Using the Node Maintenance Operator to place nodes in maintenance mode You can use the Node Maintenance Operator to place nodes in maintenance mode. This is a standalone version of the Node Maintenance Operator that is independent of OpenShift Virtualization installation. Note If you have installed OpenShift Virtualization, you must use the Node Maintenance Operator that is bundled with it. 6.8.1. About the Node Maintenance Operator You can place nodes into maintenance mode using the oc adm utility, or using NodeMaintenance custom resources (CRs). 
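For a quick, manual alternative to a NodeMaintenance CR, the oc adm utility can be used directly. The following is a minimal sketch; the node name is a placeholder, and the drain flags are the same ones used for graceful node reboots later in this document: USD oc adm cordon <node1> USD oc adm drain <node1> --ignore-daemonsets --delete-emptydir-data --force When the maintenance work is complete, mark the node as schedulable again: USD oc adm uncordon <node1>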
The Node Maintenance Operator watches for new or deleted NodeMaintenance CRs. When a new NodeMaintenance CR is detected, no new workloads are scheduled and the node is cordoned off from the rest of the cluster. All pods that can be evicted are evicted from the node. When a NodeMaintenance CR is deleted, the node that is referenced in the CR is made available for new workloads. Note Using a NodeMaintenance CR for node maintenance tasks achieves the same results as the oc adm cordon and oc adm drain commands using standard OpenShift Container Platform CR processing. 6.8.2. Maintaining bare-metal nodes When you deploy OpenShift Container Platform on bare-metal infrastructure, you must take additional considerations into account compared to deploying on cloud infrastructure. Unlike in cloud environments, where the cluster nodes are considered ephemeral, reprovisioning a bare-metal node requires significantly more time and effort for maintenance tasks. When a bare-metal node fails due to a kernel error or a NIC card hardware failure, workloads on the failed node need to be restarted on another node in the cluster while the problem node is repaired or replaced. Node maintenance mode allows cluster administrators to gracefully turn-off nodes, move workloads to other parts of the cluster, and ensure that workloads do not get interrupted. Detailed progress and node status details are provided during maintenance. 6.8.3. Installing the Node Maintenance Operator You can install the Node Maintenance Operator using the web console or the OpenShift CLI ( oc ). 6.8.3.1. Installing the Node Maintenance Operator by using the web console You can use the OpenShift Container Platform web console to install the Node Maintenance Operator. Prerequisites Log in as a user with cluster-admin privileges. Procedure In the OpenShift Container Platform web console, navigate to Operators OperatorHub . Search for the Node Maintenance Operator, then click Install . Keep the default selection of Installation mode and namespace to ensure that the Operator will be installed to the openshift-operators namespace. Click Install . Verification To confirm that the installation is successful: Navigate to the Operators Installed Operators page. Check that the Operator is installed in the openshift-operators namespace and that its status is Succeeded . If the Operator is not installed successfully: Navigate to the Operators Installed Operators page and inspect the Status column for any errors or failures. Navigate to the Workloads Pods page and check the logs in any pods in the openshift-operators project that are reporting issues. 6.8.3.2. Installing the Node Maintenance Operator by using the CLI You can use the OpenShift CLI ( oc ) to install the Node Maintenance Operator. You can install the Node Maintenance Operator in your own namespace or in the openshift-operators namespace. To install the Operator in your own namespace, follow the steps in the procedure. To install the Operator in the openshift-operators namespace, skip to step 3 of the procedure because the steps to create a new Namespace custom resource (CR) and an OperatorGroup CR are not required. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. 
Procedure Create a Namespace CR for the Node Maintenance Operator: Define the Namespace CR and save the YAML file, for example, node-maintenance-namespace.yaml : apiVersion: v1 kind: Namespace metadata: name: nmo-test To create the Namespace CR, run the following command: USD oc create -f node-maintenance-namespace.yaml Create an OperatorGroup CR: Define the OperatorGroup CR and save the YAML file, for example, node-maintenance-operator-group.yaml : apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: node-maintenance-operator namespace: nmo-test To create the OperatorGroup CR, run the following command: USD oc create -f node-maintenance-operator-group.yaml Create a Subscription CR: Define the Subscription CR and save the YAML file, for example, node-maintenance-subscription.yaml : apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: node-maintenance-operator namespace: nmo-test 1 spec: channel: stable installPlanApproval: Automatic name: node-maintenance-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: node-maintenance-operator.v4.10.0 1 Specify the Namespace where you want to install the Node Maintenance Operator. Important To install the Node Maintenance Operator in the openshift-operators namespace, specify openshift-operators in the Subscription CR. To create the Subscription CR, run the following command: USD oc create -f node-maintenance-subscription.yaml Verification Verify that the installation succeeded by inspecting the CSV resource: USD oc get csv -n openshift-operators Example output NAME DISPLAY VERSION REPLACES PHASE node-maintenance-operator.v4.10 Node Maintenance Operator 4.10 Succeeded Verify that the Node Maintenance Operator is running: USD oc get deploy -n openshift-operators Example output NAME READY UP-TO-DATE AVAILABLE AGE node-maintenance-operator-controller-manager 1/1 1 1 10d The Node Maintenance Operator is supported in a restricted network environment. For more information, see Using Operator Lifecycle Manager on restricted networks . 6.8.4. Setting a node to maintenance mode You can place a node into maintenance mode from the web console or from the CLI by using a NodeMaintenance CR. 6.8.4.1. Setting a node to maintenance mode by using the web console To set a node to maintenance mode, you can create a NodeMaintenance custom resource (CR) by using the web console. Prerequisites Log in as a user with cluster-admin privileges. Install the Node Maintenance Operator from the OperatorHub . Procedure From the Administrator perspective in the web console, navigate to Operators Installed Operators . Select the Node Maintenance Operator from the list of Operators. In the Node Maintenance tab, click Create NodeMaintenance . In the Create NodeMaintenance page, select the Form view or the YAML view to configure the NodeMaintenance CR. To apply the NodeMaintenance CR that you have configured, click Create . Verification In the Node Maintenance tab, inspect the Status column and verify that its status is Succeeded . 6.8.4.2. Setting a node to maintenance mode by using the CLI You can put a node into maintenance mode with a NodeMaintenance custom resource (CR). When you apply a NodeMaintenance CR, all allowed pods are evicted and the node is rendered unschedulable. Evicted pods are queued to be moved to another node in the cluster. Prerequisites Install the OpenShift Container Platform CLI oc . Log in to the cluster as a user with cluster-admin privileges.
Procedure Create the following NodeMaintenance CR, and save the file as nodemaintenance-cr.yaml : apiVersion: nodemaintenance.medik8s.io/v1beta1 kind: NodeMaintenance metadata: name: nodemaintenance-cr 1 spec: nodeName: node-1.example.com 2 reason: "NIC replacement" 3 1 The name of the node maintenance CR. 2 The name of the node to be put into maintenance mode. 3 A plain text description of the reason for maintenance. Apply the node maintenance CR by running the following command: USD oc apply -f nodemaintenance-cr.yaml Check the progress of the maintenance task by running the following command, replacing <node-name> with the name of your node; for example, node-1.example.com : USD oc describe node node-1.example.com Example output Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal NodeNotSchedulable 61m kubelet Node node-1.example.com status is now: NodeNotSchedulable 6.8.4.2.1. Checking status of current NodeMaintenance CR tasks You can check the status of current NodeMaintenance CR tasks. Prerequisites Install the OpenShift Container Platform CLI oc . Log in as a user with cluster-admin privileges. Procedure Check the status of current node maintenance tasks, for example the NodeMaintenance CR or nm object, by running the following command: USD oc get nm -o yaml Example output apiVersion: v1 items: - apiVersion: nodemaintenance.medik8s.io/v1beta1 kind: NodeMaintenance metadata: ... spec: nodeName: node-1.example.com reason: Node maintenance status: evictionPods: 3 1 lastError: "Last failure message" 2 phase: Succeeded totalpods: 5 3 ... 1 The number of pods scheduled for eviction. 2 The latest eviction error, if any. 3 The total number of pods before the node entered maintenance mode. 6.8.5. Resuming a node from maintenance mode You can resume a node from maintenance mode from the CLI or by using a NodeMaintenance CR. Resuming a node brings it out of maintenance mode and makes it schedulable again. 6.8.5.1. Resuming a node from maintenance mode by using the web console To resume a node from maintenance mode, you can delete a NodeMaintenance custom resource (CR) by using the web console. Prerequisites Log in as a user with cluster-admin privileges. Install the Node Maintenance Operator from the OperatorHub . Procedure From the Administrator perspective in the web console, navigate to Operators Installed Operators . Select the Node Maintenance Operator from the list of Operators. In the Node Maintenance tab, select the NodeMaintenance CR that you want to delete. Click the Options menu at the end of the node and select Delete NodeMaintenance . Verification In the OpenShift Container Platform console, click Compute Nodes . Inspect the Status column of the node for which you deleted the NodeMaintenance CR and verify that its status is Ready . 6.8.5.2. Resuming a node from maintenance mode by using the CLI You can resume a node from maintenance mode that was initiated with a NodeMaintenance CR by deleting the NodeMaintenance CR. Prerequisites Install the OpenShift Container Platform CLI oc . Log in to the cluster as a user with cluster-admin privileges. Procedure When your node maintenance task is complete, delete the active NodeMaintenance CR: USD oc delete -f nodemaintenance-cr.yaml Example output nodemaintenance.nodemaintenance.medik8s.io "maintenance-example" deleted 6.8.6. Gathering data about the Node Maintenance Operator To collect debugging information about the Node Maintenance Operator, use the must-gather tool. 
For information about the must-gather image for the Node Maintenance Operator, see Gathering data about specific features . 6.8.7. Additional resources Gathering data about your cluster Understanding how to evacuate pods on nodes Understanding how to mark nodes as unschedulable or schedulable 6.9. Understanding node rebooting To reboot a node without causing an outage for applications running on the platform, it is important to first evacuate the pods. For pods that are made highly available by the routing tier, nothing else needs to be done. For other pods needing storage, typically databases, it is critical to ensure that they can remain in operation with one pod temporarily going offline. While implementing resiliency for stateful pods is different for each application, in all cases it is important to configure the scheduler to use node anti-affinity to ensure that the pods are properly spread across available nodes. Another challenge is how to handle nodes that are running critical infrastructure such as the router or the registry. The same node evacuation process applies, though it is important to understand certain edge cases. 6.9.1. About rebooting nodes running critical infrastructure When rebooting nodes that host critical OpenShift Container Platform infrastructure components, such as router pods, registry pods, and monitoring pods, ensure that there are at least three nodes available to run these components. The following scenario demonstrates how service interruptions can occur with applications running on OpenShift Container Platform when only two nodes are available: Node A is marked unschedulable and all pods are evacuated. The registry pod running on that node is now redeployed on node B. Node B is now running both registry pods. Node B is now marked unschedulable and is evacuated. The service exposing the two pod endpoints on node B loses all endpoints, for a brief period of time, until they are redeployed to node A. When using three nodes for infrastructure components, this process does not result in a service disruption. However, due to pod scheduling, the last node that is evacuated and brought back into rotation does not have a registry pod. One of the other nodes has two registry pods. To schedule the third registry pod on the last node, use pod anti-affinity to prevent the scheduler from locating two registry pods on the same node. Additional information For more information on pod anti-affinity, see Placing pods relative to other pods using affinity and anti-affinity rules . 6.9.2. Rebooting a node using pod anti-affinity Pod anti-affinity is slightly different than node anti-affinity. Node anti-affinity can be violated if there are no other suitable locations to deploy a pod. Pod anti-affinity can be set to either required or preferred. With this in place, if only two infrastructure nodes are available and one is rebooted, the container image registry pod is prevented from running on the other node. oc get pods reports the pod as unready until a suitable node is available. Once a node is available and all pods are back in ready state, the node can be restarted. 
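As a practical check before restarting the next node, you can watch the registry pods until they report Ready again. This is a hedged example; the namespace shown is the usual location of the integrated image registry, so adjust it if your registry runs elsewhere: USD oc get pods -n openshift-image-registry -o wide The -o wide output also shows which node each pod is running on, which makes it easy to confirm that the anti-affinity rule spread the pods across nodes.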
Procedure To reboot a node using pod anti-affinity: Edit the node specification to configure pod anti-affinity: apiVersion: v1 kind: Pod metadata: name: with-pod-antiaffinity spec: affinity: podAntiAffinity: 1 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 3 podAffinityTerm: labelSelector: matchExpressions: - key: registry 4 operator: In 5 values: - default topologyKey: kubernetes.io/hostname #... 1 Stanza to configure pod anti-affinity. 2 Defines a preferred rule. 3 Specifies a weight for a preferred rule. The node with the highest weight is preferred. 4 Description of the pod label that determines when the anti-affinity rule applies. Specify a key and value for the label. 5 The operator represents the relationship between the label on the existing pod and the set of values in the matchExpression parameters in the specification for the new pod. Can be In , NotIn , Exists , or DoesNotExist . This example assumes the container image registry pod has a label of registry=default . Pod anti-affinity can use any Kubernetes match expression. Enable the MatchInterPodAffinity scheduler predicate in the scheduling policy file. Perform a graceful restart of the node. 6.9.3. Understanding how to reboot nodes running routers In most cases, a pod running an OpenShift Container Platform router exposes a host port. The PodFitsPorts scheduler predicate ensures that no router pods using the same port can run on the same node, and pod anti-affinity is achieved. If the routers are relying on IP failover for high availability, there is nothing else that is needed. For router pods relying on an external service such as AWS Elastic Load Balancing for high availability, it is that service's responsibility to react to router pod restarts. In rare cases, a router pod may not have a host port configured. In those cases, it is important to follow the recommended restart process for infrastructure nodes. 6.9.4. Rebooting a node gracefully Before rebooting a node, it is recommended to backup etcd data to avoid any data loss on the node. Note For single-node OpenShift clusters that require users to perform the oc login command rather than having the certificates in kubeconfig file to manage the cluster, the oc adm commands might not be available after cordoning and draining the node. This is because the openshift-oauth-apiserver pod is not running due to the cordon. You can use SSH to access the nodes as indicated in the following procedure. In a single-node OpenShift cluster, pods cannot be rescheduled when cordoning and draining. However, doing so gives the pods, especially your workload pods, time to properly stop and release associated resources. Procedure To perform a graceful restart of a node: Mark the node as unschedulable: USD oc adm cordon <node1> Drain the node to remove all the running pods: USD oc adm drain <node1> --ignore-daemonsets --delete-emptydir-data --force You might receive errors that pods associated with custom pod disruption budgets (PDB) cannot be evicted. Example error error when evicting pods/"rails-postgresql-example-1-72v2w" -n "rails" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. 
In this case, run the drain command again, adding the --disable-eviction flag, which bypasses the PDB checks: USD oc adm drain <node1> --ignore-daemonsets --delete-emptydir-data --force --disable-eviction Access the node in debug mode: USD oc debug node/<node1> Change your root directory to /host : USD chroot /host Restart the node: USD systemctl reboot In a moment, the node enters the NotReady state. Note With some single-node OpenShift clusters, the oc commands might not be available after you cordon and drain the node because the openshift-oauth-apiserver pod is not running. You can use SSH to connect to the node and perform the reboot. USD ssh core@<master-node>.<cluster_name>.<base_domain> USD sudo systemctl reboot After the reboot is complete, mark the node as schedulable by running the following command: USD oc adm uncordon <node1> Note With some single-node OpenShift clusters, the oc commands might not be available after you cordon and drain the node because the openshift-oauth-apiserver pod is not running. You can use SSH to connect to the node and uncordon it. USD ssh core@<target_node> USD sudo oc adm uncordon <node> --kubeconfig /etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost.kubeconfig Verify that the node is ready: USD oc get node <node1> Example output NAME STATUS ROLES AGE VERSION <node1> Ready worker 6d22h v1.18.3+b0068a8 Additional information For information on etcd data backup, see Backing up etcd data . 6.10. Freeing node resources using garbage collection As an administrator, you can use OpenShift Container Platform to ensure that your nodes are running efficiently by freeing up resources through garbage collection. The OpenShift Container Platform node performs two types of garbage collection: Container garbage collection: Removes terminated containers. Image garbage collection: Removes images not referenced by any running pods. 6.10.1. Understanding how terminated containers are removed through garbage collection Container garbage collection removes terminated containers by using eviction thresholds. When eviction thresholds are set for garbage collection, the node tries to keep any container for any pod accessible from the API. If the pod has been deleted, the containers will be as well. Containers are preserved as long as the pod is not deleted and the eviction threshold is not reached. If the node is under disk pressure, it will remove containers and their logs will no longer be accessible using oc logs . eviction-soft - A soft eviction threshold pairs an eviction threshold with a required administrator-specified grace period. eviction-hard - A hard eviction threshold has no grace period, and if observed, OpenShift Container Platform takes immediate action. The following table lists the eviction thresholds: Table 6.2. Variables for configuring container garbage collection Node condition Eviction signal Description MemoryPressure memory.available The available memory on the node. DiskPressure nodefs.available nodefs.inodesFree imagefs.available imagefs.inodesFree The available disk space or inodes on the node root file system, nodefs , or image file system, imagefs . Note For evictionHard you must specify all of these parameters. If you do not specify all parameters, only the specified parameters are applied and the garbage collection will not function properly.
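As a minimal sketch of that note, an evictionHard stanza in a KubeletConfig CR lists every signal from the table; the threshold values here are illustrative only and match the complete, annotated example shown later in "Configuring garbage collection for containers and images": evictionHard: memory.available: "200Mi" nodefs.available: "5%" nodefs.inodesFree: "4%" imagefs.available: "10%" imagefs.inodesFree: "5%"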
If a node is oscillating above and below a soft eviction threshold, but not exceeding its associated grace period, the corresponding node would constantly oscillate between true and false . As a consequence, the scheduler could make poor scheduling decisions. To protect against this oscillation, use the eviction-pressure-transition-period flag to control how long OpenShift Container Platform must wait before transitioning out of a pressure condition. OpenShift Container Platform will not set an eviction threshold as being met for the specified pressure condition for the period specified before toggling the condition back to false. 6.10.2. Understanding how images are removed through garbage collection Image garbage collection removes images that are not referenced by any running pods. OpenShift Container Platform determines which images to remove from a node based on the disk usage that is reported by cAdvisor . The policy for image garbage collection is based on two conditions: The percent of disk usage (expressed as an integer) which triggers image garbage collection. The default is 85 . The percent of disk usage (expressed as an integer) that image garbage collection attempts to free. The default is 80 . For image garbage collection, you can modify any of the following variables using a custom resource. Table 6.3. Variables for configuring image garbage collection Setting Description imageMinimumGCAge The minimum age for an unused image before the image is removed by garbage collection. The default is 2m . imageGCHighThresholdPercent The percent of disk usage, expressed as an integer, which triggers image garbage collection. The default is 85 . imageGCLowThresholdPercent The percent of disk usage, expressed as an integer, that image garbage collection attempts to free. The default is 80 . Two lists of images are retrieved in each garbage collector run: A list of images currently running in at least one pod. A list of images available on a host. As new containers are run, new images appear. All images are marked with a time stamp. If the image is running (the first list above) or is newly detected (the second list above), it is marked with the current time. The remaining images are already marked from the previous runs. All images are then sorted by the time stamp. Once the collection starts, the oldest images get deleted first until the stopping criterion is met. 6.10.3. Configuring garbage collection for containers and images As an administrator, you can configure how OpenShift Container Platform performs garbage collection by creating a kubeletConfig object for each machine config pool. Note OpenShift Container Platform supports only one kubeletConfig object for each machine config pool. You can configure any combination of the following: Soft eviction for containers Hard eviction for containers Eviction for images Container garbage collection removes terminated containers. Image garbage collection removes images that are not referenced by any running pods. Prerequisites Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure by entering the following command: USD oc edit machineconfigpool <name> For example: USD oc edit machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: "2022-11-16T15:34:25Z" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: "" 1 name: worker #... 1 The label appears under Labels.
Tip If the label is not present, add a key/value pair such as: Procedure Create a custom resource (CR) for your configuration change. Important If there is one file system, or if /var/lib/kubelet and /var/lib/containers/ are in the same file system, the settings with the highest values trigger evictions, as those are met first. The file system triggers the eviction. Sample configuration for a container garbage collection CR: apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: worker-kubeconfig 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 2 kubeletConfig: evictionSoft: 3 memory.available: "500Mi" 4 nodefs.available: "10%" nodefs.inodesFree: "5%" imagefs.available: "15%" imagefs.inodesFree: "10%" evictionSoftGracePeriod: 5 memory.available: "1m30s" nodefs.available: "1m30s" nodefs.inodesFree: "1m30s" imagefs.available: "1m30s" imagefs.inodesFree: "1m30s" evictionHard: 6 memory.available: "200Mi" nodefs.available: "5%" nodefs.inodesFree: "4%" imagefs.available: "10%" imagefs.inodesFree: "5%" evictionPressureTransitionPeriod: 0s 7 imageMinimumGCAge: 5m 8 imageGCHighThresholdPercent: 80 9 imageGCLowThresholdPercent: 75 10 #... 1 Name for the object. 2 Specify the label from the machine config pool. 3 For container garbage collection: Type of eviction: evictionSoft or evictionHard . 4 For container garbage collection: Eviction thresholds based on a specific eviction trigger signal. 5 For container garbage collection: Grace periods for the soft eviction. This parameter does not apply to eviction-hard . 6 For container garbage collection: Eviction thresholds based on a specific eviction trigger signal. For evictionHard you must specify all of these parameters. If you do not specify all parameters, only the specified parameters are applied and the garbage collection will not function properly. 7 For container garbage collection: The duration to wait before transitioning out of an eviction pressure condition. 8 For image garbage collection: The minimum age for an unused image before the image is removed by garbage collection. 9 For image garbage collection: The percent of disk usage (expressed as an integer) that triggers image garbage collection. 10 For image garbage collection: The percent of disk usage (expressed as an integer) that image garbage collection attempts to free. Run the following command to create the CR: USD oc create -f <file_name>.yaml For example: USD oc create -f gc-container.yaml Example output kubeletconfig.machineconfiguration.openshift.io/gc-container created Verification Verify that garbage collection is active by entering the following command. The Machine Config Pool you specified in the custom resource appears with UPDATING as 'true` until the change is fully implemented: USD oc get machineconfigpool Example output NAME CONFIG UPDATED UPDATING master rendered-master-546383f80705bd5aeaba93 True False worker rendered-worker-b4c51bb33ccaae6fc4a6a5 False True 6.11. Allocating resources for nodes in an OpenShift Container Platform cluster To provide more reliable scheduling and minimize node resource overcommitment, reserve a portion of the CPU and memory resources for use by the underlying node components, such as kubelet and kube-proxy , and the remaining system components, such as sshd and NetworkManager . By specifying the resources to reserve, you provide the scheduler with more information about the remaining CPU and memory resources that a node has available for use by pods. 
You can allow OpenShift Container Platform to automatically determine the optimal system-reserved CPU and memory resources for your nodes or you can manually determine and set the best resources for your nodes. Important To manually set resource values, you must use a kubelet config CR. You cannot use a machine config CR. 6.11.1. Understanding how to allocate resources for nodes CPU and memory resources reserved for node components in OpenShift Container Platform are based on two node settings: Setting Description kube-reserved This setting is not used with OpenShift Container Platform. Add the CPU and memory resources that you planned to reserve to the system-reserved setting. system-reserved This setting identifies the resources to reserve for the node components and system components, such as CRI-O and Kubelet. The default settings depend on the OpenShift Container Platform and Machine Config Operator versions. Confirm the default systemReserved parameter on the machine-config-operator repository. If a flag is not set, the defaults are used. If none of the flags are set, the allocated resource is set to the node's capacity as it was before the introduction of allocatable resources. Note Any CPUs specifically reserved using the reservedSystemCPUs parameter are not available for allocation using kube-reserved or system-reserved . 6.11.1.1. How OpenShift Container Platform computes allocated resources An allocated amount of a resource is computed based on the following formula: [Allocatable] = [Node Capacity] - [system-reserved] - [Hard-Eviction-Thresholds] Note The withholding of Hard-Eviction-Thresholds from Allocatable improves system reliability because the value for Allocatable is enforced for pods at the node level. If Allocatable is negative, it is set to 0 . Each node reports the system resources that are used by the container runtime and kubelet. To simplify configuring the system-reserved parameter, view the resource use for the node by using the node summary API. The node summary is available at /api/v1/nodes/<node>/proxy/stats/summary . 6.11.1.2. How nodes enforce resource constraints The node is able to limit the total amount of resources that pods can consume based on the configured allocatable value. This feature significantly improves the reliability of the node by preventing pods from using CPU and memory resources that are needed by system services such as the container runtime and node agent. To improve node reliability, administrators should reserve resources based on a target for resource use. The node enforces resource constraints by using a new cgroup hierarchy that enforces quality of service. All pods are launched in a dedicated cgroup hierarchy that is separate from system daemons. Administrators should treat system daemons similar to pods that have a guaranteed quality of service. System daemons can burst within their bounding control groups and this behavior must be managed as part of cluster deployments. Reserve CPU and memory resources for system daemons by specifying the amount of CPU and memory resources in system-reserved . Enforcing system-reserved limits can prevent critical system services from receiving CPU and memory resources. As a result, a critical system service can be ended by the out-of-memory killer. The recommendation is to enforce system-reserved only if you have profiled the nodes exhaustively to determine precise estimates and you are confident that critical system services can recover if any process in that group is ended by the out-of-memory killer. 6.11.1.3.
Understanding Eviction Thresholds If a node is under memory pressure, it can impact the entire node and all pods running on the node. For example, a system daemon that uses more than its reserved amount of memory can trigger an out-of-memory event. To avoid or reduce the probability of system out-of-memory events, the node provides out-of-resource handling. You can reserve some memory using the --eviction-hard flag. The node attempts to evict pods whenever memory availability on the node drops below the absolute value or percentage. If system daemons do not exist on a node, pods are limited to the memory capacity - eviction-hard . For this reason, resources set aside as a buffer for eviction before reaching out of memory conditions are not available for pods. The following is an example to illustrate the impact of node allocatable for memory: Node capacity is 32Gi --system-reserved is 3Gi --eviction-hard is set to 100Mi . For this node, the effective node allocatable value is 28.9Gi . If the node and system components use all their reservation, the memory available for pods is 28.9Gi , and kubelet evicts pods when it exceeds this threshold. If you enforce node allocatable, 28.9Gi , with top-level cgroups, then pods can never exceed 28.9Gi . Evictions are not performed unless system daemons consume more than 3.1Gi of memory. If system daemons do not use up all their reservation, with the above example, pods would face memcg OOM kills from their bounding cgroup before node evictions kick in. To better enforce QoS under this situation, the node applies the hard eviction thresholds to the top-level cgroup for all pods to be Node Allocatable + Eviction Hard Thresholds . If system daemons do not use up all their reservation, the node will evict pods whenever they consume more than 28.9Gi of memory. If eviction does not occur in time, a pod will be OOM killed if pods consume 29Gi of memory. 6.11.1.4. How the scheduler determines resource availability The scheduler uses the value of node.Status.Allocatable instead of node.Status.Capacity to decide if a node will become a candidate for pod scheduling. By default, the node will report its machine capacity as fully schedulable by the cluster. 6.11.2. Automatically allocating resources for nodes OpenShift Container Platform can automatically determine the optimal system-reserved CPU and memory resources for nodes associated with a specific machine config pool and update the nodes with those values when the nodes start. By default, the system-reserved CPU is 500m and system-reserved memory is 1Gi . To automatically determine and allocate the system-reserved resources on nodes, create a KubeletConfig custom resource (CR) to set the autoSizingReserved: true parameter. A script on each node calculates the optimal values for the respective reserved resources based on the installed CPU and memory capacity on each node. The script takes into account that increased capacity requires a corresponding increase in the reserved resources. Automatically determining the optimal system-reserved settings ensures that your cluster is running efficiently and prevents node failure due to resource starvation of system components, such as CRI-O and kubelet, without your needing to manually calculate and update the values. This feature is disabled by default. 
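Because the scheduler works from node.Status.Allocatable rather than node.Status.Capacity, comparing the two fields is a quick way to see how much of a node is set aside, both before and after you change the reserved values. The following commands are a suggested sketch; the node name is a placeholder: USD oc get node <node_name> -o jsonpath='{.status.capacity}{"\n"}{.status.allocatable}{"\n"}' The node summary API mentioned earlier can be queried in a similar way to see the actual resource use by system components: USD oc get --raw /api/v1/nodes/<node_name>/proxy/stats/summary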
Prerequisites Obtain the label associated with the static MachineConfigPool object for the type of node you want to configure by entering the following command: USD oc edit machineconfigpool <name> For example: USD oc edit machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: "2022-11-16T15:34:25Z" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: "" 1 name: worker #... 1 The label appears under Labels . Tip If an appropriate label is not present, add a key/value pair such as: Procedure Create a custom resource (CR) for your configuration change: Sample configuration for a resource allocation CR apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: dynamic-node 1 spec: autoSizingReserved: true 2 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 3 #... 1 Assign a name to the CR. 2 Add the autoSizingReserved parameter set to true to allow OpenShift Container Platform to automatically determine and allocate the system-reserved resources on the nodes associated with the specified label. To disable automatic allocation on those nodes, set this parameter to false . 3 Specify the label from the machine config pool that you configured in the "Prerequisites" section. You can choose any desired labels for the machine config pool, such as custom-kubelet: small-pods , or the default label, pools.operator.machineconfiguration.openshift.io/worker: "" . The example enables automatic resource allocation on all worker nodes. OpenShift Container Platform drains the nodes, applies the kubelet config, and restarts the nodes. Create the CR by entering the following command: USD oc create -f <file_name>.yaml Verification Log in to a node you configured by entering the following command: USD oc debug node/<node_name> Set /host as the root directory within the debug shell: # chroot /host View the /etc/node-sizing.env file: Example output SYSTEM_RESERVED_MEMORY=3Gi SYSTEM_RESERVED_CPU=0.08 The kubelet uses the system-reserved values in the /etc/node-sizing.env file. In the example, the worker nodes are allocated 0.08 CPU and 3 Gi of memory. It can take several minutes for the optimal values to appear. 6.11.3. Manually allocating resources for nodes OpenShift Container Platform supports the CPU and memory resource types for allocation. The ephemeral-storage resource type is also supported. For the cpu type, you specify the resource quantity in units of cores, such as 200m , 0.5 , or 1 . For memory and ephemeral-storage , you specify the resource quantity in units of bytes, such as 200Ki , 50Mi , or 5Gi . By default, the system-reserved CPU is 500m and system-reserved memory is 1Gi . As an administrator, you can set these values by using a kubelet config custom resource (CR) through a set of <resource_type>=<resource_quantity> pairs (for example, cpu=200m,memory=512Mi ). Important You must use a kubelet config CR to manually set resource values. You cannot use a machine config CR. For details on the recommended system-reserved values, refer to the recommended system-reserved values.
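After the CR described in the following procedure is applied and the nodes have restarted, the reserved values become part of the rendered kubelet configuration on each node. A hedged way to spot-check this, reusing the debug-node pattern used elsewhere in this document (the node name is a placeholder): USD oc debug node/<node_name> -- chroot /host grep -A 3 systemReserved /etc/kubernetes/kubelet.conf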
Prerequisites Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure by entering the following command: USD oc edit machineconfigpool <name> For example: USD oc edit machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: "2022-11-16T15:34:25Z" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: "" 1 name: worker #... 1 The label appears under Labels. Tip If the label is not present, add a key/value pair such as: Procedure Create a custom resource (CR) for your configuration change. Sample configuration for a resource allocation CR apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-allocatable 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 2 kubeletConfig: systemReserved: 3 cpu: 1000m memory: 1Gi #... 1 Assign a name to CR. 2 Specify the label from the machine config pool. 3 Specify the resources to reserve for the node components and system components. Run the following command to create the CR: USD oc create -f <file_name>.yaml 6.12. Allocating specific CPUs for nodes in a cluster When using the static CPU Manager policy , you can reserve specific CPUs for use by specific nodes in your cluster. For example, on a system with 24 CPUs, you could reserve CPUs numbered 0 - 3 for the control plane allowing the compute nodes to use CPUs 4 - 23. 6.12.1. Reserving CPUs for nodes To explicitly define a list of CPUs that are reserved for specific nodes, create a KubeletConfig custom resource (CR) to define the reservedSystemCPUs parameter. This list supersedes the CPUs that might be reserved using the systemReserved and kubeReserved parameters. Procedure Obtain the label associated with the machine config pool (MCP) for the type of node you want to configure: USD oc describe machineconfigpool <name> For example: USD oc describe machineconfigpool worker Example output Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= pools.operator.machineconfiguration.openshift.io/worker= 1 Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool #... 1 Get the MCP label. Create a YAML file for the KubeletConfig CR: apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-reserved-cpus 1 spec: kubeletConfig: reservedSystemCPUs: "0,1,2,3" 2 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 3 #... 1 Specify a name for the CR. 2 Specify the core IDs of the CPUs you want to reserve for the nodes associated with the MCP. 3 Specify the label from the MCP. Create the CR object: USD oc create -f <file_name>.yaml Additional resources For more information on the systemReserved and kubeReserved parameters, see Allocating resources for nodes in an OpenShift Container Platform cluster . 6.13. Enabling TLS security profiles for the kubelet You can use a TLS (Transport Layer Security) security profile to define which TLS ciphers are required by the kubelet when it is acting as an HTTP server. The kubelet uses its HTTP/GRPC server to communicate with the Kubernetes API server, which sends commands to pods, gathers logs, and run exec commands on pods through the kubelet. 
A TLS security profile defines the TLS ciphers that the Kubernetes API server must use when connecting with the kubelet to protect communication between the kubelet and the Kubernetes API server. Note By default, when the kubelet acts as a client with the Kubernetes API server, it automatically negotiates the TLS parameters with the API server. 6.13.1. Understanding TLS security profiles You can use a TLS (Transport Layer Security) security profile to define which TLS ciphers are required by various OpenShift Container Platform components. The OpenShift Container Platform TLS security profiles are based on Mozilla recommended configurations . You can specify one of the following TLS security profiles for each component: Table 6.4. TLS security profiles Profile Description Old This profile is intended for use with legacy clients or libraries. The profile is based on the Old backward compatibility recommended configuration. The Old profile requires a minimum TLS version of 1.0. Note For the Ingress Controller, the minimum TLS version is converted from 1.0 to 1.1. Intermediate This profile is the recommended configuration for the majority of clients. It is the default TLS security profile for the Ingress Controller, kubelet, and control plane. The profile is based on the Intermediate compatibility recommended configuration. The Intermediate profile requires a minimum TLS version of 1.2. Modern This profile is intended for use with modern clients that have no need for backwards compatibility. This profile is based on the Modern compatibility recommended configuration. The Modern profile requires a minimum TLS version of 1.3. Custom This profile allows you to define the TLS version and ciphers to use. Warning Use caution when using a Custom profile, because invalid configurations can cause problems. Note When using one of the predefined profile types, the effective profile configuration is subject to change between releases. For example, given a specification to use the Intermediate profile deployed on release X.Y.Z, an upgrade to release X.Y.Z+1 might cause a new profile configuration to be applied, resulting in a rollout. 6.13.2. Configuring the TLS security profile for the kubelet To configure a TLS security profile for the kubelet when it is acting as an HTTP server, create a KubeletConfig custom resource (CR) to specify a predefined or custom TLS security profile for specific nodes. If a TLS security profile is not configured, the default TLS security profile is Intermediate . Sample KubeletConfig CR that configures the Old TLS security profile on worker nodes apiVersion: config.openshift.io/v1 kind: KubeletConfig ... spec: tlsSecurityProfile: old: {} type: Old machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" #... You can see the ciphers and the minimum TLS version of the configured TLS security profile in the kubelet.conf file on a configured node. Prerequisites You have access to the cluster as a user with the cluster-admin role. 
Procedure Create a KubeletConfig CR to configure the TLS security profile: Sample KubeletConfig CR for a Custom profile apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-kubelet-tls-security-profile spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 4 #... 1 Specify the TLS security profile type ( Old , Intermediate , or Custom ). The default is Intermediate . 2 Specify the appropriate field for the selected type: old: {} intermediate: {} custom: 3 For the custom type, specify a list of TLS ciphers and minimum accepted TLS version. 4 Optional: Specify the machine config pool label for the nodes you want to apply the TLS security profile. Create the KubeletConfig object: USD oc create -f <filename> Depending on the number of worker nodes in the cluster, wait for the configured nodes to be rebooted one by one. Verification To verify that the profile is set, perform the following steps after the nodes are in the Ready state: Start a debug session for a configured node: USD oc debug node/<node_name> Set /host as the root directory within the debug shell: sh-4.4# chroot /host View the kubelet.conf file: sh-4.4# cat /etc/kubernetes/kubelet.conf Example output "kind": "KubeletConfiguration", "apiVersion": "kubelet.config.k8s.io/v1beta1", #... "tlsCipherSuites": [ "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256", "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256" ], "tlsMinVersion": "VersionTLS12", #... 6.14. Machine Config Daemon metrics The Machine Config Daemon is a part of the Machine Config Operator. It runs on every node in the cluster. The Machine Config Daemon manages configuration changes and updates on each of the nodes. 6.14.1. Machine Config Daemon metrics Beginning with OpenShift Container Platform 4.3, the Machine Config Daemon provides a set of metrics. These metrics can be accessed using the Prometheus Cluster Monitoring stack. The following table describes this set of metrics. Note Metrics marked with * in the *Name* and Description columns represent serious errors that might cause performance problems. Such problems might prevent updates and upgrades from proceeding. Note While some entries contain commands for getting specific logs, the most comprehensive set of logs is available using the oc adm must-gather command. Table 6.5. MCO metrics Name Format Description Notes mcd_host_os_and_version []string{"os", "version"} Shows the OS that MCD is running on, such as RHCOS or RHEL. In case of RHCOS, the version is provided. mcd_drain_err* {"drain_time", "err"} Logs errors received during failed drain. * While drains might need multiple tries to succeed, terminal failed drains prevent updates from proceeding. The drain_time metric, which shows how much time the drain took, might help with troubleshooting. For further investigation, see the logs by running: USD oc logs -f -n openshift-machine-config-operator machine-config-daemon-<hash> -c machine-config-daemon mcd_pivot_err* []string{"err", "node", "pivot_target"} Logs errors encountered during pivot. * Pivot errors might prevent OS upgrades from proceeding. 
For further investigation, run this command to access the node and see all its logs: USD oc debug node/<node> -- chroot /host journalctl -u pivot.service Alternatively, you can run this command to see only the logs from the machine-config-daemon container: USD oc logs -f -n openshift-machine-config-operator machine-config-daemon-<hash> -c machine-config-daemon mcd_state []string{"state", "reason"} State of Machine Config Daemon for the indicated node. Possible states are "Done", "Working", and "Degraded". In case of "Degraded", the reason is included. For further investigation, see the logs by running: USD oc logs -f -n openshift-machine-config-operator machine-config-daemon-<hash> -c machine-config-daemon mcd_kubelet_state* []string{"err"} Logs kubelet health failures. * This is expected to be empty, with a failure count of 0. If the failure count exceeds 2, the error indicates that the threshold has been exceeded. This indicates a possible issue with the health of the kubelet. For further investigation, run this command to access the node and see all its logs: USD oc debug node/<node> -- chroot /host journalctl -u kubelet mcd_reboot_err* []string{"message", "err", "node"} Logs the failed reboots and the corresponding errors. * This is expected to be empty, which indicates a successful reboot. For further investigation, see the logs by running: USD oc logs -f -n openshift-machine-config-operator machine-config-daemon-<hash> -c machine-config-daemon mcd_update_state []string{"config", "err"} Logs success or failure of configuration updates and the corresponding errors. The expected value is rendered-master/rendered-worker-XXXX . If the update fails, an error is present. For further investigation, see the logs by running: USD oc logs -f -n openshift-machine-config-operator machine-config-daemon-<hash> -c machine-config-daemon Additional resources Monitoring overview Gathering data about your cluster 6.15. Creating infrastructure nodes Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' You can use infrastructure machine sets to create machines that host only infrastructure components, such as the default router, the integrated container image registry, and the components for cluster metrics and monitoring. These infrastructure machines are not counted toward the total number of subscriptions that are required to run the environment. In a production deployment, it is recommended that you deploy at least three machine sets to hold infrastructure components. Both OpenShift Logging and Red Hat OpenShift Service Mesh deploy Elasticsearch, which requires three instances to be installed on different nodes. Each of these nodes can be deployed to different availability zones for high availability. This configuration requires three different machine sets, one for each availability zone. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability.
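The exact machine set definition depends on the platform, but a simplified sketch of an infrastructure machine set might look like the following. The name, selector labels, and replica count are hypothetical placeholders, and the platform-specific providerSpec section is omitted entirely, so treat this as a structural outline rather than a manifest you can create as-is:

    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    metadata:
      name: infra-us-east-1a                  # hypothetical machine set name
      namespace: openshift-machine-api
    spec:
      replicas: 1
      selector:
        matchLabels:
          machine.openshift.io/cluster-api-machineset: infra-us-east-1a
      template:
        metadata:
          labels:
            machine.openshift.io/cluster-api-machineset: infra-us-east-1a
        spec:
          metadata:
            labels:
              node-role.kubernetes.io/infra: ""   # nodes from this machine set receive the infra role
          taints:
          - key: node-role.kubernetes.io/infra    # keeps ordinary workloads off the infra nodes
            effect: NoSchedule
          providerSpec: {}                        # platform-specific machine configuration, omitted in this sketch

The NoSchedule taint shown here is the taint discussed in the note that follows.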
Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete the misscheduled DNS pods or add a toleration to them. 6.15.1. OpenShift Container Platform infrastructure components The following infrastructure workloads do not incur OpenShift Container Platform worker subscriptions: Kubernetes and OpenShift Container Platform control plane services that run on masters The default router The integrated container image registry The HAProxy-based Ingress Controller The cluster metrics collection, or monitoring service, including components for monitoring user-defined projects Cluster aggregated logging Service brokers Red Hat Quay Red Hat OpenShift Data Foundation Red Hat Advanced Cluster Manager Red Hat Advanced Cluster Security for Kubernetes Red Hat OpenShift GitOps Red Hat OpenShift Pipelines Any node that runs any other container, pod, or component is a worker node that your subscription must cover. For information about infrastructure nodes and which components can run on infrastructure nodes, see the "Red Hat OpenShift control plane and infrastructure nodes" section in the OpenShift sizing and subscription guide for enterprise Kubernetes document. To create an infrastructure node, you can use a machine set , label the node , or use a machine config pool . 6.15.1.1. Creating an infrastructure node Important See Creating infrastructure machine sets for installer-provisioned infrastructure environments or for any cluster where the control plane nodes are managed by the machine API. Requirements of the cluster dictate that infrastructure nodes, also called infra nodes, be provisioned. The installer provisions only control plane and worker nodes. Through labeling, worker nodes can be designated as infrastructure (infra) nodes or application (app) nodes. Procedure Add a label to the worker nodes that you want to act as application nodes: USD oc label node <node-name> node-role.kubernetes.io/app="" Add a label to the worker nodes that you want to act as infrastructure nodes: USD oc label node <node-name> node-role.kubernetes.io/infra="" Check that the applicable nodes now have the infra and app roles: USD oc get nodes Create a default cluster-wide node selector. The default node selector is applied to pods created in all namespaces. This creates an intersection with any existing node selectors on a pod, which additionally constrains the pod's selector. Important If the default node selector key conflicts with the key of a pod's label, then the default node selector is not applied. However, do not set a default node selector that might cause a pod to become unschedulable. For example, setting the default node selector to a specific node role, such as node-role.kubernetes.io/infra="" , when a pod's label is set to a different node role, such as node-role.kubernetes.io/master="" , can cause the pod to become unschedulable. For this reason, use caution when setting the default node selector to specific node roles. You can alternatively use a project node selector to avoid cluster-wide node selector key conflicts. Edit the Scheduler object: USD oc edit scheduler cluster Add the defaultNodeSelector field with the appropriate node selector: apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: topology.kubernetes.io/region=us-east-1 1 # ... 1 This example node selector deploys pods on nodes in the us-east-1 region by default.
Save the file to apply the changes. You can now move infrastructure resources to the newly labeled infra nodes. Additional resources Moving resources to infrastructure machine sets
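As a simple illustration of what moving a workload onto the newly labeled infra nodes involves, a Deployment (or any other pod template) can be pinned to those nodes with a node selector, plus a toleration if the infra nodes carry the NoSchedule taint mentioned earlier. All names and the image in this sketch are hypothetical:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: example-infra-workload              # hypothetical workload name
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: example-infra-workload
      template:
        metadata:
          labels:
            app: example-infra-workload
        spec:
          nodeSelector:
            node-role.kubernetes.io/infra: ""   # schedule only onto nodes labeled with the infra role
          tolerations:
          - key: node-role.kubernetes.io/infra  # tolerate the infra NoSchedule taint, if one is set
            operator: Exists
            effect: NoSchedule
          containers:
          - name: example
            image: registry.example.com/example:latest   # hypothetical image

Operator-managed components such as the default router or the integrated image registry are moved by setting an equivalent node selector in the corresponding operator configuration, as covered in the "Moving resources to infrastructure machine sets" documentation referenced above.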
[ "oc get nodes", "oc get nodes", "NAME STATUS ROLES AGE VERSION master.example.com Ready master 7h v1.23.0 node1.example.com Ready worker 7h v1.23.0 node2.example.com Ready worker 7h v1.23.0", "oc get nodes", "NAME STATUS ROLES AGE VERSION master.example.com Ready master 7h v1.23.0 node1.example.com NotReady,SchedulingDisabled worker 7h v1.23.0 node2.example.com Ready worker 7h v1.23.0", "oc get nodes -o wide", "NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME master.example.com Ready master 171m v1.23.0 10.0.129.108 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.23.0-30.rhaos4.10.gitf2f339d.el8-dev node1.example.com Ready worker 72m v1.23.0 10.0.129.222 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.23.0-30.rhaos4.10.gitf2f339d.el8-dev node2.example.com Ready worker 164m v1.23.0 10.0.142.150 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.23.0-30.rhaos4.10.gitf2f339d.el8-dev", "oc get node <node>", "oc get node node1.example.com", "NAME STATUS ROLES AGE VERSION node1.example.com Ready worker 7h v1.23.0", "oc describe node <node>", "oc describe node node1.example.com", "Name: node1.example.com 1 Roles: worker 2 Labels: beta.kubernetes.io/arch=amd64 3 beta.kubernetes.io/instance-type=m4.large beta.kubernetes.io/os=linux failure-domain.beta.kubernetes.io/region=us-east-2 failure-domain.beta.kubernetes.io/zone=us-east-2a kubernetes.io/hostname=ip-10-0-140-16 node-role.kubernetes.io/worker= Annotations: cluster.k8s.io/machine: openshift-machine-api/ahardin-worker-us-east-2a-q5dzc 4 machineconfiguration.openshift.io/currentConfig: worker-309c228e8b3a92e2235edd544c62fea8 machineconfiguration.openshift.io/desiredConfig: worker-309c228e8b3a92e2235edd544c62fea8 machineconfiguration.openshift.io/state: Done volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Wed, 13 Feb 2019 11:05:57 -0500 Taints: <none> 5 Unschedulable: false Conditions: 6 Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- OutOfDisk False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientDisk kubelet has sufficient disk space available MemoryPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:07:09 -0500 KubeletReady kubelet is posting ready status Addresses: 7 InternalIP: 10.0.140.16 InternalDNS: ip-10-0-140-16.us-east-2.compute.internal Hostname: ip-10-0-140-16.us-east-2.compute.internal Capacity: 8 attachable-volumes-aws-ebs: 39 cpu: 2 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8172516Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7558116Ki pods: 250 System Info: 9 Machine ID: 63787c9534c24fde9a0cde35c13f1f66 System UUID: EC22BF97-A006-4A58-6AF8-0A38DEEA122A Boot ID: f24ad37d-2594-46b4-8830-7f7555918325 Kernel Version: 3.10.0-957.5.1.el7.x86_64 OS Image: Red Hat Enterprise Linux 
CoreOS 410.8.20190520.0 (Ootpa) Operating System: linux Architecture: amd64 Container Runtime Version: cri-o://1.16.0-0.6.dev.rhaos4.3.git9ad059b.el8-rc2 Kubelet Version: v1.23.0 Kube-Proxy Version: v1.23.0 PodCIDR: 10.128.4.0/24 ProviderID: aws:///us-east-2a/i-04e87b31dc6b3e171 Non-terminated Pods: (13 in total) 10 Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits --------- ---- ------------ ---------- --------------- ------------- openshift-cluster-node-tuning-operator tuned-hdl5q 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-dns dns-default-l69zr 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-image-registry node-ca-9hmcg 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-ingress router-default-76455c45c-c5ptv 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-machine-config-operator machine-config-daemon-cvqw9 20m (1%) 0 (0%) 50Mi (0%) 0 (0%) openshift-marketplace community-operators-f67fh 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-monitoring alertmanager-main-0 50m (3%) 50m (3%) 210Mi (2%) 10Mi (0%) openshift-monitoring grafana-78765ddcc7-hnjmm 100m (6%) 200m (13%) 100Mi (1%) 200Mi (2%) openshift-monitoring node-exporter-l7q8d 10m (0%) 20m (1%) 20Mi (0%) 40Mi (0%) openshift-monitoring prometheus-adapter-75d769c874-hvb85 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-multus multus-kw8w5 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-sdn ovs-t4dsn 100m (6%) 0 (0%) 300Mi (4%) 0 (0%) openshift-sdn sdn-g79hg 100m (6%) 0 (0%) 200Mi (2%) 0 (0%) Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 380m (25%) 270m (18%) memory 880Mi (11%) 250Mi (3%) attachable-volumes-aws-ebs 0 0 Events: 11 Type Reason Age From Message ---- ------ ---- ---- ------- Normal NodeHasSufficientPID 6d (x5 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 6d kubelet, m01.example.com Updated Node Allocatable limit across pods Normal NodeHasSufficientMemory 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasNoDiskPressure Normal NodeHasSufficientDisk 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientDisk Normal NodeHasSufficientPID 6d kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientPID Normal Starting 6d kubelet, m01.example.com Starting kubelet. 
#", "oc describe node <node1> <node2>", "oc describe node ip-10-0-128-218.ec2.internal", "oc describe --selector=<node_selector>", "oc describe node --selector=kubernetes.io/os", "oc describe -l=<pod_selector>", "oc describe node -l node-role.kubernetes.io/worker", "oc get pod --all-namespaces --field-selector=spec.nodeName=<nodename>", "oc adm top nodes", "NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% ip-10-0-12-143.ec2.compute.internal 1503m 100% 4533Mi 61% ip-10-0-132-16.ec2.compute.internal 76m 5% 1391Mi 18% ip-10-0-140-137.ec2.compute.internal 398m 26% 2473Mi 33% ip-10-0-142-44.ec2.compute.internal 656m 43% 6119Mi 82% ip-10-0-146-165.ec2.compute.internal 188m 12% 3367Mi 45% ip-10-0-19-62.ec2.compute.internal 896m 59% 5754Mi 77% ip-10-0-44-193.ec2.compute.internal 632m 42% 5349Mi 72%", "oc adm top node --selector=''", "oc adm cordon <node1>", "node/<node1> cordoned", "oc get node <node1>", "NAME STATUS ROLES AGE VERSION <node1> Ready,SchedulingDisabled worker 1d v1.24.0", "oc adm drain <node1> <node2> [--pod-selector=<pod_selector>]", "oc adm drain <node1> <node2> --force=true", "oc adm drain <node1> <node2> --grace-period=-1", "oc adm drain <node1> <node2> --ignore-daemonsets=true", "oc adm drain <node1> <node2> --timeout=5s", "oc adm drain <node1> <node2> --delete-emptydir-data=true", "oc adm drain <node1> <node2> --dry-run=true", "oc adm uncordon <node1>", "oc label node <node> <key_1>=<value_1> ... <key_n>=<value_n>", "oc label nodes webconsole-7f7f6 unhealthy=true", "kind: Node apiVersion: v1 metadata: name: webconsole-7f7f6 labels: unhealthy: 'true' #", "oc label pods --all <key_1>=<value_1>", "oc label pods --all status=unhealthy", "oc adm cordon <node>", "oc adm cordon node1.example.com", "node/node1.example.com cordoned NAME LABELS STATUS node1.example.com kubernetes.io/hostname=node1.example.com Ready,SchedulingDisabled", "oc adm uncordon <node1>", "oc get machinesets -n openshift-machine-api", "oc scale --replicas=2 machineset <machineset> -n openshift-machine-api", "oc edit machineset <machineset> -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 2 #", "oc adm cordon <node_name>", "oc adm drain <node_name> --force=true", "oc delete node <node_name>", "oc get machineconfigpool --show-labels", "NAME CONFIG UPDATED UPDATING DEGRADED LABELS master rendered-master-e05b81f5ca4db1d249a1bf32f9ec24fd True False False operator.machineconfiguration.openshift.io/required-for-upgrade= worker rendered-worker-f50e78e1bc06d8e82327763145bfcf62 True False False", "oc label machineconfigpool worker custom-kubelet=enabled", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: custom-config 1 spec: machineConfigPoolSelector: matchLabels: custom-kubelet: enabled 2 kubeletConfig: 3 podsPerCore: 10 maxPods: 250 systemReserved: cpu: 2000m memory: 1Gi #", "oc create -f <file-name>", "oc create -f master-kube-config.yaml", "oc edit schedulers.config.openshift.io cluster", "apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: \"2019-09-10T03:04:05Z\" generation: 1 name: cluster resourceVersion: \"433\" selfLink: /apis/config.openshift.io/v1/schedulers/cluster uid: a636d30a-d377-11e9-88d4-0a60097bee62 spec: mastersSchedulable: false 1 status: {} #", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-worker-setsebool spec: config: ignition: 
version: 3.2.0 systemd: units: - contents: | [Unit] Description=Set SELinux booleans Before=kubelet.service [Service] Type=oneshot ExecStart=/sbin/setsebool container_manage_cgroup=on RemainAfterExit=true [Install] WantedBy=multi-user.target graphical.target enabled: true name: setsebool.service #", "oc create -f 99-worker-setsebool.yaml", "oc get MachineConfig", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 05-worker-kernelarg-selinuxpermissive 2 spec: kernelArguments: - enforcing=0 3", "oc create -f 05-worker-kernelarg-selinuxpermissive.yaml", "oc get MachineConfig", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 05-worker-kernelarg-selinuxpermissive 3.2.0 105s 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m", "oc get nodes", "NAME STATUS ROLES AGE VERSION ip-10-0-136-161.ec2.internal Ready worker 28m v1.23.0 ip-10-0-136-243.ec2.internal Ready master 34m v1.23.0 ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.23.0 ip-10-0-142-249.ec2.internal Ready master 34m v1.23.0 ip-10-0-153-11.ec2.internal Ready worker 28m v1.23.0 ip-10-0-153-150.ec2.internal Ready master 34m v1.23.0", "oc debug node/ip-10-0-141-105.ec2.internal", "Starting pod/ip-10-0-141-105ec2internal-debug To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline BOOT_IMAGE=/ostree/rhcos-... console=tty0 console=ttyS0,115200n8 rootflags=defaults,prjquota rw root=UUID=fd0... 
ostree=/ostree/boot.0/rhcos/16 coreos.oem.id=qemu coreos.oem.id=ec2 ignition.platform.id=ec2 enforcing=0 sh-4.2# exit", "oc label machineconfigpool worker kubelet-swap=enabled", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: swap-config spec: machineConfigPoolSelector: matchLabels: kubelet-swap: enabled kubeletConfig: failSwapOn: false 1 memorySwap: swapBehavior: LimitedSwap 2 #", "#!/usr/bin/env bash set -Eeuo pipefail if [ USD# -lt 1 ]; then echo \"Usage: 'USD0 node_name'\" exit 64 fi Check for admin OpenStack credentials openstack server list --all-projects >/dev/null || { >&2 echo \"The script needs OpenStack admin credentials. Exiting\"; exit 77; } Check for admin OpenShift credentials adm top node >/dev/null || { >&2 echo \"The script needs OpenShift admin credentials. Exiting\"; exit 77; } set -x declare -r node_name=\"USD1\" declare server_id server_id=\"USD(openstack server list --all-projects -f value -c ID -c Name | grep \"USDnode_name\" | cut -d' ' -f1)\" readonly server_id Drain the node adm cordon \"USDnode_name\" adm drain \"USDnode_name\" --delete-emptydir-data --ignore-daemonsets --force Power off the server debug \"node/USD{node_name}\" -- chroot /host shutdown -h 1 Verify the server is shut off until openstack server show \"USDserver_id\" -f value -c status | grep -q 'SHUTOFF'; do sleep 5; done Migrate the node openstack server migrate --wait \"USDserver_id\" Resize the VM openstack server resize confirm \"USDserver_id\" Wait for the resize confirm to finish until openstack server show \"USDserver_id\" -f value -c status | grep -q 'SHUTOFF'; do sleep 5; done Restart the VM openstack server start \"USDserver_id\" Wait for the node to show up as Ready: until oc get node \"USDnode_name\" | grep -q \"^USD{node_name}[[:space:]]\\+Ready\"; do sleep 5; done Uncordon the node adm uncordon \"USDnode_name\" Wait for cluster operators to stabilize until oc get co -o go-template='statuses: {{ range .items }}{{ range .status.conditions }}{{ if eq .type \"Degraded\" }}{{ if ne .status \"False\" }}DEGRADED{{ end }}{{ else if eq .type \"Progressing\"}}{{ if ne .status \"False\" }}PROGRESSING{{ end }}{{ else if eq .type \"Available\"}}{{ if ne .status \"True\" }}NOTAVAILABLE{{ end }}{{ end }}{{ end }}{{ end }}' | grep -qv '\\(DEGRADED\\|PROGRESSING\\|NOTAVAILABLE\\)'; do sleep 5; done", "oc edit machineconfigpool <name>", "oc edit machineconfigpool worker", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker #", "oc label machineconfigpool worker custom-kubelet=small-pods", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: podsPerCore: 10 3 maxPods: 250 4 #", "oc create -f <file_name>.yaml", "oc get machineconfigpools", "NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False False False worker worker-8cecd1236b33ee3f8a5e False True False", "oc get machineconfigpools", "NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False True False worker worker-8cecd1236b33ee3f8a5e True False False", "oc get Tuned/default -o yaml -n openshift-cluster-node-tuning-operator", "profile: - name: tuned_profile_1 data: | # TuneD profile specification [main] summary=Description 
of tuned_profile_1 profile [sysctl] net.ipv4.ip_forward=1 # ... other sysctl's or other TuneD daemon plugins supported by the containerized TuneD - name: tuned_profile_n data: | # TuneD profile specification [main] summary=Description of tuned_profile_n profile # tuned_profile_n profile settings", "recommend: <recommend-item-1> <recommend-item-n>", "- machineConfigLabels: 1 <mcLabels> 2 match: 3 <match> 4 priority: <priority> 5 profile: <tuned_profile_name> 6 operand: 7 debug: <bool> 8", "- label: <label_name> 1 value: <label_value> 2 type: <label_type> 3 <match> 4", "- match: - label: tuned.openshift.io/elasticsearch match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra type: pod priority: 10 profile: openshift-control-plane-es - match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra priority: 20 profile: openshift-control-plane - priority: 30 profile: openshift-node", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-node-custom namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift node profile with an additional kernel parameter include=openshift-node [bootloader] cmdline_openshift_node_custom=+skew_tick=1 name: openshift-node-custom recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: \"worker-custom\" priority: 20 profile: openshift-node-custom", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: default namespace: openshift-cluster-node-tuning-operator spec: recommend: - profile: \"openshift-control-plane\" priority: 30 match: - label: \"node-role.kubernetes.io/master\" - label: \"node-role.kubernetes.io/infra\" - profile: \"openshift-node\" priority: 40", "oc exec USDtuned_pod -n openshift-cluster-node-tuning-operator -- find /usr/lib/tuned/openshift{,-control-plane,-node} -name tuned.conf -exec grep -H ^ {} \\;", "apiVersion: poison-pill.medik8s.io/v1alpha1 kind: PoisonPillConfig metadata: name: poison-pill-config namespace: openshift-operators spec: safeTimeToAssumeNodeRebootedSeconds: 180 1 watchdogFilePath: /test/watchdog1 2 isSoftwareRebootEnabled: true 3 apiServerTimeout: 15s 4 apiCheckInterval: 5s 5 maxApiErrorThreshold: 3 6 peerApiServerTimeout: 5s 7 peerDialTimeout: 5s 8 peerRequestTimeout: 5s 9 peerUpdateInterval: 15m 10", "apiVersion: poison-pill.medik8s.io/v1alpha1 kind: PoisonPillRemediationTemplate metadata: creationTimestamp: \"2022-03-02T08:02:40Z\" generation: 1 name: poison-pill-default-template namespace: openshift-operators resourceVersion: \"596469\" uid: 5d29e437-c485-48fa-ba9e-0354649afd31 spec: template: spec: remediationStrategy: NodeDeletion 1", "apiVersion: v1 kind: Namespace metadata: name: poison-pill", "oc create -f poison-pill-namespace.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: poison-pill-manager namespace: poison-pill", "oc create -f poison-pill-operator-group.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: poison-pill-manager namespace: poison-pill 1 spec: channel: stable installPlanApproval: Manual 2 name: poison-pill-manager source: redhat-operators sourceNamespace: openshift-marketplace package: poison-pill-manager", "oc create -f poison-pill-subscription.yaml", "oc get csv -n poison-pill", "NAME DISPLAY VERSION REPLACES PHASE poison-pill.v.0.2.0 Poison Pill Operator 0.2.0 Succeeded", "oc get deploy -n poison-pill", "NAME READY UP-TO-DATE AVAILABLE AGE poison-pill-controller-manager 1/1 1 1 10d", "oc get 
PoisonPillConfig -n poison-pill", "NAME AGE poison-pill-config 10d", "oc get daemonset -n poison-pill", "NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE poison-pill-ds 2 2 2 2 2 <none> 10d", "apiVersion: poison-pill.medik8s.io/v1alpha1 kind: PoisonPillRemediationTemplate metadata: namespace: openshift-machine-api name: poisonpillremediationtemplate-sample spec: template: spec: {}", "oc create -f <ppr-name>.yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: machine-health-check namespace: openshift-machine-api spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: \"worker\" machine.openshift.io/cluster-api-machine-type: \"worker\" unhealthyConditions: - type: \"Ready\" timeout: \"300s\" status: \"False\" - type: \"Ready\" timeout: \"300s\" status: \"Unknown\" maxUnhealthy: \"40%\" nodeStartupTimeout: \"10m\" remediationTemplate: 1 kind: PoisonPillRemediationTemplate apiVersion: poison-pill.medik8s.io/v1alpha1 name: poisonpillremediationtemplate-sample", "oc create -f <file-name>.yaml", "oc apply -f <file-name>.yaml", "oc get ppr -A", "oc delete ds <poison-pill-ds> -n <namespace>", "oc delete ppc <poison-pill-config> -n <namespace>", "oc delete pprt <poison-pill-remediation-template> -n <namespace>", "apiVersion: remediation.medik8s.io/v1alpha1 kind: NodeHealthCheck metadata: name: nodehealthcheck-sample spec: minHealthy: 51% 1 pauseRequests: 2 - <pause-test-cluster> remediationTemplate: 3 apiVersion: poison-pill.medik8s.io/v1alpha1 name: group-x namespace: openshift-operators kind: PoisonPillRemediationTemplate selector: 4 matchExpressions: - key: node-role.kubernetes.io/worker operator: Exists unhealthyConditions: 5 - type: Ready status: \"False\" duration: 300s 6 - type: Ready status: Unknown duration: 300s 7", "INFO MHCChecker ignoring unhealthy Node, it is terminating and will be handled by MHC {\"NodeName\": \"node-1.example.com\"}", "INFO controllers.NodeHealthCheck disabling NHC in order to avoid conflict with custom MHCs configured in the cluster {\"NodeHealthCheck\": \"/nhc-worker-default\"}", "INFO controllers.NodeHealthCheck re-enabling NHC, no conflicting MHC configured in the cluster {\"NodeHealthCheck\": \"/nhc-worker-default\"}", "apiVersion: v1 kind: Namespace metadata: name: node-health-check", "oc create -f node-health-check-namespace.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: node-health-check-operator namespace: node-health-check", "oc create -f node-health-check-operator-group.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: node-health-check-operator namespace: node-health-check 1 spec: channel: candidate 2 installPlanApproval: Manual 3 name: node-healthcheck-operator source: redhat-operators sourceNamespace: openshift-marketplace package: node-healthcheck-operator", "oc create -f node-health-check-subscription.yaml", "oc get csv -n openshift-operators", "NAME DISPLAY VERSION REPLACES PHASE node-healthcheck-operator.v0.2.0. 
Node Health Check Operator 0.2.0 Succeeded", "oc get deploy -n openshift-operators", "NAME READY UP-TO-DATE AVAILABLE AGE node-health-check-operator-controller-manager 1/1 1 1 10d", "apiVersion: v1 kind: Namespace metadata: name: nmo-test", "oc create -f node-maintenance-namespace.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: node-maintenance-operator namespace: nmo-test", "oc create -f node-maintenance-operator-group.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: node-maintenance-operator namespace: nmo-test 1 spec: channel: stable InstallPlaneApproval: Automatic name: node-maintenance-operator source: redhat-operators sourceNamespace: openshift-marketplace StartingCSV: node-maintenance-operator.v4.10.0", "oc create -f node-maintenance-subscription.yaml", "oc get csv -n openshift-operators", "NAME DISPLAY VERSION REPLACES PHASE node-maintenance-operator.v4.10 Node Maintenance Operator 4.10 Succeeded", "oc get deploy -n openshift-operators", "NAME READY UP-TO-DATE AVAILABLE AGE node-maintenance-operator-controller-manager 1/1 1 1 10d", "apiVersion: nodemaintenance.medik8s.io/v1beta1 kind: NodeMaintenance metadata: name: nodemaintenance-cr 1 spec: nodeName: node-1.example.com 2 reason: \"NIC replacement\" 3", "oc apply -f nodemaintenance-cr.yaml", "oc describe node node-1.example.com", "Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal NodeNotSchedulable 61m kubelet Node node-1.example.com status is now: NodeNotSchedulable", "oc get nm -o yaml", "apiVersion: v1 items: - apiVersion: nodemaintenance.medik8s.io/v1beta1 kind: NodeMaintenance metadata: spec: nodeName: node-1.example.com reason: Node maintenance status: evictionPods: 3 1 lastError: \"Last failure message\" 2 phase: Succeeded totalpods: 5 3", "oc delete -f nodemaintenance-cr.yaml", "nodemaintenance.nodemaintenance.medik8s.io \"maintenance-example\" deleted", "apiVersion: v1 kind: Pod metadata: name: with-pod-antiaffinity spec: affinity: podAntiAffinity: 1 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 3 podAffinityTerm: labelSelector: matchExpressions: - key: registry 4 operator: In 5 values: - default topologyKey: kubernetes.io/hostname #", "oc adm cordon <node1>", "oc adm drain <node1> --ignore-daemonsets --delete-emptydir-data --force", "error when evicting pods/\"rails-postgresql-example-1-72v2w\" -n \"rails\" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.", "oc adm drain <node1> --ignore-daemonsets --delete-emptydir-data --force --disable-eviction", "oc debug node/<node1>", "chroot /host", "systemctl reboot", "ssh core@<master-node>.<cluster_name>.<base_domain>", "sudo systemctl reboot", "oc adm uncordon <node1>", "ssh core@<target_node>", "sudo oc adm uncordon <node> --kubeconfig /etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost.kubeconfig", "oc get node <node1>", "NAME STATUS ROLES AGE VERSION <node1> Ready worker 6d22h v1.18.3+b0068a8", "oc edit machineconfigpool <name>", "oc edit machineconfigpool worker", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker #", "oc label machineconfigpool worker custom-kubelet=small-pods", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: worker-kubeconfig 1 spec: 
machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: evictionSoft: 3 memory.available: \"500Mi\" 4 nodefs.available: \"10%\" nodefs.inodesFree: \"5%\" imagefs.available: \"15%\" imagefs.inodesFree: \"10%\" evictionSoftGracePeriod: 5 memory.available: \"1m30s\" nodefs.available: \"1m30s\" nodefs.inodesFree: \"1m30s\" imagefs.available: \"1m30s\" imagefs.inodesFree: \"1m30s\" evictionHard: 6 memory.available: \"200Mi\" nodefs.available: \"5%\" nodefs.inodesFree: \"4%\" imagefs.available: \"10%\" imagefs.inodesFree: \"5%\" evictionPressureTransitionPeriod: 0s 7 imageMinimumGCAge: 5m 8 imageGCHighThresholdPercent: 80 9 imageGCLowThresholdPercent: 75 10 #", "oc create -f <file_name>.yaml", "oc create -f gc-container.yaml", "kubeletconfig.machineconfiguration.openshift.io/gc-container created", "oc get machineconfigpool", "NAME CONFIG UPDATED UPDATING master rendered-master-546383f80705bd5aeaba93 True False worker rendered-worker-b4c51bb33ccaae6fc4a6a5 False True", "[Allocatable] = [Node Capacity] - [system-reserved] - [Hard-Eviction-Thresholds]", "oc edit machineconfigpool <name>", "oc edit machineconfigpool worker", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker #", "oc label machineconfigpool worker custom-kubelet=small-pods", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: dynamic-node 1 spec: autoSizingReserved: true 2 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 3 #", "oc create -f <file_name>.yaml", "oc debug node/<node_name>", "chroot /host", "SYSTEM_RESERVED_MEMORY=3Gi SYSTEM_RESERVED_CPU=0.08", "oc edit machineconfigpool <name>", "oc edit machineconfigpool worker", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker #", "oc label machineconfigpool worker custom-kubelet=small-pods", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-allocatable 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: systemReserved: 3 cpu: 1000m memory: 1Gi #", "oc create -f <file_name>.yaml", "oc describe machineconfigpool <name>", "oc describe machineconfigpool worker", "Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= pools.operator.machineconfiguration.openshift.io/worker= 1 Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool #", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-reserved-cpus 1 spec: kubeletConfig: reservedSystemCPUs: \"0,1,2,3\" 2 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 3 #", "oc create -f <file_name>.yaml", "apiVersion: config.openshift.io/v1 kind: KubeletConfig spec: tlsSecurityProfile: old: {} type: Old machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" #", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-kubelet-tls-security-profile spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - 
ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 4 #", "oc create -f <filename>", "oc debug node/<node_name>", "sh-4.4# chroot /host", "sh-4.4# cat /etc/kubernetes/kubelet.conf", "\"kind\": \"KubeletConfiguration\", \"apiVersion\": \"kubelet.config.k8s.io/v1beta1\", # \"tlsCipherSuites\": [ \"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\", \"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256\", \"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256\", \"TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256\" ], \"tlsMinVersion\": \"VersionTLS12\", #", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "oc label node <node-name> node-role.kubernetes.io/app=\"\"", "oc label node <node-name> node-role.kubernetes.io/infra=\"\"", "oc get nodes", "oc edit scheduler cluster", "apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: topology.kubernetes.io/region=us-east-1 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/nodes/working-with-nodes
Chapter 9. Network File System (NFS)
Chapter 9. Network File System (NFS) A Network File System ( NFS ) allows remote hosts to mount file systems over a network and interact with those file systems as though they are mounted locally. This enables system administrators to consolidate resources onto centralized servers on the network. This chapter focuses on fundamental NFS concepts and supplemental information. For specific instructions regarding the configuration and operation of NFS server and client software, refer to the chapter titled Network File System (NFS) in the System Administrators Guide . 9.1. How It Works Currently, there are three versions of NFS. NFS version 2 (NFSv2) is older and is widely supported. NFS version 3 (NFSv3) has more features, including variable size file handling and better error reporting, but is not fully compatible with NFSv2 clients. NFS version 4 (NFSv4) works through firewalls and on the Internet, no longer requires portmapper, supports ACLs, and utilizes stateful operations. Red Hat Enterprise Linux supports NFSv2, NFSv3, and NFSv4 clients, and when mounting a file system via NFS, Red Hat Enterprise Linux uses NFSv3 by default, if the server supports it. All versions of NFS can use Transmission Control Protocol ( TCP ) running over an IP network, with NFSv4 requiring it. NFSv2 and NFSv3 can use the User Datagram Protocol ( UDP ) running over an IP network to provide a stateless network connection between the client and server. When using NFSv2 or NFSv3 with UDP, the stateless UDP connection under normal conditions minimizes network traffic, as the NFS server sends the client a cookie after the client is authorized to access the shared volume. This cookie is a random value stored on the server's side and is passed along with RPC requests from the client. The NFS server can be restarted without affecting the clients and the cookie remains intact. However, because UDP is stateless, if the server goes down unexpectedly, UDP clients continue to saturate the network with requests for the server. For this reason, TCP is the preferred protocol when connecting to an NFS server. NFSv4 has no interaction with portmapper, rpc.mountd , rpc.lockd , and rpc.statd , since they have been rolled into the kernel. NFSv4 listens on the well known TCP port 2049. Note TCP is the default transport protocol for NFS under Red Hat Enterprise Linux. Refer to the chapter titled Network File System (NFS) in the System Administrators Guide for more information about connecting to NFS servers using TCP. UDP can be used for compatibility purposes as needed, but is not recommended for wide usage. The only time NFS performs authentication is when a client system attempts to mount the shared NFS resource. To limit access to the NFS service, TCP wrappers are used. TCP wrappers read the /etc/hosts.allow and /etc/hosts.deny files to determine if a particular client or network is permitted or denied access to the NFS service. For more information on configuring access controls with TCP wrappers, refer to Chapter 17, TCP Wrappers and xinetd . After the client is granted access by TCP wrappers, the NFS server refers to its configuration file, /etc/exports , to determine whether the client is allowed to access any of the exported file systems. Once access is granted, all file and directory operations are available to the user. Important In order for NFS to work with a default installation of Red Hat Enterprise Linux with a firewall enabled, IPTables with the default TCP port 2049 must be configured. 
Without an IPTables configuration, NFS does not function properly. The NFS initialization script and rpc.nfsd process now allow binding to any specified port during system start up. However, this can be error prone if the port is unavailable or conflicts with another daemon. 9.1.1. Required Services Red Hat Enterprise Linux uses a combination of kernel-level support and daemon processes to provide NFS file sharing. NFSv2 and NFSv3 rely on Remote Procedure Calls ( RPC ) to encode and decode requests between clients and servers. RPC services under Linux are controlled by the portmap service. To share or mount NFS file systems, the following services work together, depending on which version of NFS is implemented: nfs - Starts the appropriate RPC processes to service requests for shared NFS file systems. nfslock - An optional service that starts the appropriate RPC processes to allow NFS clients to lock files on the server. portmap - The RPC service for Linux; it responds to requests for RPC services and sets up connections to the requested RPC service. This is not used with NFSv4. The following RPC processes facilitate NFS services: rpc.mountd - This process receives mount requests from NFS clients and verifies the requested file system is currently exported. This process is started automatically by the nfs service and does not require user configuration. This is not used with NFSv4. rpc.nfsd - This process is the NFS server. It works with the Linux kernel to meet the dynamic demands of NFS clients, such as providing server threads each time an NFS client connects. This process corresponds to the nfs service. rpc.lockd - An optional process that allows NFS clients to lock files on the server. This process corresponds to the nfslock service. This is not used with NFSv4. rpc.statd - This process implements the Network Status Monitor (NSM) RPC protocol which notifies NFS clients when an NFS server is restarted without being gracefully brought down. This process is started automatically by the nfslock service and does not require user configuration. This is not used with NFSv4. rpc.rquotad - This process provides user quota information for remote users. This process is started automatically by the nfs service and does not require user configuration. rpc.idmapd - This process provides NFSv4 client and server upcalls which map between on-the-wire NFSv4 names (which are strings in the form of user@domain) and local UIDs and GIDs. For idmapd to function with NFSv4, the /etc/idmapd.conf file must be configured. This service is required for use with NFSv4. rpc.svcgssd - This process is used by the NFS server to perform user authentication and is started only when SECURE_NFS=yes is set in the /etc/sysconfig/nfs file. rpc.gssd - This process is used by the NFS client to perform user authentication and is started only when SECURE_NFS=yes is set in the /etc/sysconfig/nfs file. 9.1.2. NFS and portmap Note The following section only applies to NFSv2 or NFSv3 implementations that require the portmap service for backward compatibility. The portmap service under Linux maps RPC requests to the correct services. RPC processes notify portmap when they start, revealing the port number they are monitoring and the RPC program numbers they expect to serve. The client system then contacts portmap on the server with a particular RPC program number. The portmap service redirects the client to the proper port number so it can communicate with the requested service.
Because RPC-based services rely on portmap to make all connections with incoming client requests, portmap must be available before any of these services start. The portmap service uses TCP wrappers for access control, and access control rules for portmap affect all RPC-based services. Alternatively, it is possible to specify access control rules for each of the NFS RPC daemons. The man pages for rpc.mountd and rpc.statd contain information regarding the precise syntax for these rules. 9.1.2.1. Troubleshooting NFS and portmap Because portmap provides coordination between RPC services and the port numbers used to communicate with them, it is useful to view the status of current RPC services using portmap when troubleshooting. The rpcinfo command shows each RPC-based service with port numbers, an RPC program number, a version number, and an IP protocol type (TCP or UDP). To make sure the proper NFS RPC-based services are enabled for portmap , issue the following command as root: rpcinfo -p The following is sample output from this command: The output from this command reveals that the correct NFS services are running. If one of the NFS services does not start up correctly, portmap is unable to map RPC requests from clients for that service to the correct port. In many cases, if NFS is not present in rpcinfo output, restarting NFS causes the service to correctly register with portmap and begin working. For instructions on starting NFS, refer to Section 9.2, "Starting and Stopping NFS" . Other useful options are available for the rpcinfo command. Refer to the rpcinfo man page for more information.
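As an illustration of the TCP wrappers access control described above, entries along the following lines deny the portmap service to all hosts and then allow it only from a single trusted subnet. The subnet is a hypothetical example, not a value taken from this document:

    /etc/hosts.deny:
        portmap: ALL

    /etc/hosts.allow:
        portmap: 192.168.0.0/255.255.255.0

Because access control rules for portmap affect all RPC-based services, a rule set like this effectively restricts NFSv2 and NFSv3 service to that subnet. Per-daemon rules for rpc.mountd and rpc.statd use the same hosts.allow and hosts.deny files; refer to their man pages for the precise syntax.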
[ "rpcinfo -p", "program vers proto port 100000 2 tcp 111 portmapper 100000 2 udp 111 portmapper 100021 1 udp 32774 nlockmgr 100021 3 udp 32774 nlockmgr 100021 4 udp 32774 nlockmgr 100021 1 tcp 34437 nlockmgr 100021 3 tcp 34437 nlockmgr 100021 4 tcp 34437 nlockmgr 100011 1 udp 819 rquotad 100011 2 udp 819 rquotad 100011 1 tcp 822 rquotad 100011 2 tcp 822 rquotad 100003 2 udp 2049 nfs 100003 3 udp 2049 nfs 100003 2 tcp 2049 nfs 100003 3 tcp 2049 nfs 100005 1 udp 836 mountd 100005 1 tcp 839 mountd 100005 2 udp 836 mountd 100005 2 tcp 839 mountd 100005 3 udp 836 mountd 100005 3 tcp 839 mountd" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/ch-nfs
Chapter 5. TemplateInstance [template.openshift.io/v1]
Chapter 5. TemplateInstance [template.openshift.io/v1] Description TemplateInstance requests and records the instantiation of a Template. TemplateInstance is part of an experimental API. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta_v2 metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object TemplateInstanceSpec describes the desired state of a TemplateInstance. status object TemplateInstanceStatus describes the current state of a TemplateInstance. 5.1.1. .spec Description TemplateInstanceSpec describes the desired state of a TemplateInstance. Type object Required template Property Type Description requester object TemplateInstanceRequester holds the identity of an agent requesting a template instantiation. secret LocalObjectReference_v2 secret is a reference to a Secret object containing the necessary template parameters. template object Template contains the inputs needed to produce a Config. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). 5.1.2. .spec.requester Description TemplateInstanceRequester holds the identity of an agent requesting a template instantiation. Type object Property Type Description extra object extra holds additional information provided by the authenticator. extra{} array (string) groups array (string) groups represent the groups this user is a part of. uid string uid is a unique value that identifies this user across time; if this user is deleted and another user by the same name is added, they will have different UIDs. username string username uniquely identifies this user among all active users. 5.1.3. .spec.requester.extra Description extra holds additional information provided by the authenticator. Type object 5.1.4. .spec.template Description Template contains the inputs needed to produce a Config. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required objects Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds labels object (string) labels is a optional set of labels that are applied to every object during the Template to Config transformation. message string message is an optional instructional message that will be displayed when this template is instantiated. This field should inform the user how to utilize the newly created resources. Parameter substitution will be performed on the message before being displayed so that generated credentials and other parameters can be included in the output. metadata ObjectMeta_v2 metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata objects array (RawExtension) objects is an array of resources to include in this template. If a namespace value is hardcoded in the object, it will be removed during template instantiation, however if the namespace value is, or contains, a USD{PARAMETER_REFERENCE}, the resolved value after parameter substitution will be respected and the object will be created in that namespace. parameters array parameters is an optional array of Parameters used during the Template to Config transformation. parameters[] object Parameter defines a name/value variable that is to be processed during the Template to Config transformation. 5.1.5. .spec.template.parameters Description parameters is an optional array of Parameters used during the Template to Config transformation. Type array 5.1.6. .spec.template.parameters[] Description Parameter defines a name/value variable that is to be processed during the Template to Config transformation. Type object Required name Property Type Description description string Description of a parameter. Optional. displayName string Optional: The name that will show in UI instead of parameter 'Name' from string From is an input value for the generator. Optional. generate string generate specifies the generator to be used to generate random string from an input value specified by From field. The result string is stored into Value field. If empty, no generator is being used, leaving the result Value untouched. Optional. The only supported generator is "expression", which accepts a "from" value in the form of a simple regular expression containing the range expression "[a-zA-Z0-9]", and the length expression "a{length}". Examples: from | value ----------------------------- "test[0-9]{1}x" | "test7x" "[0-1]{8}" | "01001100" "0x[A-F0-9]{4}" | "0xB3AF" "[a-zA-Z0-9]{8}" | "hW4yQU5i" name string Name must be set and it can be referenced in Template Items using USD{PARAMETER_NAME}. Required. required boolean Optional: Indicates the parameter must have a value. Defaults to false. value string Value holds the Parameter data. If specified, the generator will be ignored. The value replaces all occurrences of the Parameter USD{Name} expression during the Template to Config transformation. Optional. 5.1.7. .status Description TemplateInstanceStatus describes the current state of a TemplateInstance. Type object Property Type Description conditions array conditions represent the latest available observations of a TemplateInstance's current state. conditions[] object TemplateInstanceCondition contains condition information for a TemplateInstance. objects array Objects references the objects created by the TemplateInstance. objects[] object TemplateInstanceObject references an object created by a TemplateInstance. 5.1.8. 
.status.conditions Description conditions represent the latest available observations of a TemplateInstance's current state. Type array 5.1.9. .status.conditions[] Description TemplateInstanceCondition contains condition information for a TemplateInstance. Type object Required type status lastTransitionTime reason message Property Type Description lastTransitionTime Time LastTransitionTime is the last time a condition status transitioned from one state to another. message string Message is a human readable description of the details of the last transition, complementing reason. reason string Reason is a brief machine readable explanation for the condition's last transition. status string Status of the condition, one of True, False or Unknown. type string Type of the condition, currently Ready or InstantiateFailure. 5.1.10. .status.objects Description Objects references the objects created by the TemplateInstance. Type array 5.1.11. .status.objects[] Description TemplateInstanceObject references an object created by a TemplateInstance. Type object Property Type Description ref ObjectReference ref is a reference to the created object. When used under .spec, only name and namespace are used; these can contain references to parameters which will be substituted following the usual rules. 5.2. API endpoints The following API endpoints are available: /apis/template.openshift.io/v1/templateinstances GET : list or watch objects of kind TemplateInstance /apis/template.openshift.io/v1/watch/templateinstances GET : watch individual changes to a list of TemplateInstance. deprecated: use the 'watch' parameter with a list operation instead. /apis/template.openshift.io/v1/namespaces/{namespace}/templateinstances DELETE : delete collection of TemplateInstance GET : list or watch objects of kind TemplateInstance POST : create a TemplateInstance /apis/template.openshift.io/v1/watch/namespaces/{namespace}/templateinstances GET : watch individual changes to a list of TemplateInstance. deprecated: use the 'watch' parameter with a list operation instead. /apis/template.openshift.io/v1/namespaces/{namespace}/templateinstances/{name} DELETE : delete a TemplateInstance GET : read the specified TemplateInstance PATCH : partially update the specified TemplateInstance PUT : replace the specified TemplateInstance /apis/template.openshift.io/v1/watch/namespaces/{namespace}/templateinstances/{name} GET : watch changes to an object of kind TemplateInstance. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/template.openshift.io/v1/namespaces/{namespace}/templateinstances/{name}/status GET : read status of the specified TemplateInstance PATCH : partially update status of the specified TemplateInstance PUT : replace status of the specified TemplateInstance 5.2.1. /apis/template.openshift.io/v1/templateinstances HTTP method GET Description list or watch objects of kind TemplateInstance Table 5.1. HTTP responses HTTP code Reponse body 200 - OK TemplateInstanceList schema 401 - Unauthorized Empty 5.2.2. /apis/template.openshift.io/v1/watch/templateinstances HTTP method GET Description watch individual changes to a list of TemplateInstance. deprecated: use the 'watch' parameter with a list operation instead. Table 5.2. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 5.2.3. 
/apis/template.openshift.io/v1/namespaces/{namespace}/templateinstances HTTP method DELETE Description delete collection of TemplateInstance Table 5.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 5.4. HTTP responses HTTP code Reponse body 200 - OK Status_v9 schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind TemplateInstance Table 5.5. HTTP responses HTTP code Reponse body 200 - OK TemplateInstanceList schema 401 - Unauthorized Empty HTTP method POST Description create a TemplateInstance Table 5.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.7. Body parameters Parameter Type Description body TemplateInstance schema Table 5.8. HTTP responses HTTP code Reponse body 200 - OK TemplateInstance schema 201 - Created TemplateInstance schema 202 - Accepted TemplateInstance schema 401 - Unauthorized Empty 5.2.4. /apis/template.openshift.io/v1/watch/namespaces/{namespace}/templateinstances HTTP method GET Description watch individual changes to a list of TemplateInstance. deprecated: use the 'watch' parameter with a list operation instead. Table 5.9. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 5.2.5. /apis/template.openshift.io/v1/namespaces/{namespace}/templateinstances/{name} Table 5.10. Global path parameters Parameter Type Description name string name of the TemplateInstance HTTP method DELETE Description delete a TemplateInstance Table 5.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 5.12. HTTP responses HTTP code Reponse body 200 - OK Status_v9 schema 202 - Accepted Status_v9 schema 401 - Unauthorized Empty HTTP method GET Description read the specified TemplateInstance Table 5.13. 
HTTP responses HTTP code Reponse body 200 - OK TemplateInstance schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified TemplateInstance Table 5.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.15. HTTP responses HTTP code Reponse body 200 - OK TemplateInstance schema 201 - Created TemplateInstance schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified TemplateInstance Table 5.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.17. Body parameters Parameter Type Description body TemplateInstance schema Table 5.18. HTTP responses HTTP code Reponse body 200 - OK TemplateInstance schema 201 - Created TemplateInstance schema 401 - Unauthorized Empty 5.2.6. /apis/template.openshift.io/v1/watch/namespaces/{namespace}/templateinstances/{name} Table 5.19. Global path parameters Parameter Type Description name string name of the TemplateInstance HTTP method GET Description watch changes to an object of kind TemplateInstance. 
deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 5.20. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 5.2.7. /apis/template.openshift.io/v1/namespaces/{namespace}/templateinstances/{name}/status Table 5.21. Global path parameters Parameter Type Description name string name of the TemplateInstance HTTP method GET Description read status of the specified TemplateInstance Table 5.22. HTTP responses HTTP code Reponse body 200 - OK TemplateInstance schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified TemplateInstance Table 5.23. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.24. HTTP responses HTTP code Reponse body 200 - OK TemplateInstance schema 201 - Created TemplateInstance schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified TemplateInstance Table 5.25. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.26. 
Body parameters Parameter Type Description body TemplateInstance schema Table 5.27. HTTP responses HTTP code Response body 200 - OK TemplateInstance schema 201 - Created TemplateInstance schema 401 - Unauthorized Empty
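The endpoints listed above can also be exercised from the command line. The following is an illustrative sketch only; it assumes an authenticated oc session, and the namespace my-project and the instance name my-instance are hypothetical.
# List TemplateInstance objects in a namespace (namespace name is hypothetical).
oc get templateinstances -n my-project
# Call the documented REST endpoint directly through the API server.
oc get --raw /apis/template.openshift.io/v1/namespaces/my-project/templateinstances
# Read the status subresource of a single TemplateInstance.
oc get --raw /apis/template.openshift.io/v1/namespaces/my-project/templateinstances/my-instance/status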
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/template_apis/templateinstance-template-openshift-io-v1
10.5.2. ServerRoot
10.5.2. ServerRoot The ServerRoot directive specifies the top-level directory under which the server's configuration, error, and log files are kept. By default, ServerRoot is set to "/etc/httpd" for both secure and non-secure servers.
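A quick way to confirm the value in effect is to inspect the main configuration file; this is only a sketch, and the path shown is the Red Hat default.
# Show the ServerRoot directive currently set for the Apache HTTP Server.
grep -i '^ServerRoot' /etc/httpd/conf/httpd.conf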
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-apache-serverroot
5.5. Hot Plugging vCPUs
5.5. Hot Plugging vCPUs You can hot plug vCPUs. Hot plugging means enabling or disabling devices while a virtual machine is running. Important Hot unplugging a vCPU is only supported if the vCPU was previously hot plugged. A virtual machine's vCPUs cannot be hot unplugged to fewer vCPUs than it was originally created with. The following prerequisites apply: The virtual machine's Operating System must be explicitly set in the New Virtual Machine or Edit Virtual Machine window. The virtual machine's operating system must support CPU hot plug. See the table below for support details. Windows virtual machines must have the guest agents installed. See Section 3.3.2, "Installing the Guest Agents, Tools, and Drivers on Windows" . Hot Plugging vCPUs Click Compute Virtual Machines and select a running virtual machine. Click Edit . Click the System tab. Change the value of Virtual Sockets as required. Click OK . Table 5.1. Operating System Support Matrix for vCPU Hot Plug Operating System Version Architecture Hot Plug Supported Hot Unplug Supported Red Hat Enterprise Linux Atomic Host 7 x86 Yes Yes Red Hat Enterprise Linux 6.3+ x86 Yes Yes Red Hat Enterprise Linux 7.0+ x86 Yes Yes Red Hat Enterprise Linux 7.3+ PPC64 Yes Yes Red Hat Enterprise Linux 8.0+ x86 Yes Yes Microsoft Windows Server 2008 All x86 No No Microsoft Windows Server 2008 Standard, Enterprise x64 No No Microsoft Windows Server 2008 Datacenter x64 Yes No Microsoft Windows Server 2008 R2 All x86 No No Microsoft Windows Server 2008 R2 Standard, Enterprise x64 No No Microsoft Windows Server 2008 R2 Datacenter x64 Yes No Microsoft Windows Server 2012 All x64 Yes No Microsoft Windows Server 2012 R2 All x64 Yes No Microsoft Windows Server 2016 Standard, Datacenter x64 Yes No Microsoft Windows 7 All x86 No No Microsoft Windows 7 Starter, Home, Home Premium, Professional x64 No No Microsoft Windows 7 Enterprise, Ultimate x64 Yes No Microsoft Windows 8.x All x86 Yes No Microsoft Windows 8.x All x64 Yes No Microsoft Windows 10 All x86 Yes No Microsoft Windows 10 All x64 Yes No
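After hot plugging a vCPU, you can verify the change from inside a Linux guest. This is a minimal sketch that assumes the guest exposes the standard sysfs CPU interface; the CPU number 2 is illustrative, and recent guests usually bring hot-plugged CPUs online automatically.
# Count the vCPUs the guest currently sees.
lscpu | grep '^CPU(s):'
# A newly hot-plugged vCPU appears under /sys; bring it online if the guest
# did not do so automatically (cpu2 is an example).
cat /sys/devices/system/cpu/cpu2/online
echo 1 > /sys/devices/system/cpu/cpu2/online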
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/virtual_machine_management_guide/cpu_hot_plug
11.2. Configuring the Kerberos KDC
11.2. Configuring the Kerberos KDC Install the master KDC first and then install any necessary secondary servers after the master is set up. Important Setting up Kerberos KDC manually is not recommended. The recommended way to introduce Kerberos into Red Hat Enterprise Linux environments is to use the Identity Management feature. 11.2.1. Configuring the Master KDC Server Important The KDC system should be a dedicated machine. This machine needs to be very secure - if possible, it should not run any services other than the KDC. Install the required packages for the KDC: Edit the /etc/krb5.conf and /var/kerberos/krb5kdc/kdc.conf configuration files to reflect the realm name and domain-to-realm mappings. For example: A simple realm can be constructed by replacing instances of EXAMPLE.COM and example.com with the correct domain name - being certain to keep uppercase and lowercase names in the correct format - and by changing the KDC from kerberos.example.com to the name of the Kerberos server. By convention, all realm names are uppercase and all DNS host names and domain names are lowercase. The man pages of these configuration files have full details about the file formats. Create the database using the kdb5_util utility. The create command creates the database that stores keys for the Kerberos realm. The -s argument creates a stash file in which the master server key is stored. If no stash file is present from which to read the key, the Kerberos server ( krb5kdc ) prompts the user for the master server password (which can be used to regenerate the key) every time it starts. Edit the /var/kerberos/krb5kdc/kadm5.acl file. This file is used by kadmind to determine which principals have administrative access to the Kerberos database and their level of access. For example: Most users are represented in the database by a single principal (with a NULL , or empty, instance, such as [email protected] ). In this configuration, users with a second principal with an instance of admin (for example, joe/[email protected] ) are able to exert full administrative control over the realm's Kerberos database. After kadmind has been started on the server, any user can access its services by running kadmin on any of the clients or servers in the realm. However, only users listed in the kadm5.acl file can modify the database in any way, except for changing their own passwords. Note The kadmin utility communicates with the kadmind server over the network, and uses Kerberos to handle authentication. Consequently, the first principal must already exist before connecting to the server over the network to administer it. Create the first principal with the kadmin.local command, which is specifically designed to be used on the same host as the KDC and does not use Kerberos for authentication. Create the first principal using kadmin.local at the KDC terminal: Start Kerberos using the following commands: Add principals for the users using the addprinc command within kadmin . kadmin and kadmin.local are command line interfaces to the KDC. As such, many commands - such as addprinc - are available after launching the kadmin program. Refer to the kadmin man page for more information. Verify that the KDC is issuing tickets. First, run kinit to obtain a ticket and store it in a credential cache file. , use klist to view the list of credentials in the cache and use kdestroy to destroy the cache and the credentials it contains. 
Note By default, kinit attempts to authenticate using the same system login user name (not the Kerberos server). If that user name does not correspond to a principal in the Kerberos database, kinit issues an error message. If that happens, supply kinit with the name of the correct principal as an argument on the command line: 11.2.2. Setting up Secondary KDCs When there are multiple KDCs for a given realm, one KDC (the master KDC ) keeps a writable copy of the realm database and runs kadmind . The master KDC is also the realm's admin server . Additional secondary KDCs keep read-only copies of the database and run kpropd . The master and slave propagation procedure entails the master KDC dumping its database to a temporary dump file and then transmitting that file to each of its slaves, which then overwrite their previously received read-only copies of the database with the contents of the dump file. To set up a secondary KDC: Install the required packages for the KDC: Copy the master KDC's krb5.conf and kdc.conf files to the secondary KDC. Start kadmin.local from a root shell on the master KDC. Use the kadmin.local add_principal command to create a new entry for the master KDC's host service. [root@masterkdc ~]# kadmin.local -r EXAMPLE.COM Authenticating as principal root/[email protected] with password. kadmin: add_principal -randkey host/masterkdc.example.com Principal "host/[email protected]" created. kadmin: ktadd host/masterkdc.example.com Entry for principal host/masterkdc.example.com with kvno 3, encryption type Triple DES cbc mode with HMAC/sha1 added to keytab WRFILE:/etc/krb5.keytab. Entry for principal host/masterkdc.example.com with kvno 3, encryption type ArcFour with HMAC/md5 added to keytab WRFILE:/etc/krb5.keytab. Entry for principal host/masterkdc.example.com with kvno 3, encryption type DES with HMAC/sha1 added to keytab WRFILE:/etc/krb5.keytab. Entry for principal host/masterkdc.example.com with kvno 3, encryption type DES cbc mode with RSA-MD5 added to keytab WRFILE:/etc/krb5.keytab. kadmin: quit Use the kadmin.local ktadd command to set a random key for the service and store the random key in the master's default keytab file. Note This key is used by the kprop command to authenticate to the secondary servers. You will only need to do this once, regardless of how many secondary KDC servers you install. Start kadmin from a root shell on the secondary KDC. Use the kadmin.local add_principal command to create a new entry for the secondary KDC's host service. [root@slavekdc ~]# kadmin -p jsmith/[email protected] -r EXAMPLE.COM Authenticating as principal jsmith/[email protected] with password. Password for jsmith/[email protected]: kadmin: add_principal -randkey host/slavekdc.example.com Principal "host/[email protected]" created. kadmin: ktadd host/[email protected] Entry for principal host/slavekdc.example.com with kvno 3, encryption type Triple DES cbc mode with HMAC/sha1 added to keytab WRFILE:/etc/krb5.keytab. Entry for principal host/slavekdc.example.com with kvno 3, encryption type ArcFour with HMAC/md5 added to keytab WRFILE:/etc/krb5.keytab. Entry for principal host/slavekdc.example.com with kvno 3, encryption type DES with HMAC/sha1 added to keytab WRFILE:/etc/krb5.keytab. Entry for principal host/slavekdc.example.com with kvno 3, encryption type DES cbc mode with RSA-MD5 added to keytab WRFILE:/etc/krb5.keytab. 
kadmin: quit Use the kadmin.local ktadd command to set a random key for the service and store the random key in the secondary KDC server's default keytab file. This key is used by the kpropd service when authenticating clients. With its service key, the secondary KDC could authenticate any client which would connect to it. Obviously, not all potential clients should be allowed to provide the kprop service with a new realm database. To restrict access, the kprop service on the secondary KDC will only accept updates from clients whose principal names are listed in /var/kerberos/krb5kdc/kpropd.acl . Add the master KDC's host service's name to that file. Once the secondary KDC has obtained a copy of the database, it will also need the master key which was used to encrypt it. If the KDC database's master key is stored in a stash file on the master KDC (typically named /var/kerberos/krb5kdc/.k5.REALM ), either copy it to the secondary KDC using any available secure method, or create a dummy database and identical stash file on the secondary KDC by running kdb5_util create -s and supplying the same password. The dummy database will be overwritten by the first successful database propagation. Ensure that the secondary KDC's firewall allows the master KDC to contact it using TCP on port 754 ( krb5_prop ), and start the kprop service. Verify that the kadmin service is disabled . Perform a manual database propagation test by dumping the realm database on the master KDC to the default data file which the kprop command will read ( /var/kerberos/krb5kdc/slave_datatrans ). [root@masterkdc ~]# kdb5_util dump /var/kerberos/krb5kdc/slave_datatrans Use the kprop command to transmit its contents to the secondary KDC. [root@masterkdc ~]# kprop slavekdc.example.com Using kinit , verify that the client system is able to correctly obtain the initial credentials from the KDC. The /etc/krb5.conf for the client should list only the secondary KDC in its list of KDCs. Create a script which dumps the realm database and runs the kprop command to transmit the database to each secondary KDC in turn, and configure the cron service to run the script periodically. 11.2.3. Kerberos Key Distribution Center Proxy Some administrators might choose to make the default Kerberos ports inaccessible in their deployment. To allow users, hosts, and services to obtain Kerberos credentials, you can use the HTTPS service as a proxy that communicates with Kerberos via the HTTPS port 443. In Identity Management (IdM), the Kerberos Key Distribution Center Proxy (KKDCP) provides this functionality. KKDCP Server On an IdM server, KKDCP is enabled by default. The KKDCP is automatically enabled each time the Apache web server starts, if the attribute and value pair ipaConfigString=kdcProxyEnabled exists in the directory. In this situation, the symbolic link /etc/httpd/conf.d/ipa-kdc-proxy.conf is created. Thus, you can verify that KKDCP is enabled on an IdM Server by checking that the symbolic link exists. See the example server configurations below for more details. Example 11.1. Configuring the KKDCP server I Using the following example configuration, you can enable TCP to be used as the transport protocol between the IdM KKDCP and the Active Directory realm, where multiple Kerberos servers are used: In the /etc/ipa/kdcproxy/kdcproxy.conf file, set the use_dns parameter in the [global] section to false : Put the proxied realm information into the /etc/ipa/kdcproxy/kdcproxy.conf file. For the [AD. 
EXAMPLE.COM ] realm with proxy, for example, list the realm configuration parameters as follows: Important The realm configuration parameters must list multiple servers separated by a space, as opposed to /etc/krb5.conf and kdc.conf , in which certain options may be specified multiple times. Restart IdM services: Example 11.2. Configuring the KKDCP server II This example server configuration relies on the DNS service records to find AD servers to communicate with. In the /etc/ipa/kdcproxy/kdcproxy.conf file, the [global] section, set the use_dns parameter to true : The configs parameter allows you to load other configuration modules. In this case, the configuration is read from the MIT libkrb5 library. Optional: In case you do not want to use DNS service records, add explicit AD servers to the [realms] section of the /etc/krb5.conf file. If the realm with proxy is, for example, AD. EXAMPLE.COM , you add: Restart IdM services: KKDCP Client Client systems point to the KDC proxies through their /etc/krb5.conf files. Follow this procedure to reach the AD server. On the client, open the /etc/krb5.conf file, and add the name of the AD realm to the [realms] section: Open the /etc/sssd/sssd.conf file, and add the krb5_use_kdcinfo = False line to your IdM domain section: Restart the SSSD service: Additional Resources For details on configuring KKDCP for an Active Directory realm, see Configure IPA server as a KDC Proxy for AD Kerberos communication in Red Hat Knowledgebase.
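The secondary KDC procedure above ends by asking you to create a script that dumps the realm database and runs kprop against each secondary KDC, scheduled through cron. A minimal sketch of such a script follows; the secondary host name is illustrative, and the dump file path matches the default shown earlier.
#!/bin/bash
# Dump the realm database on the master KDC and push it to each secondary KDC.
# Add one host name per secondary KDC; slavekdc.example.com is an example.
SLAVES="slavekdc.example.com"
DUMPFILE=/var/kerberos/krb5kdc/slave_datatrans
kdb5_util dump "$DUMPFILE"
for kdc in $SLAVES; do
    kprop -f "$DUMPFILE" "$kdc"
done
# Example crontab entry to run the script hourly (path is hypothetical):
# 0 * * * * /usr/local/sbin/krb5-prop.sh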
[ "yum install krb5-server krb5-libs krb5-workstation", "[logging] default = FILE:/var/log/krb5libs.log kdc = FILE:/var/log/krb5kdc.log admin_server = FILE:/var/log/kadmind.log [libdefaults] default_realm = EXAMPLE.COM dns_lookup_realm = false dns_lookup_kdc = false ticket_lifetime = 24h renew_lifetime = 7d forwardable = true allow_weak_crypto = true [realms] EXAMPLE.COM = { kdc = kdc.example.com.:88 admin_server = kdc.example.com default_domain = example.com } [domain_realm] .example.com = EXAMPLE.COM example.com = EXAMPLE.COM", "kdb5_util create -s", "*/[email protected] *", "kadmin.local -q \"addprinc username /admin\"", "systemctl start krb5kdc.service systemctl start kadmin.service", "kinit principal", "yum install krb5-server krb5-libs krb5-workstation", "kadmin.local -r EXAMPLE.COM Authenticating as principal root/[email protected] with password. kadmin: add_principal -randkey host/masterkdc.example.com Principal \"host/[email protected]\" created. kadmin: ktadd host/masterkdc.example.com Entry for principal host/masterkdc.example.com with kvno 3, encryption type Triple DES cbc mode with HMAC/sha1 added to keytab WRFILE:/etc/krb5.keytab. Entry for principal host/masterkdc.example.com with kvno 3, encryption type ArcFour with HMAC/md5 added to keytab WRFILE:/etc/krb5.keytab. Entry for principal host/masterkdc.example.com with kvno 3, encryption type DES with HMAC/sha1 added to keytab WRFILE:/etc/krb5.keytab. Entry for principal host/masterkdc.example.com with kvno 3, encryption type DES cbc mode with RSA-MD5 added to keytab WRFILE:/etc/krb5.keytab. kadmin: quit", "kadmin -p jsmith/[email protected] -r EXAMPLE.COM Authenticating as principal jsmith/[email protected] with password. Password for jsmith/[email protected]: kadmin: add_principal -randkey host/slavekdc.example.com Principal \"host/[email protected]\" created. kadmin: ktadd host/[email protected] Entry for principal host/slavekdc.example.com with kvno 3, encryption type Triple DES cbc mode with HMAC/sha1 added to keytab WRFILE:/etc/krb5.keytab. Entry for principal host/slavekdc.example.com with kvno 3, encryption type ArcFour with HMAC/md5 added to keytab WRFILE:/etc/krb5.keytab. Entry for principal host/slavekdc.example.com with kvno 3, encryption type DES with HMAC/sha1 added to keytab WRFILE:/etc/krb5.keytab. Entry for principal host/slavekdc.example.com with kvno 3, encryption type DES cbc mode with RSA-MD5 added to keytab WRFILE:/etc/krb5.keytab. kadmin: quit", "echo host/[email protected] > /var/kerberos/krb5kdc/kpropd.acl", "kdb5_util dump /var/kerberos/krb5kdc/slave_datatrans", "kprop slavekdc.example.com", "[realms] EXAMPLE.COM = { kdc = slavekdc.example.com.:88 admin_server = kdc.example.com default_domain = example.com }", "ls -l /etc/httpd/conf.d/ipa-kdc-proxy.conf lrwxrwxrwx. 1 root root 36 Aug 15 09:37 /etc/httpd/conf.d/ipa-kdc-proxy.conf -> /etc/ipa/kdcproxy/ipa-kdc-proxy.conf", "[global] use_dns = false", "[AD. EXAMPLE.COM ] kerberos = kerberos+tcp://1.2.3.4:88 kerberos+tcp://5.6.7.8:88 kpasswd = kpasswd+tcp://1.2.3.4:464 kpasswd+tcp://5.6.7.8:464", "ipactl restart", "[global] configs = mit use_dns = true", "[realms] AD. EXAMPLE.COM = { kdc = ad-server.ad.example.com kpasswd_server = ad-server.ad.example.com }", "ipactl restart", "[realms] AD. 
EXAMPLE.COM { kdc = https://ipa-server.example.com/KdcProxy kdc = https://ipa-server2.example.com/KdcProxy kpasswd_server = https://ipa-server.example.com/KdcProxy kpasswd_server = https://ipa-server2.example.com/KdcProxy }", "[domain/ example.com ] krb5_use_kdcinfo = False", "systemctl restart sssd.service" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system-level_authentication_guide/configuring_a_kerberos_5_server
function::read_stopwatch_ns
function::read_stopwatch_ns Name function::read_stopwatch_ns - Reads the time in nanoseconds for a stopwatch Synopsis Arguments name stopwatch name Description Returns time in nanoseconds for stopwatch name . Creates stopwatch name if it does not currently exist.
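A small usage sketch, run through the stap command line (requires the systemtap package); the stopwatch name and probe points are illustrative.
# Start a stopwatch at session start and report the elapsed time after two seconds.
stap -e '
probe begin { start_stopwatch("demo") }
probe timer.s(2) {
    printf("elapsed: %d ns\n", read_stopwatch_ns("demo"))
    exit()
}
'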
[ "read_stopwatch_ns:long(name:string)" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-read-stopwatch-ns
Chapter 4. Using AMQ Management Console
Chapter 4. Using AMQ Management Console AMQ Management Console is a web console included in the AMQ Broker installation that enables you to use a web browser to manage AMQ Broker. AMQ Management Console is based on hawtio . 4.1. Overview AMQ Broker is a full-featured, message-oriented middleware broker. It offers specialized queueing behaviors, message persistence, and manageability. It supports multiple protocols and client languages, freeing you to use many of your application assets. AMQ Broker's key features allow you to: monitor your AMQ brokers and clients view the topology view network health at a glance manage AMQ brokers using: AMQ Management Console Command-line Interface (CLI) Management API The supported web browsers for AMQ Management Console are Firefox and Chrome. For more information on supported browser versions, see AMQ 7 Supported Configurations . 4.2. Configuring local and remote access to AMQ Management Console The procedure in this section shows how to configure local and remote access to AMQ Management Console. Remote access to the console can take one of two forms: Within a console session on a local broker, you use the Connect tab to connect to another, remote broker From a remote host, you connect to the console for the local broker, using an externally-reachable IP address for the local broker Prerequisites You must upgrade to at least AMQ Broker 7.1.0. As part of this upgrade, an access-management configuration file named jolokia-access.xml is added to the broker instance. For more information about upgrading, see Upgrading a Broker instance from 7.0.x to 7.1.0 . Procedure Open the <broker_instance_dir> /etc/bootstrap.xml file. Within the web element, observe that the web port is bound only to localhost by default. <web path="web"> <binding uri="http://localhost:8161"> <app url="redhat-branding" war="redhat-branding.war"/> <app url="artemis-plugin" war="artemis-plugin.war"/> <app url="dispatch-hawtio-console" war="dispatch-hawtio-console.war"/> <app url="console" war="console.war"/> </binding> </web> To enable connection to the console for the local broker from a remote host, change the web port binding to a network-reachable interface. For example: <web path="web"> <binding uri="http://0.0.0.0:8161"> In the preceding example, by specifying 0.0.0.0 , you bind the web port to all interfaces on the local broker. Save the bootstrap.xml file. Open the <broker_instance_dir> /etc/jolokia-access.xml file. Within the <cors> (that is, Cross-Origin Resource Sharing ) element, add an allow-origin entry for each HTTP origin request header that you want to allow to access the console. For example: <cors> <allow-origin>*://localhost*</allow-origin> <allow-origin>*://192.168.0.49*</allow-origin> <allow-origin>*://192.168.0.51*</allow-origin> <!-- Check for the proper origin on the server side, too --> <strict-checking/> </cors> In the preceding configuration, you specify that the following connections are allowed: Connection from the local host (that is, the host machine for your local broker instance) to the console. The first asterisk ( * ) wildcard character allows either the http or https scheme to be specified in the connection request, based on whether you have configured the console for secure connections. The second asterisk wildcard character allows any port on the host machine to be used for the connection. Connection from a remote host to the console for the local broker, using the externally-reachable IP address of the local broker. 
In this case, the externally-reachable IP address of the local broker is 192.168.0.49 . Connection from within a console session opened on another, remote broker to the local broker. In this case, the IP address of the remote broker is 192.168.0.51 . Save the jolokia-access.xml file. Open the <broker_instance_dir> /etc/artemis.profile file. To enable the Connect tab in the console, set the value of the Dhawtio.disableProxy argument to false . -Dhawtio.disableProxy=false Important It is recommended that you enable remote connections from the console (that is, set the value of the Dhawtio.disableProxy argument to false ) only if the console is exposed to a secure network. Add a new argument, Dhawtio.proxyWhitelist , to the JAVA_ARGS list of Java system arguments. As a comma-separated list, specify IP addresses for any remote brokers that you want to connect to from the local broker (that is, by using the Connect tab within a console session running on the local broker). For example: -Dhawtio.proxyWhitelist=192.168.0.51 Based on the preceding configuration, you can use the Connect tab within a console session on the local broker to connect to another, remote broker with an IP address of 192.168.0.51 . Save the aretmis.profile file. Additional resources To learn how to access the console, see Section 4.3, "Accessing AMQ Management Console" . For more information about: Cross-Origin Resource Sharing, see W3C Recommendations . Jolokia security, see Jolokia Protocols . Securing connections to the console, see Section 4.4.3, "Securing network access to AMQ Management Console" . 4.3. Accessing AMQ Management Console The procedure in this section shows how to: Open AMQ Management Console from the local broker Connect to other brokers from within a console session on the local broker Open a console instance for the local broker from a remote host using the externally-reachable IP address of the local broker Prerequisites You must have already configured local and remote access to the console. For more information, see Section 4.2, "Configuring local and remote access to AMQ Management Console" . Procedure In your web browser, navigate to the console address for the local broker. The console address is http:// <host:port> /console/login . If you are using the default address, navigate to http://localhost:8161/console/login . Otherwise, use the values of host and port that are defined for the bind attribute of the web element in the <broker_instance_dir> /etc/bootstrap.xml configuration file. Figure 4.1. Console login page Log in to AMQ Management Console using the default user name and password that you created when you created the broker. To connect to another, remote broker from the console session of the local broker: In the left menu, click the Connect tab. In the main pane, on the Remote tab, click the Add connection button. In the Add Connection dialog box, specify the following details: Name Name for the remote connection, for example, my_other_broker . Scheme Protocol to use for the remote connection. Select http for a non-secured connection, or https for a secured connection. Host IP address of a remote broker. You must have already configured console access for this remote broker. Port Port on the local broker to use for the remote connection. Specify the port value that is defined for the bind attribute of the web element in the <broker_instance_dir> /etc/bootstrap.xml configuration file. The default value is 8161 . Path Path to use for console access. Specify console/jolokia . 
To test the connection, click the Test Connection button. If the connection test is successful, click the Add button. If the connection test fails, review and modify the connection details as needed. Test the connection again. On the Remote page, for a connection that you have added, click the Connect button. A new web browser tab opens for the console instance on the remote broker. In the Log In dialog box, enter the user name and password for the remote broker. Click Log In . The console instance for the remote broker opens. To connect to the console for the local broker from a remote host, specify the Jolokia endpoint for the local broker in a web browser. This endpoint includes the externally-reachable IP address that you specified for the local broker when configuring remote console access. For example: 4.4. Configuring AMQ Management Console Configure user access and request access to resources on the broker. 4.4.1. Securing AMQ Management Console using Red Hat Single Sign-On Prerequisites Red Hat Single Sign-On 7.4 Procedure Configure Red Hat Single Sign-On: Navigate to the realm in Red Hat Single Sign-On that you want to use for securing AMQ Management Console. Each realm in Red Hat Single Sign-On includes a client named Broker . This client is not related to AMQ. Create a new client in Red Hat Single Sign-On, for example artemis-console . Navigate to the client settings page and set: Valid Redirect URIs to the AMQ Management Console URL followed by * , for example: Web Origins to the same value as Valid Redirect URIs . Red Hat Single Sign-On allows you enter + , indicating that allowed CORS origins includes the value for Valid Redirect URIs . Create a role for the client, for example guest . Make sure all users who require access to AMQ Management Console are assigned the above role, for example, using Red Hat Single Sign-On groups. Configure the AMQ Broker instance: Add the following to your <broker-instance-dir> /instances/broker0/etc/login.config file to configure AMQ Management Console to use Red Hat Single Sign-On: Adding this configuration sets up a JAAS principal and a requirement for a bearer token from Red Hat Single Sign-On. The connection to Red Hat Single Sign-On is defined in the keycloak-bearer-token.json file, as described in the step. Create a file <broker-instance-dir> /etc/keycloak-bearer-token.json with the following contents to specify the connection to Red Hat Single Sign-On used for the bearer token exchange: { "realm": " <realm-name> ", "resource": " <client-name> ", "auth-server-url": " <RHSSO-URL> /auth", "principal-attribute": "preferred_username", "use-resource-role-mappings": true, "ssl-required": "external", "confidential-port": 0 } <realm-name> the name of the realm in Red Hat Single Sign-On <client-name> the name of the client in Red Hat Single Sign-On <RHSSO-URL> the URL of Red Hat Single Sign-On Create a file <broker-instance-dir> /etc/keycloak-js-token.json with the following contents to specify the Red Hat Single Sign-On authentication endpoint: { "realm": "<realm-name>", "clientId": "<client-name>", "url": " <RHSSO-URL> /auth" } Configure the security settings by editing the <broker-instance-dir> /etc/broker.xml file. 
For example, to allow users with the amq role consume messages and allow users with the guest role send messages, add the following: <security-setting match="Info"> <permission roles="amq" type="createDurableQueue"/> <permission roles="amq" type="deleteDurableQueue"/> <permission roles="amq" type="createNonDurableQueue"/> <permission roles="amq" type="deleteNonDurableQueue"/> <permission roles="guest" type="send"/> <permission roles="amq" type="consume"/> </security-setting> Run the AMQ Broker instance and validate AMQ Management Console configuration. 4.4.2. Setting up user access to AMQ Management Console You can access AMQ Management Console using the broker login credentials. The following table provides information about different methods to add additional broker users to access AMQ Management Console: Authentication Method Description Guest authentication Enables anonymous access. In this configuration, any user who connects without credentials or with the wrong credentials will be authenticated automatically and assigned a specific user and role. For more information, see Configuring guest access in Configuring AMQ Broker . Basic user and password authentication For each user, you must define a username and password and assign a security role. Users can only log into AMQ Management Console using these credentials. For more information, see Configuring basic user and password authentication in Configuring AMQ Broker . LDAP authentication Users are authenticated and authorized by checking the credentials against user data stored in a central X.500 directory server. For more information, see Configuring LDAP to authenticate clients in Configuring AMQ Broker . 4.4.3. Securing network access to AMQ Management Console To secure AMQ Management Console when the console is being accessed over a WAN or the internet, use SSL to specify that network access uses https instead of http . Prerequisites The following should be located in the <broker_instance_dir> /etc/ directory: Java key store Java trust store (needed only if you require client authentication) Procedure Open the <broker_instance_dir> /etc/bootstrap.xml file. In the <web> element, add the following attributes: <web path="web"> <binding uri="https://0.0.0.0:8161" keyStorePath="<path_to_keystore>" keyStorePassword="<password>" clientAuth="<true/false>" trustStorePath="<path_to_truststore>" trustStorePassword="<password>"> </binding> </web> bind For secure connections to the console, change the URI scheme to https . keyStorePath Path of the keystore file. For example: keyStorePath=" <broker_instance_dir> /etc/keystore.jks" keyStorePassword Key store password. This password can be encrypted. clientAuth Specifies whether client authentication is required. The default value is false . trustStorePath Path of the trust store file. You need to define this attribute only if clientAuth is set to true . trustStorePassword Trust store password. This password can be encrypted. Additional resources For more information about encrypting passwords in broker configuration files, including bootstrap.xml , see Encrypting Passwords in Configuration Files . 4.4.4. Configuring AMQ Management Console to use certificate-based authentication You can configure AMQ Management Console to authenticate users by using certificates instead of passwords. Procedure Obtain certificates for the broker and clients from a trusted certificate authority or generate self-signed certificates. 
If you want to generate self-signed certificates, complete the following steps: Generate a self-signed certificate for the broker. USD keytool -storetype pkcs12 -keystore broker-keystore.p12 -storepass securepass -keypass securepass -alias client -genkey -keyalg "RSA" -keysize 2048 -dname "CN=ActiveMQ Broker, OU=Artemis, O=ActiveMQ, L=AMQ, S=AMQ, C=AMQ" -ext bc=ca:false -ext eku=cA Export the certificate from the broker keystore, so that it can be shared with clients. USD keytool -storetype pkcs12 -keystore broker-keystore.p12 -storepass securepass -alias client -exportcert -rfc > broker.crt On the client, import the broker certificate into the client truststore. USD keytool -storetype pkcs12 -keystore client-truststore.p12 -storepass securepass -keypass securepass -importcert -alias client-ca -file broker.crt -noprompt On the client, generate a self-signed certificate for the client. USD keytool -storetype pkcs12 -keystore client-keystore.p12 -storepass securepass -keypass securepass -alias client -genkey -keyalg "RSA" -keysize 2048 -dname "CN=ActiveMQ Client, OU=Artemis, O=ActiveMQ, L=AMQ, S=AMQ, C=AMQ" -ext bc=ca:false -ext eku=cA Export the client certificate from the client keystore to a file so that it can be added to the broker truststore. USD keytool -storetype pkcs12 -keystore client-keystore.p12 -storepass securepass -alias client -exportcert -rfc > client.crt Import the client certificate into the broker truststore. USD keytool -storetype pkcs12 -keystore client-truststore.p12 -storepass securepass -keypass securepass -importcert -alias client-ca -file client.crt -noprompt Note On the broker machine, ensure that the keystore and truststore files are in a location that is accessible to the broker. In the <broker_instance_dir>/etc/bootstrap.xml file, update the web configuration to enable the HTTPS protocol and client authentication for the broker console. For example: ... <web path="web"> <binding uri="https://localhost:8161" keyStorePath="USD{artemis.instance}/etc/server-keystore.p12" keyStorePassword="password" clientAuth="true" trustStorePath="USD{artemis.instance}/etc/client-truststore.p12" trustStorePassword="password"> ... </binding> </web> ... binding uri Specify the https protocol to enable SSL and add a host name and port. keystorePath The path to the keystore where the broker certificate is installed. keystorePassword The password of the keystore where the broker certificate is installed. ClientAuth Set to true to configure the broker to require that each client presents a certificate when a client tries to connect to the broker console. trustStorePath If clients are using self-signed certificates, specify the path to the truststore where client certificates are installed. trustStorePassword If clients are using self-signed certificates, specify the password of the truststore where client certificates are installed . NOTE. You need to configure the trustStorePath and trustStorePassword properties only if clients are using self-signed certificates. Obtain the Subject Distinguished Names (DNs) from each client certificate so you can create a mapping between each client certificate and a broker user. Export each client certificate from the client's keystore file into a temporary file. For example: Print the contents of the exported certificate: The output is similar to that shown below: The Owner entry is the Subject DN. The format used to enter the Subject DN depends on your platform. 
The string above could also be represented as; Enable certificate-based authentication for the broker's console. Open the <broker_instance_dir> /etc/login.config configuration file. Add the certificate login module and reference the user and roles properties files. For example: activemq { org.apache.activemq.artemis.spi.core.security.jaas.TextFileCertificateLoginModule debug=true org.apache.activemq.jaas.textfiledn.user="artemis-users.properties" org.apache.activemq.jaas.textfiledn.role="artemis-roles.properties"; }; org.apache.activemq.artemis.spi.core.security.jaas.TextFileCertificateLoginModule The implementation class. org.apache.activemq.jaas.textfiledn.user Specifies the location of the user properties file relative to the directory that contains the login configuration file. org.apache.activemq.jaas.textfiledn.role Specifies the properties file that maps users to defined roles for the login module implementation. Note If you change the default name of the certificate login module configuration in the <broker_instance_dir> /etc/login.config file, you must update the value of the -dhawtio.realm argument in the <broker_instance_dir>/etc/artemis.profile file to match the new name. The default name is activemq . Open the <broker_instance_dir>/etc/artemis-users.properties file. Create a mapping between client certificates and broker users by adding the Subject DNS that you obtained from each client certificate to a broker user. For example: user1=CN=user1,O=Progress,C=US user2=CN=user2,O=Progress,C=US In this example, the user1 broker user is mapped to the client certificate that has a Subject Distinguished Name of CN=user1,O=Progress,C=US Subject DN. After you create a mapping between a client certificate and a broker user, the broker can authenticate the user by using the certificate. Open the <broker_instance_dir>/etc/artemis-roles.properties file. Grant users permission to log in to the console by adding them to the role that is specified for the HAWTIO_ROLE variable in the <broker_instance_dir>/etc/artemis.profile file. The default value of the HAWTIO_ROLE variable is amq . For example: amq=user1, user2 Configure the following recommended security properties for the HTTPS protocol. Open the <broker_instance_dir>/etc/artemis.profile file. Set the hawtio.http.strictTransportSecurity property to allow only HTTPS requests to the AMQ Management Console and to convert any HTTP requests to HTTPS. For example: hawtio.http.strictTransportSecurity = max-age=31536000; includeSubDomains; preload Set the hawtio.http.publicKeyPins property to instruct the web browser to associate a specific cryptographic public key with the AMQ Management Console to decrease the risk of "man-in-the-middle" attacks using forged certificates. For example: hawtio.http.publicKeyPins = pin-sha256="..."; max-age=5184000; includeSubDomains 4.5. Managing brokers using AMQ Management Console You can use AMQ Management Console to view information about a running broker and manage the following resources: Incoming network connections (acceptors) Addresses Queues 4.5.1. Viewing details about the broker To see how the broker is configured, in the left menu, click Artemis . In the folder tree, the local broker is selected by default. In the main pane, the following tabs are available: Status Displays information about the current status of the broker, such as uptime and cluster information. Also displays the amount of address memory that the broker is currently using. 
The graph shows this value as a proportion of the global-max-size configuration parameter. Figure 4.2. Status tab Connections Displays information about broker connections, including client, cluster, and bridge connections. Sessions Displays information about all sessions currently open on the broker. Consumers Displays information about all consumers currently open on the broker. Producers Displays information about producers currently open on the broker. Addresses Displays information about addresses on the broker. This includes internal addresses, such as store-and-forward addresses. Queues Displays information about queues on the broker. This includes internal queues, such as store-and-forward queues. Attributes Displays detailed information about attributes configured on the broker. Operations Displays JMX operations that you can execute on the broker from the console. When you click an operation, a dialog box opens that enables you to specify parameter values for the operation. Chart Displays real-time data for attributes configured on the broker. You can edit the chart to specify the attributes that are included in the chart. Broker diagram Displays a diagram of the cluster topology. This includes all brokers in the cluster and any addresses and queues on the local broker. 4.5.2. Viewing the broker diagram You can view a diagram of all AMQ Broker resources in your topology, including brokers (live and backup brokers), producers and consumers, addresses, and queues. Procedure In the left menu, click Artemis . In the main pane, click the Broker diagram tab. The console displays a diagram of the cluster topology. This includes all brokers in the cluster and any addresses and queues on the local broker, as shown in the figure. Figure 4.3. Broker diagram tab To change what items are displayed on the diagram, use the check boxes at the top of the diagram. Click Refresh . To show attributes for the local broker or an address or queue that is connected to it, click that node in the diagram. For example, the following figure shows a diagram that also includes attributes for the local broker. Figure 4.4. Broker diagram tab, including attributes 4.5.3. Viewing acceptors You can view details about the acceptors configured for the broker. Procedure In the left menu, click Artemis . In the folder tree, click acceptors . To view details about how an acceptor is configured, click the acceptor. The console shows the corresponding attributes on the Attributes tab, as shown in the figure. Figure 4.5. AMQP acceptor attributes To see complete details for an attribute, click the attribute. An additional window opens to show the details. 4.5.4. Managing addresses and queues An address represents a messaging endpoint. Within the configuration, a typical address is given a unique name. A queue is associated with an address. There can be multiple queues per address. Once an incoming message is matched to an address, the message is sent on to one or more of its queues, depending on the routing type configured. Queues can be configured to be automatically created and deleted. 4.5.4.1. Creating addresses A typical address is given a unique name, zero or more queues, and a routing type. A routing type determines how messages are sent to the queues associated with an address. Addresses can be configured with two different routing types. If you want your messages routed to... Use this routing type... A single queue within the matching address, in a point-to-point manner. 
Anycast Every queue within the matching address, in a publish-subscribe manner. Multicast You can create and configure addresses and queues, and then delete them when they are no longer in use. Procedure In the left menu, click Artemis . In the folder tree, click addresses . In the main pane, click the Create address tab. A page appears for you to create an address, as shown in the figure. Figure 4.6. Create Address page Complete the following fields: Address name The routing name of the address. Routing type Select one of the following options: Multicast : Messages sent to the address will be distributed to all subscribers in a publish-subscribe manner. Anycast : Messages sent to this address will be distributed to only one subscriber in a point-to-point manner. Both : Enables you to define more than one routing type per address. This typically results in an anti-pattern and is not recommended. Note If an address does use both routing types, and the client does not show a preference for either one, the broker defaults to the anycast routing type. The one exception is when the client uses the MQTT protocol. In that case, the default routing type is multicast . Click Create Address . 4.5.4.2. Sending messages to an address The following procedure shows how to use the console to send a message to an address. Procedure In the left menu, click Artemis . In the folder tree, select an address. On the navigation bar in the main pane, click More Send message . A page appears for you to create a message, as shown in the figure. Figure 4.7. Send Message page If necessary, click the Add Header button to add message header information. Enter the message body. In the Format drop-down menu, select an option for the format of the message body, and then click Format . The message body is formatted in a human-readable style for the format you selected. Click Send message . The message is sent. To send additional messages, change any of the information you entered, and then click Send message . 4.5.4.3. Creating queues Queues provide a channel between a producer and a consumer. Prerequisites The address to which you want to bind the queue must exist. To learn how to use the console to create an address, see Section 4.5.4.1, "Creating addresses" . Procedure In the left menu, click Artemis . In the folder tree, select the address to which you want to bind the queue. In the main pane, click the Create queue tab. A page appears for you to create a queue, as shown in the figure. Figure 4.8. Create Queue page Complete the following fields: Queue name A unique name for the queue. Routing type Select one of the following options: Multicast : Messages sent to the parent address will be distributed to all queues bound to the address. Anycast : Only one queue bound to the parent address will receive a copy of the message. Messages will be distributed evenly among all of the queues bound to the address. Durable If you select this option, the queue and its messages will be persistent. Filter The username to be used when connecting to the broker. Max Consumers The maximum number of consumers that can access the queue at a given time. Purge when no consumers If selected, the queue will be purged when no consumers are connected. Click Create Queue . 4.5.4.4. Checking the status of a queue Charts provide a real-time view of the status of a queue on a broker. Procedure In the left menu, click Artemis . In the folder tree, navigate to a queue. In the main pane, click the Chart tab. 
The console displays a chart that shows real-time data for all of the queue attributes. Figure 4.9. Chart tab for a queue Note To view a chart for multiple queues on an address, select the anycast or multicast folder that contains the queues. If necessary, select different criteria for the chart: In the main pane, click Edit . On the Attributes list, select one or more attributes that you want to include in the chart. To select multiple attributes, press and hold the Ctrl key and select each attribute. Click the View Chart button. The chart is updated based on the attributes that you selected. 4.5.4.5. Browsing queues Browsing a queue displays all of the messages in the queue. You can also filter and sort the list to find specific messages. Procedure In the left menu, click Artemis . In the folder tree, navigate to a queue. Queues are located within the addresses to which they are bound. On the navigation bar in the main pane, click More Browse queue . The messages in the queue are displayed. By default, the first 200 messages are displayed. Figure 4.10. Browse Queue page To browse for a specific message or group of messages, do one of the following: To... Do this... Filter the list of messages In the Filter... text field, enter filter criteria. Click the search (that is, magnifying glass) icon. Sort the list of messages In the list of messages, click a column header. To sort the messages in descending order, click the header a second time. To view the content of a message, click the Show button. You can view the message header, properties, and body. 4.5.4.6. Sending messages to a queue After creating a queue, you can send a message to it. The following procedure outlines the steps required to send a message to an existing queue. Procedure In the left menu, click Artemis . In the folder tree, navigate to a queue. In the main pane, click the Send message tab. A page appears for you to compose the message. Figure 4.11. Send Message page for a queue If necessary, click the Add Header button to add message header information. Enter the message body. In the Format drop-down menu, select an option for the format of the message body, and then click Format . The message body is formatted in a human-readable style for the format you selected. Click Send message . The message is sent. To send additional messages, change any of the information you entered, and click Send message . 4.5.4.7. Resending messages to a queue You can resend previously sent messages. Procedure Browse for the message you want to resend . Click the check box to the message that you want to resend. Click the Resend button. The message is displayed. Update the message header and body as needed, and then click Send message . 4.5.4.8. Moving messages to a different queue You can move one or more messages in a queue to a different queue. Procedure Browse for the messages you want to move . Click the check box to each message that you want to move. In the navigation bar, click Move Messages . A confirmation dialog box appears. From the drop-down menu, select the name of the queue to which you want to move the messages. Click Move . 4.5.4.9. Deleting messages or queues You can delete a queue or purge all of the messages from a queue. Procedure Browse for the queue you want to delete or purge . Do one of the following: To... Do this... Delete a message from the queue Click the check box to each message that you want to delete. Click the Delete button. 
Purge all messages from the queue On the navigation bar in the main pane, click Delete queue . Click the Purge Queue button. Delete the queue On the navigation bar in the main pane, click Delete queue . Click the Delete Queue button.
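The console procedures above assume that there are messages on the broker to browse, move, resend, or delete. If you need test messages while trying out these views, one convenient option is the artemis CLI that ships with the broker, which includes a simple test producer. The following command is only a minimal sketch and is not part of the console procedures: the queue name, credentials, and acceptor URL shown here are placeholder values that must match your own broker configuration, and the available CLI options can differ between AMQ Broker versions.
USD <broker_instance_dir>/bin/artemis producer --url tcp://localhost:61616 --user <username> --password <password> --destination queue://my.queue --message-count 10
After the command completes, browsing my.queue as described in Section 4.5.4.5, "Browsing queues" should show the ten test messages.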
[ "<web path=\"web\"> <binding uri=\"http://localhost:8161\"> <app url=\"redhat-branding\" war=\"redhat-branding.war\"/> <app url=\"artemis-plugin\" war=\"artemis-plugin.war\"/> <app url=\"dispatch-hawtio-console\" war=\"dispatch-hawtio-console.war\"/> <app url=\"console\" war=\"console.war\"/> </binding> </web>", "<web path=\"web\"> <binding uri=\"http://0.0.0.0:8161\">", "<cors> <allow-origin>*://localhost*</allow-origin> <allow-origin>*://192.168.0.49*</allow-origin> <allow-origin>*://192.168.0.51*</allow-origin> <!-- Check for the proper origin on the server side, too --> <strict-checking/> </cors>", "-Dhawtio.disableProxy=false", "-Dhawtio.proxyWhitelist=192.168.0.51", "http://192.168.0.49/console/jolokia", "https://broker.example.com:8161/console/*", "console { org.keycloak.adapters.jaas.BearerTokenLoginModule required keycloak-config-file=\"USD{artemis.instance}/etc/keycloak-bearer-token.json\" role-principal-class=org.apache.activemq.artemis.spi.core.security.jaas.RolePrincipal ; };", "{ \"realm\": \" <realm-name> \", \"resource\": \" <client-name> \", \"auth-server-url\": \" <RHSSO-URL> /auth\", \"principal-attribute\": \"preferred_username\", \"use-resource-role-mappings\": true, \"ssl-required\": \"external\", \"confidential-port\": 0 }", "{ \"realm\": \"<realm-name>\", \"clientId\": \"<client-name>\", \"url\": \" <RHSSO-URL> /auth\" }", "<security-setting match=\"Info\"> <permission roles=\"amq\" type=\"createDurableQueue\"/> <permission roles=\"amq\" type=\"deleteDurableQueue\"/> <permission roles=\"amq\" type=\"createNonDurableQueue\"/> <permission roles=\"amq\" type=\"deleteNonDurableQueue\"/> <permission roles=\"guest\" type=\"send\"/> <permission roles=\"amq\" type=\"consume\"/> </security-setting>", "<web path=\"web\"> <binding uri=\"https://0.0.0.0:8161\" keyStorePath=\"<path_to_keystore>\" keyStorePassword=\"<password>\" clientAuth=\"<true/false>\" trustStorePath=\"<path_to_truststore>\" trustStorePassword=\"<password>\"> </binding> </web>", "keyStorePath=\" <broker_instance_dir> /etc/keystore.jks\"", "keytool -storetype pkcs12 -keystore broker-keystore.p12 -storepass securepass -keypass securepass -alias client -genkey -keyalg \"RSA\" -keysize 2048 -dname \"CN=ActiveMQ Broker, OU=Artemis, O=ActiveMQ, L=AMQ, S=AMQ, C=AMQ\" -ext bc=ca:false -ext eku=cA", "keytool -storetype pkcs12 -keystore broker-keystore.p12 -storepass securepass -alias client -exportcert -rfc > broker.crt", "keytool -storetype pkcs12 -keystore client-truststore.p12 -storepass securepass -keypass securepass -importcert -alias client-ca -file broker.crt -noprompt", "keytool -storetype pkcs12 -keystore client-keystore.p12 -storepass securepass -keypass securepass -alias client -genkey -keyalg \"RSA\" -keysize 2048 -dname \"CN=ActiveMQ Client, OU=Artemis, O=ActiveMQ, L=AMQ, S=AMQ, C=AMQ\" -ext bc=ca:false -ext eku=cA", "keytool -storetype pkcs12 -keystore client-keystore.p12 -storepass securepass -alias client -exportcert -rfc > client.crt", "keytool -storetype pkcs12 -keystore client-truststore.p12 -storepass securepass -keypass securepass -importcert -alias client-ca -file client.crt -noprompt", "<web path=\"web\"> <binding uri=\"https://localhost:8161\" keyStorePath=\"USD{artemis.instance}/etc/server-keystore.p12\" keyStorePassword=\"password\" clientAuth=\"true\" trustStorePath=\"USD{artemis.instance}/etc/client-truststore.p12\" trustStorePassword=\"password\"> </binding> </web>", "keytool -export -file <file_name> -alias broker-localhost -keystore broker.ks -storepass <password>", "keytool -printcert 
-file <file_name>", "Owner: CN=AMQ Client, OU=Artemis, O=AMQ, L=AMQ, ST=AMQ, C=AMQ Issuer: CN=AMQ Client, OU=Artemis, O=AMQ, L=AMQ, ST=AMQ, C=AMQ Serial number: 51461f5d Valid from: Sun Apr 17 12:20:14 IST 2022 until: Sat Jul 16 12:20:14 IST 2022 Certificate fingerprints: SHA1: EC:94:13:16:04:93:57:4F:FD:CA:AD:D8:32:68:A4:13:CC:EA:7A:67 SHA256: 85:7F:D5:4A:69:80:3B:5B:86:27:99:A7:97:B8:E4:E8:7D:6F:D1:53:08:D8:7A:BA:A7:0A:7A:96:F3:6B:98:81", "Owner: `CN=localhost,\\ OU=broker,\\ O=Unknown,\\ L=Unknown,\\ ST=Unknown,\\ C=Unknown`", "activemq { org.apache.activemq.artemis.spi.core.security.jaas.TextFileCertificateLoginModule debug=true org.apache.activemq.jaas.textfiledn.user=\"artemis-users.properties\" org.apache.activemq.jaas.textfiledn.role=\"artemis-roles.properties\"; };", "user1=CN=user1,O=Progress,C=US user2=CN=user2,O=Progress,C=US", "amq=user1, user2", "hawtio.http.strictTransportSecurity = max-age=31536000; includeSubDomains; preload", "hawtio.http.publicKeyPins = pin-sha256=\"...\"; max-age=5184000; includeSubDomains" ]
https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.10/html/managing_amq_broker/assembly-using-amq-console-managing
Chapter 12. Downloading the Red Hat Process Automation Manager installation files
Chapter 12. Downloading the Red Hat Process Automation Manager installation files You can use the installer JAR file or deployable ZIP files to install Red Hat Process Automation Manager. You can run the installer in interactive or command line interface (CLI) mode. Alternatively, you can extract and configure the Business Central and KIE Server deployable ZIP files. If you want to run Business Central without deploying it to an application server, download the Business Central Standalone JAR file. Download a Red Hat Process Automation Manager distribution that meets your environment and installation requirements. Note Red Hat Decision Manager is a subset of Red Hat Process Automation Manager. You must install Red Hat Process Automation Manager in order to use Red Hat Decision Manager. Procedure Navigate to the Software Downloads page in the Red Hat Customer Portal (login required), and select the product and version from the drop-down options: Product: Process Automation Manager Version: 7.13.5 Download one of the following product distributions, depending on your preferred installation method: Note You only need to download one of these distributions. If you want to use the installer to install Red Hat Process Automation Manager on Red Hat JBoss EAP 7.4, download Red Hat Process Automation Manager 7.13.5 Installer ( rhpam-installer-7.13.5.jar ). The installer graphical user interface guides you through the installation process. If you want to install Red Hat Process Automation Manager on Red Hat JBoss EAP 7.4 using the deployable ZIP files, download the following files: Red Hat Process Automation Manager 7.13.5 KIE Server for All Supported EE8 Containers ( rhpam-7.13.5-kie-server-ee8.zip ) Red Hat Process Automation Manager 7.13.5 Business Central Deployable for EAP 7 ( rhpam-7.13.5-business-central-eap7-deployable.zip ) To run Business Central without needing to deploy it to an application server, download Red Hat Process Automation Manager 7.13.5 Business Central Standalone ( rhpam-7.13.5-business-central-standalone.jar ).
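After you download the installer, you typically start it by running the JAR file with a supported Java runtime. The following command is a general example rather than part of the official procedure; it assumes that the downloaded file is in the current directory and that a supported JDK is available on your PATH. Running the command with no arguments starts the graphical installer described above; see the installer documentation for the option that starts CLI mode.
USD java -jar rhpam-installer-7.13.5.jar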
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/installing_and_configuring_red_hat_decision_manager/install-download-proc_install-on-eap
A. Revision History
A. Revision History Revision History Revision 6.6.0-2 Thu Sep 8 2016 Christian Huffman Included list of modified packages between 6.6.0 and 6.6.1 Revision 6.6.0-1 Wed Sep 7 2016 Christian Huffman Updating for 6.6.1. Included component APIs. Revision 6.6.0-0 Tue Jan 5 2016 Christian Huffman Initial draft for 6.6.0.
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/api_documentation/appe-api_documentation-revision_history
Chapter 124. Simple
Chapter 124. Simple The Simple Expression Language was a really simple language when it was created, but has since grown more powerful. It is primarily intended to be a very small and simple language for evaluating an Expression or Predicate without requiring any new dependencies or knowledge of other scripting languages such as Groovy. The simple language is designed with the intent to cover almost all of the common use cases when there is little need for scripting in your Camel routes. However, for much more complex use cases, a more powerful language is recommended, such as: Groovy MVEL OGNL Note The simple language requires the camel-bean JAR as a classpath dependency if the simple language uses OGNL expressions, such as calling a method named myMethod on the message body: USD{body.myMethod()} . At runtime the simple language will then use its built-in OGNL support, which requires the camel-bean component. The simple language uses USD{body} placeholders for complex expressions or functions. Note See also the CSimple language, which is compiled. Note Alternative syntax You can also use the alternative syntax, which uses USDsimple{ } as placeholders. This can be used to avoid clashes when using, for example, Spring property placeholders together with Camel. 124.1. Dependencies When using simple with Red Hat build of Camel Spring Boot, make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-core-starter</artifactId> </dependency> 124.2. Simple Language options The Simple language supports 2 options, which are listed below. Name Default Java Type Description resultType String Sets the class name of the result type (type from output). trim Boolean Whether to trim the value to remove leading and trailing whitespaces and line breaks. 124.3. Variables Variable Type Description camelId String the CamelContext name camelContext. OGNL Object the CamelContext invoked using a Camel OGNL expression. exchange Exchange the Exchange exchange. OGNL Object the Exchange invoked using a Camel OGNL expression. exchangeId String the exchange id id String the message id messageTimestamp String the message timestamp (millis since epoch) that this message originates from. Some systems, like JMS, Kafka, and AWS, have a timestamp on the event/message that Camel received. This method returns the timestamp, if a timestamp exists. The message timestamp and exchange created are not the same. An exchange always has a created timestamp, which is the local timestamp when Camel created the exchange. The message timestamp is only available in some Camel components when the consumer is able to extract the timestamp from the source event. If the message has no timestamp then 0 is returned. body Object the body body. OGNL Object the body invoked using a Camel OGNL expression. bodyAs( type ) Type Converts the body to the given type determined by its classname. The converted body can be null. bodyAs( type ). OGNL Object Converts the body to the given type determined by its classname and then invoke methods using a Camel OGNL expression. The converted body can be null. bodyOneLine String Converts the body to a String and removes all line-breaks so the string is in one line. mandatoryBodyAs( type ) Type Converts the body to the given type determined by its classname, and expects the body to be not null. mandatoryBodyAs( type ).
OGNL Object Converts the body to the given type determined by its classname and then invoke methods using a Camel OGNL expression. header.foo Object refer to the foo header header[foo] Object refer to the foo header headers.foo Object refer to the foo header headers:foo Object refer to the foo header headers[foo] Object refer to the foo header header.foo[bar] Object regard foo header as a map and perform lookup on the map with bar as key header.foo. OGNL Object refer to the foo header and invoke its value using a Camel OGNL expression. headerAs( key , type ) Type converts the header to the given type determined by its classname headers Map refer to the headers exchangeProperty.foo Object refer to the foo property on the exchange exchangeProperty[foo] Object refer to the foo property on the exchange exchangeProperty.foo. OGNL Object refer to the foo property on the exchange and invoke its value using a Camel OGNL expression. sys.foo String refer to the JVM system property sysenv.foo String refer to the system environment variable env.foo String refer to the system environment variable exception Object refer to the exception object on the exchange, is null if no exception set on exchange. Will fallback and grab caught exceptions ( Exchange.EXCEPTION_CAUGHT ) if the Exchange has any. exception. OGNL Object refer to the exchange exception invoked using a Camel OGNL expression object exception.message String refer to the exception.message on the exchange, is null if no exception set on exchange. Will fallback and grab caught exceptions ( Exchange.EXCEPTION_CAUGHT ) if the Exchange has any. exception.stacktrace String refer to the exception.stracktrace on the exchange, is null if no exception set on exchange. Will fallback and grab caught exceptions ( Exchange.EXCEPTION_CAUGHT ) if the Exchange has any. date:_command_ Date evaluates to a Date object. Supported commands are: now for current timestamp, exchangeCreated for the timestamp when the current exchange was created, header.xxx to use the Long/Date object header with the key xxx. exchangeProperty.xxx to use the Long/Date object in the exchange property with the key xxx. file for the last modified timestamp of the file (available with a File consumer). Command accepts offsets such as: now-24h or header.xxx+1h or even now+1h30m-100 . date:_command:pattern_ String Date formatting using java.text.SimpleDateFormat patterns. date-with-timezone:_command:timezone:pattern_ String Date formatting using java.text.SimpleDateFormat timezones and patterns. bean:_bean expression_ Object Invoking a bean expression using the language. Specifying a method name you must use dot as separator. We also support the ?method=methodname syntax that is used by the component. Camel will by default lookup a bean by the given name. However if you need to refer to a bean class (such as calling a static method) then you can prefix with type, such as bean:type:fqnClassName . properties:key:default String Lookup a property with the given key. If the key does not exists or has no value, then an optional default value can be specified. routeId String Returns the id of the current route the Exchange is being routed. stepId String Returns the id of the current step the Exchange is being routed. threadName String Returns the name of the current thread. Can be used for logging purpose. hostname String Returns the local hostname (may be empty if not possible to resolve). ref:xxx Object To lookup a bean from the Registry with the given id. 
type:name.field Object To refer to a type or field by its FQN name. To refer to a field you can append .FIELD_NAME. For example, you can refer to the constant field from Exchange as: org.apache.camel.Exchange.FILE_NAME null null represents a null random(value) Integer returns a random Integer between 0 (included) and value (excluded) random(min,max) Integer returns a random Integer between min (included) and max (excluded) collate(group) List The collate function iterates the message body and groups the data into sub lists of specified size. This can be used with the Splitter EIP to split a message body and group/batch the splitted sub message into a group of N sub lists. This method works similar to the collate method in Groovy. skip(number) Iterator The skip function iterates the message body and skips the first number of items. This can be used with the Splitter EIP to split a message body and skip the first N number of items. messageHistory String The message history of the current exchange how it has been routed. This is similar to the route stack-trace message history the error handler logs in case of an unhandled exception. messageHistory(false) String As messageHistory but without the exchange details (only includes the route stack-trace). This can be used if you do not want to log sensitive data from the message itself. 124.4. OGNL expression support When using OGNL then camel-bean JAR is required to be on the classpath. Camel's OGNL support is for invoking methods only. You cannot access fields. Camel support accessing the length field of Java arrays. The Simple and Bean language now supports a Camel OGNL notation for invoking beans in a chain like fashion. Suppose the Message IN body contains a POJO which has a getAddress() method. Then you can use Camel OGNL notation to access the address object: simple("USD{body.address}") simple("USD{body.address.street}") simple("USD{body.address.zip}") Camel understands the shorthand names for getters, but you can invoke any method or use the real name such as: simple("USD{body.address}") simple("USD{body.getAddress.getStreet}") simple("USD{body.address.getZip}") simple("USD{body.doSomething}") You can also use the null safe operator ( ?. ) to avoid NPE if for example the body does NOT have an address simple("USD{body?.address?.street}") It is also possible to index in Map or List types, so you can do: simple("USD{body[foo].name}") To assume the body is Map based and lookup the value with foo as key, and invoke the getName method on that value. If the key has space, then you must enclose the key with quotes, for example 'foo bar': simple("USD{body['foo bar'].name}") You can access the Map or List objects directly using their key name (with or without dots) : simple("USD{body[foo]}") simple("USD{body[this.is.foo]}") Suppose there was no value with the key foo then you can use the null safe operator to avoid the NPE as shown: simple("USD{body[foo]?.name}") You can also access List types, for example to get lines from the address you can do: simple("USD{body.address.lines[0]}") simple("USD{body.address.lines[1]}") simple("USD{body.address.lines[2]}") There is a special last keyword which can be used to get the last value from a list. 
simple("USD{body.address.lines[last]}") And to get the 2nd last you can subtract a number, so we can use last-1 to indicate this: simple("USD{body.address.lines[last-1]}") And the 3rd last is of course: simple("USD{body.address.lines[last-2]}") And you can call the size method on the list with simple("USD{body.address.lines.size}") Camel supports the length field for Java arrays as well, eg: String[] lines = new String[]{"foo", "bar", "cat"}; exchange.getIn().setBody(lines); simple("There are USD{body.length} lines") And yes you can combine this with the operator support as shown below: simple("USD{body.address.zip} > 1000") 124.5. Operator support The parser is limited to only support a single operator. To enable it the left value must be enclosed in USD\\{ }. The syntax is: USD{leftValue} OP rightValue Where the rightValue can be a String literal enclosed in ' ' , null , a constant value or another expression enclosed in USD\{ } . Note There must be spaces around the operator. Camel will automatically type convert the rightValue type to the leftValue type, so it is able to eg. convert a string into a numeric, so you can use > comparison for numeric values. The following operators are supported: Operator Description == equals =~ equals ignore case (will ignore case when comparing String values) > greater than >= greater than or equals < less than ⇐ less than or equals != not equals !=~ not equals ignore case (will ignore case when comparing String values) contains For testing if contains in a string based value !contains For testing if not contains in a string based value ~~ For testing if contains by ignoring case sensitivity in a string based value !~~ For testing if not contains by ignoring case sensitivity in a string based value regex For matching against a given regular expression pattern defined as a String value !regex For not matching against a given regular expression pattern defined as a String value in For matching if in a set of values, each element must be separated by comma. If you want to include an empty value, then it must be defined using double comma, eg ',,bronze,silver,gold', which is a set of four values with an empty value and then the three medals. !in For matching if not in a set of values, each element must be separated by comma. If you want to include an empty value, then it must be defined using double comma, eg ',,bronze,silver,gold', which is a set of four values with an empty value and then the three medals. is For matching if the left hand side type is an instance of the value. !is For matching if the left hand side type is not an instance of the value. range For matching if the left hand side is within a range of values defined as numbers: from..to .. !range For matching if the left hand side is not within a range of values defined as numbers: from..to . . startsWith For testing if the left hand side string starts with the right hand string. starts with Same as the startsWith operator. endsWith For testing if the left hand side string ends with the right hand string. ends with Same as the endsWith operator. And the following unary operators can be used: Operator Description ++ To increment a number by one. The left hand side must be a function, otherwise parsed as literal. - To decrement a number by one. The left hand side must be a function, otherwise parsed as literal. \n To use newline character. \t To use tab character. \r To use carriage return character. \} To use the } character as text. 
This may be needed when building a JSon structure with the simple language. And the following logical operators can be used to group expressions: Operator Description && The logical and operator is used to group two expressions. || The logical or operator is used to group two expressions. The syntax for AND is: USD{leftValue} OP rightValue && USD{leftValue} OP rightValue And the syntax for OR is: USD{leftValue} OP rightValue || USD{leftValue} OP rightValue Some examples: // exact equals match simple("USD{header.foo} == 'foo'") // ignore case when comparing, so if the header has value FOO this will match simple("USD{header.foo} =~ 'foo'") // here Camel will type convert '100' into the type of header.bar and if it is an Integer '100' will also be converted to an Integer simple("USD{header.bar} == '100'") simple("USD{header.bar} == 100") // 100 will be converted to the type of header.bar so we can do > comparison simple("USD{header.bar} > 100") 124.5.1. Comparing with different types When you compare with different types such as String and int, then you have to take a bit of care. Camel will use the type from the left hand side as 1st priority. And fall back to the right hand side type if both values couldn't be compared based on that type. This means you can flip the values to enforce a specific type. Suppose the bar value above is a String. Then you can flip the equation: simple("100 < USD{header.bar}") which then ensures the int type is used as 1st priority. This may change in the future if the Camel team improves the binary comparison operations to prefer numeric types to String based. It's most often the String type which causes problems when comparing with numbers. // testing for null simple("USD{header.baz} == null") // testing for not null simple("USD{header.baz} != null") And a bit more advanced example where the right value is another expression simple("USD{header.date} == USD{date:now:yyyyMMdd}") simple("USD{header.type} == USD{bean:orderService?method=getOrderType}") And an example with contains, testing if the title contains the word Camel simple("USD{header.title} contains 'Camel'") And an example with regex, testing if the number header is a 4 digit value: simple("USD{header.number} regex '\\d{4}'") And finally an example if the header equals any of the values in the list. Each element must be separated by comma, and no space around. This also works for numbers etc, as Camel will convert each element into the type of the left hand side. simple("USD{header.type} in 'gold,silver'") And for all the last 3 we also support the negate test using not: simple("USD{header.type} !in 'gold,silver'") And you can test if the type is a certain instance, for instance a String simple("USD{header.type} is 'java.lang.String'") We have added a shorthand for all java.lang types so you can write it as: simple("USD{header.type} is 'String'") Ranges are also supported. The range interval requires numbers and both from and end are inclusive. For instance to test whether a value is between 100 and 199: simple("USD{header.number} range 100..199") Notice we use .. in the range without spaces. It is based on the same syntax as Groovy. simple("USD{header.number} range '100..199'") As the XML DSL does not have all the power of the Java DSL with all its various builder methods, you have to resort to using other languages for testing with simple operators. Now you can do this with the simple language.
In the sample below we want to test if the header is a widget order: <from uri="seda:orders"> <filter> <simple>USD{header.type} == 'widget'</simple> <to uri="bean:orderService?method=handleWidget"/> </filter> </from> 124.5.2. Using and / or If you have two expressions you can combine them with the && or || operator. For instance: simple("USD{header.title} contains 'Camel' && USD{header.type'} == 'gold'") And of course the || is also supported. The sample would be: simple("USD{header.title} contains 'Camel' || USD{header.type'} == 'gold'") 124.6. Examples In the XML DSL sample below we filter based on a header value: <from uri="seda:orders"> <filter> <simple>USD{header.foo}</simple> <to uri="mock:fooOrders"/> </filter> </from> The Simple language can be used for the predicate test above in the Message Filter pattern, where we test if the in message has a foo header (a header with the key foo exists). If the expression evaluates to true then the message is routed to the mock:fooOrders endpoint, otherwise the message is dropped. The same example in Java DSL: from("seda:orders") .filter().simple("USD{header.foo}") .to("seda:fooOrders"); You can also use the simple language for simple text concatenations such as: from("direct:hello") .transform().simple("Hello USD{header.user} how are you?") .to("mock:reply"); Notice that we must use USD\\{ } placeholders in the expression now to allow Camel to parse it correctly. And this sample uses the date command to output current date. from("direct:hello") .transform().simple("The today is USD{date:now:yyyyMMdd} and it is a great day.") .to("mock:reply"); And in the sample below we invoke the bean language to invoke a method on a bean to be included in the returned string: from("direct:order") .transform().simple("OrderId: USD{bean:orderIdGenerator}") .to("mock:reply"); Where orderIdGenerator is the id of the bean registered in the Registry. If using Spring then it is the Spring bean id. If we want to declare which method to invoke on the order id generator bean we must prepend .method name such as below where we invoke the generateId method. from("direct:order") .transform().simple("OrderId: USD{bean:orderIdGenerator.generateId}") .to("mock:reply"); We can use the ?method=methodname option that we are familiar with the Bean component itself: from("direct:order") .transform().simple("OrderId: USD{bean:orderIdGenerator?method=generateId}") .to("mock:reply"); You can also convert the body to a given type, for example to ensure that it is a String you can do: <transform> <simple>Hello USD{bodyAs(String)} how are you?</simple> </transform> There are a few types which have a shorthand notation, so we can use String instead of java.lang.String . These are: byte[], String, Integer, Long . All other types must use their FQN name, e.g. org.w3c.dom.Document . It is also possible to lookup a value from a header Map : <transform> <simple>The gold value is USD{header.type[gold]}</simple> </transform> In the code above we lookup the header with name type and regard it as a java.util.Map and we then lookup with the key gold and return the value. If the header is not convertible to Map an exception is thrown. If the header with name type does not exist null is returned. You can nest functions, such as shown below: <setHeader name="myHeader"> <simple>USD{properties:USD{header.someKey}}</simple> </setHeader> 124.7. Setting result type You can now provide a result type to the Simple expression, which means the result of the evaluation will be converted to the desired type. 
This is most usable to define types such as booleans, integers, etc. For example to set a header as a boolean type you can do: .setHeader("cool", simple("true", Boolean.class)) And in XML DSL <setHeader name="cool"> <!-- use resultType to indicate that the type should be a java.lang.Boolean --> <simple resultType="java.lang.Boolean">true</simple> </setHeader> 124.8. Using new lines or tabs in XML DSLs It is easier to specify new lines or tabs in XML DSLs as you can escape the value now <transform> <simple>The following text\nis on a new line</simple> </transform> 124.9. Leading and trailing whitespace handling The trim attribute of the expression can be used to control whether the leading and trailing whitespace characters are removed or preserved. The default value is true, which removes the whitespace characters. <setBody> <simple trim="false">You get some trailing whitespace characters. </simple> </setBody> 124.10. Loading script from external resource You can externalize the script and have Camel load it from a resource such as "classpath:" , "file:" , or "http:" . This is done using the following syntax: "resource:scheme:location" , e.g. to refer to a file on the classpath you can do: .setHeader("myHeader").simple("resource:classpath:mysimple.txt") 124.11. Spring Boot Auto-Configuration The component supports 147 options, which are listed below. Name Description Default Type camel.cloud.consul.service-discovery.acl-token Sets the ACL token to be used with Consul. String camel.cloud.consul.service-discovery.block-seconds The seconds to wait for a watch event, default 10 seconds. 10 Integer camel.cloud.consul.service-discovery.configurations Define additional configuration definitions. Map camel.cloud.consul.service-discovery.connect-timeout-millis Connect timeout for OkHttpClient. Long camel.cloud.consul.service-discovery.datacenter The data center. String camel.cloud.consul.service-discovery.enabled Enable the component. true Boolean camel.cloud.consul.service-discovery.password Sets the password to be used for basic authentication. String camel.cloud.consul.service-discovery.properties Set client properties to use. These properties are specific to what service call implementation are in use. For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.consul.service-discovery.read-timeout-millis Read timeout for OkHttpClient. Long camel.cloud.consul.service-discovery.url The Consul agent URL. String camel.cloud.consul.service-discovery.user-name Sets the username to be used for basic authentication. String camel.cloud.consul.service-discovery.write-timeout-millis Write timeout for OkHttpClient. Long camel.cloud.dns.service-discovery.configurations Define additional configuration definitions. Map camel.cloud.dns.service-discovery.domain The domain name;. String camel.cloud.dns.service-discovery.enabled Enable the component. true Boolean camel.cloud.dns.service-discovery.properties Set client properties to use. These properties are specific to what service call implementation are in use. For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.dns.service-discovery.proto The transport protocol of the desired service. _tcp String camel.cloud.etcd.service-discovery.configurations Define additional configuration definitions. Map camel.cloud.etcd.service-discovery.enabled Enable the component. 
true Boolean camel.cloud.etcd.service-discovery.password The password to use for basic authentication. String camel.cloud.etcd.service-discovery.properties Set client properties to use. These properties are specific to what service call implementation are in use. For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.etcd.service-discovery.service-path The path to look for for service discovery. /services/ String camel.cloud.etcd.service-discovery.timeout To set the maximum time an action could take to complete. Long camel.cloud.etcd.service-discovery.type To set the discovery type, valid values are on-demand and watch. on-demand String camel.cloud.etcd.service-discovery.uris The URIs the client can connect to. String camel.cloud.etcd.service-discovery.user-name The user name to use for basic authentication. String camel.cloud.kubernetes.service-discovery.api-version Sets the API version when using client lookup. String camel.cloud.kubernetes.service-discovery.ca-cert-data Sets the Certificate Authority data when using client lookup. String camel.cloud.kubernetes.service-discovery.ca-cert-file Sets the Certificate Authority data that are loaded from the file when using client lookup. String camel.cloud.kubernetes.service-discovery.client-cert-data Sets the Client Certificate data when using client lookup. String camel.cloud.kubernetes.service-discovery.client-cert-file Sets the Client Certificate data that are loaded from the file when using client lookup. String camel.cloud.kubernetes.service-discovery.client-key-algo Sets the Client Keystore algorithm, such as RSA when using client lookup. String camel.cloud.kubernetes.service-discovery.client-key-data Sets the Client Keystore data when using client lookup. String camel.cloud.kubernetes.service-discovery.client-key-file Sets the Client Keystore data that are loaded from the file when using client lookup. String camel.cloud.kubernetes.service-discovery.client-key-passphrase Sets the Client Keystore passphrase when using client lookup. String camel.cloud.kubernetes.service-discovery.configurations Define additional configuration definitions. Map camel.cloud.kubernetes.service-discovery.dns-domain Sets the DNS domain to use for DNS lookup. String camel.cloud.kubernetes.service-discovery.enabled Enable the component. true Boolean camel.cloud.kubernetes.service-discovery.lookup How to perform service lookup. Possible values: client, dns, environment. When using client, then the client queries the kubernetes master to obtain a list of active pods that provides the service, and then random (or round robin) select a pod. When using dns the service name is resolved as name.namespace.svc.dnsDomain. When using dnssrv the service name is resolved with SRV query for . ... svc... When using environment then environment variables are used to lookup the service. By default environment is used. environment String camel.cloud.kubernetes.service-discovery.master-url Sets the URL to the master when using client lookup. String camel.cloud.kubernetes.service-discovery.namespace Sets the namespace to use. Will by default use namespace from the ENV variable KUBERNETES_MASTER. String camel.cloud.kubernetes.service-discovery.oauth-token Sets the OAUTH token for authentication (instead of username/password) when using client lookup. String camel.cloud.kubernetes.service-discovery.password Sets the password for authentication when using client lookup. 
String camel.cloud.kubernetes.service-discovery.port-name Sets the Port Name to use for DNS/DNSSRV lookup. String camel.cloud.kubernetes.service-discovery.port-protocol Sets the Port Protocol to use for DNS/DNSSRV lookup. String camel.cloud.kubernetes.service-discovery.properties Set client properties to use. These properties are specific to what service call implementation are in use. For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.kubernetes.service-discovery.trust-certs Sets whether to turn on trust certificate check when using client lookup. false Boolean camel.cloud.kubernetes.service-discovery.username Sets the username for authentication when using client lookup. String camel.cloud.ribbon.load-balancer.client-name Sets the Ribbon client name. String camel.cloud.ribbon.load-balancer.configurations Define additional configuration definitions. Map camel.cloud.ribbon.load-balancer.enabled Enable the component. true Boolean camel.cloud.ribbon.load-balancer.namespace The namespace. String camel.cloud.ribbon.load-balancer.password The password. String camel.cloud.ribbon.load-balancer.properties Set client properties to use. These properties are specific to what service call implementation are in use. For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.ribbon.load-balancer.username The username. String camel.hystrix.allow-maximum-size-to-diverge-from-core-size Allows the configuration for maximumSize to take effect. That value can then be equal to, or higher, than coreSize. false Boolean camel.hystrix.circuit-breaker-enabled Whether to use a HystrixCircuitBreaker or not. If false no circuit-breaker logic will be used and all requests permitted. This is similar in effect to circuitBreakerForceClosed() except that continues tracking metrics and knowing whether it should be open/closed, this property results in not even instantiating a circuit-breaker. true Boolean camel.hystrix.circuit-breaker-error-threshold-percentage Error percentage threshold (as whole number such as 50) at which point the circuit breaker will trip open and reject requests. It will stay tripped for the duration defined in circuitBreakerSleepWindowInMilliseconds; The error percentage this is compared against comes from HystrixCommandMetrics.getHealthCounts(). 50 Integer camel.hystrix.circuit-breaker-force-closed If true the HystrixCircuitBreaker#allowRequest() will always return true to allow requests regardless of the error percentage from HystrixCommandMetrics.getHealthCounts(). The circuitBreakerForceOpen() property takes precedence so if it set to true this property does nothing. false Boolean camel.hystrix.circuit-breaker-force-open If true the HystrixCircuitBreaker.allowRequest() will always return false, causing the circuit to be open (tripped) and reject all requests. This property takes precedence over circuitBreakerForceClosed();. false Boolean camel.hystrix.circuit-breaker-request-volume-threshold Minimum number of requests in the metricsRollingStatisticalWindowInMilliseconds() that must exist before the HystrixCircuitBreaker will trip. If below this number the circuit will not trip regardless of error percentage. 20 Integer camel.hystrix.circuit-breaker-sleep-window-in-milliseconds The time in milliseconds after a HystrixCircuitBreaker trips open that it should wait before trying requests again. 
5000 Integer camel.hystrix.configurations Define additional configuration definitions. Map camel.hystrix.core-pool-size Core thread-pool size that gets passed to java.util.concurrent.ThreadPoolExecutor#setCorePoolSize(int). 10 Integer camel.hystrix.enabled Enable the component. true Boolean camel.hystrix.execution-isolation-semaphore-max-concurrent-requests Number of concurrent requests permitted to HystrixCommand.run(). Requests beyond the concurrent limit will be rejected. Applicable only when executionIsolationStrategy == SEMAPHORE. 20 Integer camel.hystrix.execution-isolation-strategy What isolation strategy HystrixCommand.run() will be executed with. If THREAD then it will be executed on a separate thread and concurrent requests limited by the number of threads in the thread-pool. If SEMAPHORE then it will be executed on the calling thread and concurrent requests limited by the semaphore count. THREAD String camel.hystrix.execution-isolation-thread-interrupt-on-timeout Whether the execution thread should attempt an interrupt (using Future#cancel ) when a thread times out. Applicable only when executionIsolationStrategy() == THREAD. true Boolean camel.hystrix.execution-timeout-enabled Whether the timeout mechanism is enabled for this command. true Boolean camel.hystrix.execution-timeout-in-milliseconds Time in milliseconds at which point the command will timeout and halt execution. If executionIsolationThreadInterruptOnTimeout == true and the command is thread-isolated, the executing thread will be interrupted. If the command is semaphore-isolated and a HystrixObservableCommand, that command will get unsubscribed. 1000 Integer camel.hystrix.fallback-enabled Whether HystrixCommand.getFallback() should be attempted when failure occurs. true Boolean camel.hystrix.fallback-isolation-semaphore-max-concurrent-requests Number of concurrent requests permitted to HystrixCommand.getFallback(). Requests beyond the concurrent limit will fail-fast and not attempt retrieving a fallback. 10 Integer camel.hystrix.group-key Sets the group key to use. The default value is CamelHystrix. CamelHystrix String camel.hystrix.keep-alive-time Keep-alive time in minutes that gets passed to ThreadPoolExecutor#setKeepAliveTime(long,TimeUnit). 1 Integer camel.hystrix.max-queue-size Max queue size that gets passed to BlockingQueue in HystrixConcurrencyStrategy.getBlockingQueue(int) This should only affect the instantiation of a threadpool - it is not eliglible to change a queue size on the fly. For that, use queueSizeRejectionThreshold(). -1 Integer camel.hystrix.maximum-size Maximum thread-pool size that gets passed to ThreadPoolExecutor#setMaximumPoolSize(int) . This is the maximum amount of concurrency that can be supported without starting to reject HystrixCommands. Please note that this setting only takes effect if you also set allowMaximumSizeToDivergeFromCoreSize. 10 Integer camel.hystrix.metrics-health-snapshot-interval-in-milliseconds Time in milliseconds to wait between allowing health snapshots to be taken that calculate success and error percentages and affect HystrixCircuitBreaker.isOpen() status. On high-volume circuits the continual calculation of error percentage can become CPU intensive thus this controls how often it is calculated. 500 Integer camel.hystrix.metrics-rolling-percentile-bucket-size Maximum number of values stored in each bucket of the rolling percentile. This is passed into HystrixRollingPercentile inside HystrixCommandMetrics. 
10 Integer camel.hystrix.metrics-rolling-percentile-enabled Whether percentile metrics should be captured using HystrixRollingPercentile inside HystrixCommandMetrics. true Boolean camel.hystrix.metrics-rolling-percentile-window-buckets Number of buckets the rolling percentile window is broken into. This is passed into HystrixRollingPercentile inside HystrixCommandMetrics. 6 Integer camel.hystrix.metrics-rolling-percentile-window-in-milliseconds Duration of percentile rolling window in milliseconds. This is passed into HystrixRollingPercentile inside HystrixCommandMetrics. 10000 Integer camel.hystrix.metrics-rolling-statistical-window-buckets Number of buckets the rolling statistical window is broken into. This is passed into HystrixRollingNumber inside HystrixCommandMetrics. 10 Integer camel.hystrix.metrics-rolling-statistical-window-in-milliseconds This property sets the duration of the statistical rolling window, in milliseconds. This is how long metrics are kept for the thread pool. The window is divided into buckets and rolls by those increments. 10000 Integer camel.hystrix.queue-size-rejection-threshold Queue size rejection threshold is an artificial max size at which rejections will occur even if maxQueueSize has not been reached. This is done because the maxQueueSize of a BlockingQueue can not be dynamically changed and we want to support dynamically changing the queue size that affects rejections. This is used by HystrixCommand when queuing a thread for execution. 5 Integer camel.hystrix.request-log-enabled Whether HystrixCommand execution and events should be logged to HystrixRequestLog. true Boolean camel.hystrix.thread-pool-key Sets the thread pool key to use. Will by default use the same value as groupKey has been configured to use. CamelHystrix String camel.hystrix.thread-pool-rolling-number-statistical-window-buckets Number of buckets the rolling statistical window is broken into. This is passed into HystrixRollingNumber inside each HystrixThreadPoolMetrics instance. 10 Integer camel.hystrix.thread-pool-rolling-number-statistical-window-in-milliseconds Duration of statistical rolling window in milliseconds. This is passed into HystrixRollingNumber inside each HystrixThreadPoolMetrics instance. 10000 Integer camel.language.constant.enabled Whether to enable auto configuration of the constant language. This is enabled by default. Boolean camel.language.constant.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.csimple.enabled Whether to enable auto configuration of the csimple language. This is enabled by default. Boolean camel.language.csimple.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.exchangeproperty.enabled Whether to enable auto configuration of the exchangeProperty language. This is enabled by default. Boolean camel.language.exchangeproperty.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.file.enabled Whether to enable auto configuration of the file language. This is enabled by default. Boolean camel.language.file.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.header.enabled Whether to enable auto configuration of the header language. This is enabled by default. Boolean camel.language.header.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. 
true Boolean camel.language.ref.enabled Whether to enable auto configuration of the ref language. This is enabled by default. Boolean camel.language.ref.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.simple.enabled Whether to enable auto configuration of the simple language. This is enabled by default. Boolean camel.language.simple.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.tokenize.enabled Whether to enable auto configuration of the tokenize language. This is enabled by default. Boolean camel.language.tokenize.group-delimiter Sets the delimiter to use when grouping. If this has not been set then token will be used as the delimiter. String camel.language.tokenize.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.resilience4j.automatic-transition-from-open-to-half-open-enabled Enables automatic transition from OPEN to HALF_OPEN state once the waitDurationInOpenState has passed. false Boolean camel.resilience4j.circuit-breaker-ref Refers to an existing io.github.resilience4j.circuitbreaker.CircuitBreaker instance to lookup and use from the registry. When using this, then any other circuit breaker options are not in use. String camel.resilience4j.config-ref Refers to an existing io.github.resilience4j.circuitbreaker.CircuitBreakerConfig instance to lookup and use from the registry. String camel.resilience4j.configurations Define additional configuration definitions. Map camel.resilience4j.enabled Enable the component. true Boolean camel.resilience4j.failure-rate-threshold Configures the failure rate threshold in percentage. If the failure rate is equal or greater than the threshold the CircuitBreaker transitions to open and starts short-circuiting calls. The threshold must be greater than 0 and not greater than 100. Default value is 50 percentage. Float camel.resilience4j.minimum-number-of-calls Configures the minimum number of calls which are required (per sliding window period) before the CircuitBreaker can calculate the error rate. For example, if minimumNumberOfCalls is 10, then at least 10 calls must be recorded, before the failure rate can be calculated. If only 9 calls have been recorded the CircuitBreaker will not transition to open even if all 9 calls have failed. Default minimumNumberOfCalls is 100. 100 Integer camel.resilience4j.permitted-number-of-calls-in-half-open-state Configures the number of permitted calls when the CircuitBreaker is half open. The size must be greater than 0. Default size is 10. 10 Integer camel.resilience4j.sliding-window-size Configures the size of the sliding window which is used to record the outcome of calls when the CircuitBreaker is closed. slidingWindowSize configures the size of the sliding window. Sliding window can either be count-based or time-based. If slidingWindowType is COUNT_BASED, the last slidingWindowSize calls are recorded and aggregated. If slidingWindowType is TIME_BASED, the calls of the last slidingWindowSize seconds are recorded and aggregated. The slidingWindowSize must be greater than 0. The minimumNumberOfCalls must be greater than 0. If the slidingWindowType is COUNT_BASED, the minimumNumberOfCalls cannot be greater than slidingWindowSize . If the slidingWindowType is TIME_BASED, you can pick whatever you want. Default slidingWindowSize is 100. 
100 Integer camel.resilience4j.sliding-window-type Configures the type of the sliding window which is used to record the outcome of calls when the CircuitBreaker is closed. Sliding window can either be count-based or time-based. If slidingWindowType is COUNT_BASED, the last slidingWindowSize calls are recorded and aggregated. If slidingWindowType is TIME_BASED, the calls of the last slidingWindowSize seconds are recorded and aggregated. Default slidingWindowType is COUNT_BASED. COUNT_BASED String camel.resilience4j.slow-call-duration-threshold Configures the duration threshold (seconds) above which calls are considered as slow and increase the slow calls percentage. Default value is 60 seconds. 60 Integer camel.resilience4j.slow-call-rate-threshold Configures a threshold in percentage. The CircuitBreaker considers a call as slow when the call duration is greater than slowCallDurationThreshold Duration. When the percentage of slow calls is equal or greater the threshold, the CircuitBreaker transitions to open and starts short-circuiting calls. The threshold must be greater than 0 and not greater than 100. Default value is 100 percentage which means that all recorded calls must be slower than slowCallDurationThreshold. Float camel.resilience4j.wait-duration-in-open-state Configures the wait duration (in seconds) which specifies how long the CircuitBreaker should stay open, before it switches to half open. Default value is 60 seconds. 60 Integer camel.resilience4j.writable-stack-trace-enabled Enables writable stack traces. When set to false, Exception.getStackTrace returns a zero length array. This may be used to reduce log spam when the circuit breaker is open as the cause of the exceptions is already known (the circuit breaker is short-circuiting calls). true Boolean camel.rest.api-component The name of the Camel component to use as the REST API (such as swagger) If no API Component has been explicit configured, then Camel will lookup if there is a Camel component responsible for servicing and generating the REST API documentation, or if a org.apache.camel.spi.RestApiProcessorFactory is registered in the registry. If either one is found, then that is being used. String camel.rest.api-context-path Sets a leading API context-path the REST API services will be using. This can be used when using components such as camel-servlet where the deployed web application is deployed using a context-path. String camel.rest.api-context-route-id Sets the route id to use for the route that services the REST API. The route will by default use an auto assigned route id. String camel.rest.api-host To use an specific hostname for the API documentation (eg swagger) This can be used to override the generated host with this configured hostname. String camel.rest.api-property Allows to configure as many additional properties for the api documentation (swagger). For example set property api.title to my cool stuff. Map camel.rest.api-vendor-extension Whether vendor extension is enabled in the Rest APIs. If enabled then Camel will include additional information as vendor extension (eg keys starting with x-) such as route ids, class names etc. Not all 3rd party API gateways and tools supports vendor-extensions when importing your API docs. false Boolean camel.rest.binding-mode Sets the binding mode to use. The default value is off. 
RestBindingMode camel.rest.client-request-validation Whether to enable validation of the client request to check whether the Content-Type and Accept headers from the client is supported by the Rest-DSL configuration of its consumes/produces settings. This can be turned on, to enable this check. In case of validation error, then HTTP Status codes 415 or 406 is returned. The default value is false. false Boolean camel.rest.component The Camel Rest component to use for the REST transport (consumer), such as netty-http, jetty, servlet, undertow. If no component has been explicit configured, then Camel will lookup if there is a Camel component that integrates with the Rest DSL, or if a org.apache.camel.spi.RestConsumerFactory is registered in the registry. If either one is found, then that is being used. String camel.rest.component-property Allows to configure as many additional properties for the rest component in use. Map camel.rest.consumer-property Allows to configure as many additional properties for the rest consumer in use. Map camel.rest.context-path Sets a leading context-path the REST services will be using. This can be used when using components such as camel-servlet where the deployed web application is deployed using a context-path. Or for components such as camel-jetty or camel-netty-http that includes a HTTP server. String camel.rest.cors-headers Allows to configure custom CORS headers. Map camel.rest.data-format-property Allows to configure as many additional properties for the data formats in use. For example set property prettyPrint to true to have json outputted in pretty mode. The properties can be prefixed to denote the option is only for either JSON or XML and for either the IN or the OUT. The prefixes are: json.in. json.out. xml.in. xml.out. For example a key with value xml.out.mustBeJAXBElement is only for the XML data format for the outgoing. A key without a prefix is a common key for all situations. Map camel.rest.enable-cors Whether to enable CORS headers in the HTTP response. The default value is false. false Boolean camel.rest.endpoint-property Allows to configure as many additional properties for the rest endpoint in use. Map camel.rest.host The hostname to use for exposing the REST service. String camel.rest.host-name-resolver If no hostname has been explicit configured, then this resolver is used to compute the hostname the REST service will be using. RestHostNameResolver camel.rest.json-data-format Name of specific json data format to use. By default json-jackson will be used. Important: This option is only for setting a custom name of the data format, not to refer to an existing data format instance. String camel.rest.port The port number to use for exposing the REST service. Notice if you use servlet component then the port number configured here does not apply, as the port number in use is the actual port number the servlet component is using. eg if using Apache Tomcat its the tomcat http port, if using Apache Karaf its the HTTP service in Karaf that uses port 8181 by default etc. Though in those situations setting the port number here, allows tooling and JMX to know the port number, so its recommended to set the port number to the number that the servlet engine uses. String camel.rest.producer-api-doc Sets the location of the api document (swagger api) the REST producer will use to validate the REST uri and query parameters are valid accordingly to the api document. 
This requires adding camel-swagger-java to the classpath, and any misconfiguration will let Camel fail on startup and report the error(s). The location of the api document is loaded from the classpath by default, but you can use file: or http: to refer to resources to load from a file or HTTP URL. String camel.rest.producer-component Sets the name of the Camel component to use as the REST producer. String camel.rest.scheme The scheme to use for exposing the REST service. Usually http or https is supported. The default value is http. String camel.rest.skip-binding-on-error-code Whether to skip binding on output if there is a custom HTTP error code header. This allows to build custom error messages that do not bind to json / xml etc, as success messages otherwise will do. false Boolean camel.rest.use-x-forward-headers Whether to use X-Forward headers for Host and related settings. The default value is true. true Boolean camel.rest.xml-data-format Name of specific XML data format to use. By default jaxb will be used. Important: This option is only for setting a custom name of the data format, not to refer to an existing data format instance. String camel.rest.api-context-id-pattern Deprecated Sets a CamelContext id pattern to only allow Rest APIs from rest services within CamelContexts whose name matches the pattern. The pattern name refers to the CamelContext name, to match on the current CamelContext only. For any other value, the pattern uses the rules from PatternHelper#matchPattern(String,String). String camel.rest.api-context-listing Deprecated Sets whether listing of all available CamelContexts with REST services in the JVM is enabled. If enabled it allows to discover these contexts, if false then only the current CamelContext is in use. false Boolean
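Because these are standard Spring Boot configuration properties, any key from the tables above can be set in application.properties or application.yml, or overridden on the command line when the application is launched. The following is a minimal sketch, assuming a packaged Camel Spring Boot application; the JAR name and the property values are illustrative, not recommendations:

```bash
# Override a few Resilience4j and Rest DSL defaults at launch time.
# Any camel.* key from the reference tables above can be passed as --<key>=<value>.
java -jar target/my-camel-app.jar \
  --camel.resilience4j.failure-rate-threshold=30 \
  --camel.resilience4j.sliding-window-size=50 \
  --camel.resilience4j.wait-duration-in-open-state=120 \
  --camel.rest.component=servlet \
  --camel.rest.binding-mode=json
```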
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-core-starter</artifactId> </dependency>", "simple(\"USD{body.address}\") simple(\"USD{body.address.street}\") simple(\"USD{body.address.zip}\")", "simple(\"USD{body.address}\") simple(\"USD{body.getAddress.getStreet}\") simple(\"USD{body.address.getZip}\") simple(\"USD{body.doSomething}\")", "simple(\"USD{body?.address?.street}\")", "simple(\"USD{body[foo].name}\")", "simple(\"USD{body['foo bar'].name}\")", "simple(\"USD{body[foo]}\") simple(\"USD{body[this.is.foo]}\")", "simple(\"USD{body[foo]?.name}\")", "simple(\"USD{body.address.lines[0]}\") simple(\"USD{body.address.lines[1]}\") simple(\"USD{body.address.lines[2]}\")", "simple(\"USD{body.address.lines[last]}\")", "simple(\"USD{body.address.lines[last-1]}\")", "simple(\"USD{body.address.lines[last-2]}\")", "simple(\"USD{body.address.lines.size}\")", "String[] lines = new String[]{\"foo\", \"bar\", \"cat\"}; exchange.getIn().setBody(lines); simple(\"There are USD{body.length} lines\")", "simple(\"USD{body.address.zip} > 1000\")", "USD{leftValue} OP rightValue", "USD{leftValue} OP rightValue && USD{leftValue} OP rightValue", "USD{leftValue} OP rightValue || USD{leftValue} OP rightValue", "// exact equals match simple(\"USD{header.foo} == 'foo'\") // ignore case when comparing, so if the header has value FOO this will match simple(\"USD{header.foo} =~ 'foo'\") // here Camel will type convert '100' into the type of header.bar and if it is an Integer '100' will also be converter to an Integer simple(\"USD{header.bar} == '100'\") simple(\"USD{header.bar} == 100\") // 100 will be converter to the type of header.bar so we can do > comparison simple(\"USD{header.bar} > 100\")", "simple(\"100 < USD{header.bar}\")", "// testing for null simple(\"USD{header.baz} == null\") // testing for not null simple(\"USD{header.baz} != null\")", "simple(\"USD{header.date} == USD{date:now:yyyyMMdd}\") simple(\"USD{header.type} == USD{bean:orderService?method=getOrderType}\")", "simple(\"USD{header.title} contains 'Camel'\")", "simple(\"USD{header.number} regex '\\\\d{4}'\")", "simple(\"USD{header.type} in 'gold,silver'\")", "simple(\"USD{header.type} !in 'gold,silver'\")", "simple(\"USD{header.type} is 'java.lang.String'\")", "simple(\"USD{header.type} is 'String'\")", "simple(\"USD{header.number} range 100..199\")", "simple(\"USD{header.number} range '100..199'\")", "<from uri=\"seda:orders\"> <filter> <simple>USD{header.type} == 'widget'</simple> <to uri=\"bean:orderService?method=handleWidget\"/> </filter> </from>", "simple(\"USD{header.title} contains 'Camel' && USD{header.type'} == 'gold'\")", "simple(\"USD{header.title} contains 'Camel' || USD{header.type'} == 'gold'\")", "<from uri=\"seda:orders\"> <filter> <simple>USD{header.foo}</simple> <to uri=\"mock:fooOrders\"/> </filter> </from>", "from(\"seda:orders\") .filter().simple(\"USD{header.foo}\") .to(\"seda:fooOrders\");", "from(\"direct:hello\") .transform().simple(\"Hello USD{header.user} how are you?\") .to(\"mock:reply\");", "from(\"direct:hello\") .transform().simple(\"The today is USD{date:now:yyyyMMdd} and it is a great day.\") .to(\"mock:reply\");", "from(\"direct:order\") .transform().simple(\"OrderId: USD{bean:orderIdGenerator}\") .to(\"mock:reply\");", "from(\"direct:order\") .transform().simple(\"OrderId: USD{bean:orderIdGenerator.generateId}\") .to(\"mock:reply\");", "from(\"direct:order\") .transform().simple(\"OrderId: USD{bean:orderIdGenerator?method=generateId}\") .to(\"mock:reply\");", "<transform> 
<simple>Hello USD{bodyAs(String)} how are you?</simple> </transform>", "<transform> <simple>The gold value is USD{header.type[gold]}</simple> </transform>", "<setHeader name=\"myHeader\"> <simple>USD{properties:USD{header.someKey}}</simple> </setHeader>", ".setHeader(\"cool\", simple(\"true\", Boolean.class))", "<setHeader name=\"cool\"> <!-- use resultType to indicate that the type should be a java.lang.Boolean --> <simple resultType=\"java.lang.Boolean\">true</simple> </setHeader>", "<transform> <simple>The following text\\nis on a new line</simple> </transform>", "<setBody> <simple trim=\"false\">You get some trailing whitespace characters. </simple> </setBody>", ".setHeader(\"myHeader\").simple(\"resource:classpath:mysimple.txt\")" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-simple-language-starter
Command Line Interface Reference
Command Line Interface Reference Red Hat OpenStack Platform 16.2 Command-line clients for Red Hat OpenStack Platform OpenStack Documentation Team [email protected] Abstract A reference to the commands available to the unified OpenStack command-line client.
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/command_line_interface_reference/index
Chapter 4. 4.1.2 Release Notes
Chapter 4. 4.1.2 Release Notes 4.1. New Features The following major enhancement has been introduced in Red Hat Update Infrastructure 4.1.2. New RHUI Installer argument: --ignore-newer-rhui-packages Previously, when you reran an instance of RHUI Installer which was not up-to-date, RHUI Installer updated the RHUI packages if newer versions were available. As a result, the RHUI system would enter an inconsistent state because the older version of the RHUI Installer package was not aware of potential changes to the RHUI package set. In addition, running RHUI Installer to change a setting, without updating RHUI in the process, was not supported. With this update, when you rerun RHUI Installer, it checks whether a newer version is available. If a newer version is available, RHUI Installer displays an error message stating that you must either update the RHUI Installer package, or rerun the RHUI Installer command using the --ignore-newer-rhui-packages argument. The --ignore-newer-rhui-packages argument prevents the installer from applying any RHUI updates. 4.2. Known Issues This part describes known issues in Red Hat Update Infrastructure 4.1.2. rhui-installer ignores custom RHUI CA when updating to a newer version of RHUI When updating from RHUI version 4.1.0 or older, rhui-installer ignores the existing custom RHUI CA and generates a new RHUI CA regardless of the parameter value set in the answer.yml file. Consequently, RHUI fails to recognize clients that use the older RHUI CA. To work around this problem, specify the custom RHUI CA when running the rhui-installer --rerun command. For more information, see Updating Red Hat Update Infrastructure .
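The two options described above can be sketched as follows; this assumes the RHUI Installer is shipped as the rhui-installer package and that the command is rerun with the same arguments (or recorded answers) used for the original installation:

```bash
# Option 1: update the installer first, then rerun it so RHUI is updated consistently.
yum update rhui-installer
rhui-installer --rerun

# Option 2: rerun the current, older installer without applying any RHUI updates.
rhui-installer --rerun --ignore-newer-rhui-packages
```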
[ "rhui-installer --rerun --user-supplied-rhui-ca-crt <custom_RHUI_CA.crt> --user-supplied-rhui-ca-key <custom_RHUI_CA_key>" ]
https://docs.redhat.com/en/documentation/red_hat_update_infrastructure/4/html/release_notes/assembly_4-1-2-release-notes_release-notes
Chapter 3. Migrating from Camel 2 to Camel 3
Chapter 3. Migrating from Camel 2 to Camel 3 Camel Extensions for Quarkus supports Camel version 3 whereas Fuse 7 supported Camel version 2. This section provides information relating to upgrading Camel when you migrate your Red Hat Fuse 7 application to Camel Extensions for Quarkus. 3.1. Java versions Camel 3 supports Java 17 and Java 11 but not Java 8. 3.2. Modularization of camel-core In Camel 3.x, camel-core has been split into many JARs as follows: camel-api camel-base camel-caffeine-lrucache camel-cloud camel-core camel-jaxp camel-main camel-management-api camel-management camel-support camel-util camel-util-json Maven users of Apache Camel can keep using the camel-core dependency, which has transitive dependencies on all of its modules except for camel-main , and therefore no migration is needed. 3.3. Modularization of Components In Camel 3.x, some of the camel-core components have been moved into individual components. camel-attachments camel-bean camel-browse camel-controlbus camel-dataformat camel-dataset camel-direct camel-directvm camel-file camel-language camel-log camel-mock camel-ref camel-rest camel-saga camel-scheduler camel-seda camel-stub camel-timer camel-validator camel-vm camel-xpath camel-xslt camel-xslt-saxon camel-zip-deflater 3.4. Multiple CamelContexts per application not supported Support for multiple CamelContexts has been removed and only one CamelContext per deployment is recommended and supported. The context attribute on the various Camel annotations such as @EndpointInject , @Produce , @Consume etc. has therefore been removed. 3.5. Deprecated APIs and Components All deprecated APIs and components from Camel 2.x have been removed in Camel 3. 3.5.1. Removed components All deprecated components from Camel 2.x are removed in Camel 3.x, including the old camel-http , camel-hdfs , camel-mina , camel-mongodb , camel-netty , camel-netty-http , camel-quartz , camel-restlet and camel-rx components. Removed the camel-jibx component. Removed the camel-boon dataformat. Removed the camel-linkedin component as the LinkedIn API 1.0 is no longer supported. Support for the new 2.0 API is tracked by CAMEL-13813 . The camel-zookeeper component has had its route policy functionality removed; instead use ZooKeeperClusterService or the camel-zookeeper-master component. The camel-jetty component no longer supports the producer (which has been removed); use the camel-http component instead. The twitter-streaming component has been removed as it relied on the deprecated Twitter Streaming API and is no longer functional. 3.5.2. Renamed components The following components are renamed in Camel 3.x. The camel-microprofile-metrics component has been renamed to camel-micrometer . The test component has been renamed to dataset-test and moved out of camel-core into the camel-dataset JAR. The http4 component has been renamed to http , and its corresponding component package has changed from org.apache.camel.component.http4 to org.apache.camel.component.http . The supported schemes are now only http and https . The hdfs2 component has been renamed to hdfs , and its corresponding component package has changed from org.apache.camel.component.hdfs2 to org.apache.camel.component.hdfs . The supported scheme is now hdfs . The mina2 component has been renamed to mina , and its corresponding component package has changed from org.apache.camel.component.mina2 to org.apache.camel.component.mina . The supported scheme is now mina .
The mongodb3 component has been renamed to mongodb , and its corresponding component package has changed from org.apache.camel.component.mongodb3 to org.apache.camel.component.mongodb . The supported scheme is now mongodb . The netty4-http component has been renamed to netty-http , and its corresponding component package has changed from org.apache.camel.component.netty4.http to org.apache.camel.component.netty.http . The supported scheme is now netty-http . The netty4 component has been renamed to netty , and its corresponding component package has changed from org.apache.camel.component.netty4 to org.apache.camel.component.netty . The supported scheme is now netty . The quartz2 component has been renamed to quartz , and its corresponding component package has changed from org.apache.camel.component.quartz2 to org.apache.camel.component.quartz . The supported scheme is now quartz . The rxjava2 component has been renamed to rxjava , and its corresponding component package has changed from org.apache.camel.component.rxjava2 to org.apache.camel.component.rxjava . Renamed camel-jetty9 to camel-jetty . The supported scheme is now jetty . 3.6. Changes to Camel components 3.6.1. Mock component The mock component has been moved out of camel-core . Because of this, a number of methods on its assertion clause builder have been removed. 3.6.2. ActiveMQ If you are using the activemq-camel component, then you should migrate to the camel-activemq component, where the component name has changed from org.apache.activemq.camel.component.ActiveMQComponent to org.apache.camel.component.activemq.ActiveMQComponent . 3.6.3. AWS The camel-aws component has been split into multiple components: camel-aws-cw camel-aws-ddb (which contains both ddb and ddbstreams components) camel-aws-ec2 camel-aws-iam camel-aws-kinesis (which contains both kinesis and kinesis-firehose components) camel-aws-kms camel-aws-lambda camel-aws-mq camel-aws-s3 camel-aws-sdb camel-aws-ses camel-aws-sns camel-aws-sqs camel-aws-swf Note It is recommended to add specific dependencies for these components. 3.6.4. Camel CXF The camel-cxf JAR has been divided into SOAP vs REST and Spring and non-Spring JARs. It is recommended to choose the specific JAR from the following list when migrating from camel-cxf . camel-cxf-soap camel-cxf-spring-soap camel-cxf-rest camel-cxf-spring-rest camel-cxf-transport camel-cxf-spring-transport For example, if you were using CXF for SOAP with Spring XML, then select camel-cxf-spring-soap and camel-cxf-spring-transport when migrating from camel-cxf . When using Spring Boot, choose from the following starters when you migrate from camel-cxf-starter to SOAP or REST: camel-cxf-soap-starter camel-cxf-rest-starter The camel-cxf XML XSD schemas have also changed namespaces. Table 3.1. Changes to namespaces Old Namespace New Namespace http://camel.apache.org/schema/cxf http://camel.apache.org/schema/cxf/jaxws http://camel.apache.org/schema/cxf/camel-cxf.xsd http://camel.apache.org/schema/cxf/jaxws/camel-cxf.xsd http://camel.apache.org/schema/cxf http://camel.apache.org/schema/cxf/jaxrs http://camel.apache.org/schema/cxf/camel-cxf.xsd http://camel.apache.org/schema/cxf/jaxrs/camel-cxf.xsd The camel-cxf SOAP component has moved to a new jaxws sub-package, that is, org.apache.camel.component.cxf is now org.apache.camel.component.cxf.jaxws . For example, the CxfComponent class is now located in org.apache.camel.component.cxf.jaxws . 3.6.5. FHIR The camel-fhir component has upgraded its hapi-fhir dependency to 4.1.0. The default FHIR version has been changed to R4.
Therefore, if DSTU3 is desired, it has to be explicitly set. 3.6.6. Kafka The camel-kafka component has removed the bridgeEndpoint and circularTopicDetection options, as they are no longer needed; the component now behaves the way bridging worked in Camel 2.x. In other words, camel-kafka sends messages to the topic from the endpoint URI. To override this, use the KafkaConstants.OVERRIDE_TOPIC header with the new topic. See more details in the camel-kafka component documentation. 3.6.7. Telegram The camel-telegram component has moved the authorization token from the URI path to a query parameter; for example, migrate telegram:bots/myTokenHere to telegram:bots?authorizationToken=myTokenHere . 3.6.8. JMX If you run Camel standalone with just camel-core as a dependency, and you want JMX enabled out of the box, then you need to add camel-management as a dependency. For using ManagedCamelContext you now need to get this extension from CamelContext as follows: ManagedCamelContext managed = camelContext.getExtension(ManagedCamelContext.class); 3.6.9. XSLT The XSLT component has moved out of camel-core into camel-xslt and camel-xslt-saxon . The component is separated so that camel-xslt is for using the JDK XSLT engine (Xalan), and camel-xslt-saxon is for when you use Saxon. This means that you should use xslt and xslt-saxon as the component names in your Camel endpoint URIs. If you are using the XSLT aggregation strategy, then use org.apache.camel.component.xslt.saxon.XsltSaxonAggregationStrategy for Saxon support, and use org.apache.camel.component.xslt.saxon.XsltSaxonBuilder for Saxon support if using the xslt builder. Also note that allowStax is only supported in camel-xslt-saxon , as it is not supported by the JDK XSLT engine. 3.6.10. XML DSL Migration The XML DSL has been changed slightly. The custom load balancer EIP has changed from <custom> to <customLoadBalancer> . The XMLSecurity data format has renamed the attribute keyOrTrustStoreParametersId to keyOrTrustStoreParametersRef in the <secureXML> tag. The <zipFile> data format has been renamed to <zipfile> . 3.7. Migrating Camel Maven Plugins The camel-maven-plugin has been split up into two Maven plugins: camel-maven-plugin camel-maven-plugin has the run goal, which is intended for quickly running Camel applications standalone. See https://camel.apache.org/manual/camel-maven-plugin.html for more information. camel-report-maven-plugin The camel-report-maven-plugin has the validate and route-coverage goals, which are used for generating reports of your Camel projects such as validating Camel endpoint URIs and route coverage reports, etc. See https://camel.apache.org/manual/camel-report-maven-plugin.html for more information.
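Before starting a migration, it can help to audit which of the removed or renamed artifacts a project actually pulls in. The following is a hedged sketch using Maven's dependency plugin; the grep pattern lists only a sample of the artifacts mentioned in this chapter and should be extended to match your project:

```bash
# Write the Camel part of the dependency tree to a file, then flag artifacts
# that were removed or renamed in Camel 3 (sample pattern only).
mvn dependency:tree -Dincludes='org.apache.camel*' -DoutputFile=camel-deps.txt
grep -E 'camel-(http4|hdfs2|mina2|mongodb3|netty4|quartz2|rxjava2|jetty9|restlet|linkedin|boon|jibx)' camel-deps.txt
```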
[ "telegram:bots/myTokenHere", "telegram:bots?authorizationToken=myTokenHere", "ManagedCamelContext managed = camelContext.getExtension(ManagedCamelContext.class);" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_extensions_for_quarkus/2.13/html/migrating_fuse_7_applications_to_camel_extensions_for_quarkus/migrating-from-camel-2-to-camel-3
Chapter 1. Multi-site deployments
Chapter 1. Multi-site deployments Red Hat build of Keycloak supports deployments that consist of multiple Red Hat build of Keycloak instances that connect to each other using their Infinispan caches; load balancers can distribute the load evenly across those instances. Those setups are intended for a transparent network on a single site. The Red Hat build of Keycloak high-availability guide goes one step further to describe setups across multiple sites. While this setup adds additional complexity, the extra high availability may be needed for some environments. The different chapters introduce the necessary concepts and building blocks. For each building block, a blueprint shows how to set up a fully functional example. Additional performance tuning and security hardening are still recommended when preparing a production setup.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/high_availability_guide/introduction-
4.9. atlas
4.9. atlas 4.9.1. RHEA-2011:1582 - atlas enhancement update Updated atlas packages that add various enhancements are now available for Red Hat Enterprise Linux 6. The ATLAS (Automatically Tuned Linear Algebra Software) project is a research effort focusing on applying empirical techniques to provide portable performance. The atlas packages provide C and Fortran77 interfaces to a portably efficient BLAS (Basic Linear Algebra Subprograms) implementation and routines from LAPACK (Linear Algebra PACKage). The atlas packages have been upgraded to upstream version 3.8.4, which adds a number of enhancements over the previous version. The atlas package now contains subpackages optimized for Linux on IBM System z architectures. (BZ# 694459 ) All users of atlas are advised to upgrade to these updated packages, which add these enhancements.
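On an affected Red Hat Enterprise Linux 6 system, applying the advisory is a standard package update; for example:

```bash
# Update the ATLAS packages to the enhanced 3.8.4 build and confirm the installed version.
yum update atlas
rpm -q atlas
```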
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/atlas
2.2. Web Server Configuration
2.2. Web Server Configuration The following procedure configures an Apache HTTP server. Ensure that the Apache HTTP server is installed on each node in the cluster. You also need the wget tool installed on the cluster to be able to check the status of the Apache HTTP server. On each node, execute the following command. In order for the Apache resource agent to get the status of the Apache HTTP server, ensure that the following text is present in the /etc/httpd/conf/httpd.conf file on each node in the cluster, and ensure that it has not been commented out. If this text is not already present, add the text to the end of the file. When you use the apache resource agent to manage Apache, it does not use systemd . Because of this, you must edit the logrotate script supplied with Apache so that it does not use systemctl to reload Apache. Remove the following line in the /etc/logrotate.d/httpd file on each node in the cluster. Replace the line you removed with the following three lines. Create a web page for Apache to serve up. On one node in the cluster, mount the file system you created in Section 2.1, "Configuring an LVM Volume with an ext4 File System" , create the file index.html on that file system, then unmount the file system.
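Once the server-status location above is in place, the handler can be checked locally with wget (the reason the procedure installs it). A brief sketch, assuming the check is run before the cluster manages the service and that /server-status is the path configured above:

```bash
# Start httpd temporarily, confirm the status handler answers from the local host,
# then stop it again so that the cluster can manage the service.
systemctl start httpd.service
wget -O - http://127.0.0.1/server-status
systemctl stop httpd.service
```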
[ "yum install -y httpd wget", "<Location /server-status> SetHandler server-status Require local </Location>", "/bin/systemctl reload httpd.service > /dev/null 2>/dev/null || true", "/usr/bin/test -f /run/httpd.pid >/dev/null 2>/dev/null && /usr/bin/ps -q USD(/usr/bin/cat /run/httpd.pid) >/dev/null 2>/dev/null && /usr/sbin/httpd -f /etc/httpd/conf/httpd.conf -c \"PidFile /run/httpd.pid\" -k graceful > /dev/null 2>/dev/null || true", "mount /dev/my_vg/my_lv /var/www/ mkdir /var/www/html mkdir /var/www/cgi-bin mkdir /var/www/error restorecon -R /var/www cat <<-END >/var/www/html/index.html <html> <body>Hello</body> </html> END umount /var/www" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_administration/s1-webserversetup-haaa
9.2. Managing User Entries
9.2. Managing User Entries 9.2.1. About Username Formats The default length for usernames is 32 characters. IdM supports a wide range of username formats, based on this regular expression: Note The trailing USD symbol is permitted for Samba 3.x machine support. Any system limits - such as starting a username with a number on Unix systems - apply to the usernames in IdM. Note Usernames are case insensitive when they are created, meaning that any case letter can be entered but case is ignored when the username is saved. Username are automatically normalized to be all lower case, even if the user is created with mixed case or upper case letters. 9.2.2. Adding Users 9.2.2.1. From the Web UI Open the Identity tab, and select the Users subtab. Click the Add link at the top of the users list. Fill in the user's first and last names. The user login (UID) is automatically generated based on the user's full name, but this can be set manually by clicking the Optional field link. Note Usernames are case insensitive when they are created, meaning that case is ignored. Username are automatically normalized to be all lower case, even if the user is created with mixed case or upper case letters. Click the Add and Edit button to go directly to the expanded entry page and fill in more attribute information, as in Section 9.2.3.1, "From the Web UI" . The user entry is created with some basic information already filled in, based on the given user information and the user entry template. 9.2.2.2. From the Command Line New user entries are added with the user-add command. Attributes (listed in Table 9.2, "Default Identity Management User Attributes" ) can be added to the entry with specific values or the command can be run with no arguments. When no arguments are used, the command prompts for the required user account information and uses the defaults for the other attributes, with the defaults printed below. For example: Any of the user attributes can be passed with the command. This will either set values for optional attributes or override the default values for default attributes. Note Usernames are case insensitive when they are created, meaning that case is ignored. Username are automatically normalized to be all lower case, even if the user is created with mixed case or upper case letters. Important When a user is created without specifying a UID or GID number, then the user account is automatically assigned an ID number that is available in the server or replica range. (Number ranges are described more in Section 9.9, "Managing Unique UID and GID Number Assignments" .) This means that a user always has a unique number for its UID number and, if configured, for its private group. If a number is manually assigned to a user entry, the server does not validate that the uidNumber is unique. It will allow duplicate IDs; this is expected (though discouraged) behavior for POSIX entries. If two entries are assigned the same ID number, only the first entry is returned in a search for that ID number. However, both entries will be returned in searches for other attributes or with ipa user-find --all . 9.2.3. Editing Users 9.2.3.1. From the Web UI Open the Identity tab, and select the Users subtab. Click the name of the user to edit. There are a number of different types of attributes that can be edited for the user. All of the default attributes are listed in Table 9.2, "Default Identity Management User Attributes" . 
Most of the attributes in the Identity Settings and Account Settings areas have default values filled in for them, based on the user information or on the user entry template. Edit the fields or, if necessary, click the Add link by an attribute to create the attribute on the entry. When the edits are done, click the Update link at the top of the page. 9.2.3.2. From the Command Line The user-mod command edits user accounts by adding or changing attributes. At its most basic, the user-mod specifies the user account by login ID, the attribute to edit, and the new value: For example, to change a user's work title from Editor II to Editor III : Identity Management allows multi-valued attributes, based on attributes in LDAP that are allowed to have multiple values. For example, a person may have two email addresses, one for work and one for personal, that are both stored in the mail attribute. Managing multi-valued attributes can be done using the --addattr option. If an attribute allows multiple values - like mail - simply using the command-line argument will overwrite the value with the new value. This is also true for using --setattr . However, using --addattr will add a new attribute; for a multi-valued attribute, it adds the new value in addition to any existing values. Example 9.1. Multiple Mail Attributes A user is created first using his work email account. Then, his personal email account is added. Both email addresses are listed for the user. To set two values at the same time, use the --addattr option twice: 9.2.4. Deleting Users Deleting a user account permanently removes the user entry and all its information from IdM, including group memberships and passwords. External configuration - like a system account and home directory - will still exist on any server or local machine where they were created, but they cannot be accessed through IdM. Deleting a user account is permanent. The information cannot be recovered; a new account must be created. Note If all admin users are deleted, then you must use the Directory Manager account to create a new administrative user. Alternatively, any user who belongs in the group management role can also add a new admin user. 9.2.4.1. With the Web UI Open the Identity tab, and select the Users subtab. Select the checkboxes by the names of the users to delete. Click the Delete link at the top of the task area. When prompted, confirm the delete action. 9.2.4.2. From the Command Line Users are deleted using the user-del command and then the user login. For example, a single user: To delete multiple users, simply list the users, separated by spaces. When deleting multiple users, use the --continue option to force the command to continue regardless of errors. A summary of the successful and failed operations is printed to stdout when the command completes. If --continue is not used, then the command proceeds with deleting users until it encounters an error, and then it exits.
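As noted above, the --continue option keeps a bulk deletion running past individual errors and prints a summary at the end. A short sketch with hypothetical logins:

```bash
# Delete several users; failures are reported in the summary instead of
# stopping the command at the first error.
ipa user-del jsmith bjensen mreynolds --continue
```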
[ "[a-zA-Z0-9_.][a-zA-Z0-9_.-]{0,252}[a-zA-Z0-9_.USD-]?", "[bjensen@server ~]USD ipa user-add [ username ] [ attributes ]", "[bjensen@server ~]USD ipa user-add First name: John Last name: Smith User login [jsmith]: jsmith -------------------- Added user \"jsmith\" -------------------- User login: jsmith First name: John Last name: Smith Full name: John Smith Display name: John Smith Initials: JS Home directory: /home/jsmith GECOS: John Smith Login shell: /bin/sh Kerberos principal: [email protected] Email address: [email protected] UID: 882600007 GID: 882600007 Password: False Member of groups: ipausers Kerberos keys available: False", "[bjensen@server ~]USD ipa user-add jsmith --first=John --last=Smith --manager=bjensen [email protected] --homedir=/home/work/johns --password", "[bjensen@server ~]USD ipa user-mod loginID -- attributeName=newValue", "[bjensen@server ~]USD ipa user-mod jsmith --title=\"Editor III\"", "[bjensen@server ~]USD ipa user-add jsmith --first=John --last=Smith [email protected]", "[bjensen@server ~]USD ipa user-mod jsmith [email protected]", "[bjensen@server ~]USD ipa user-find jsmith --all -------------- 1 user matched -------------- dn: uid=jsmith,cn=users,cn=accounts,dc=example,dc=com User login: jsmith .. Email address: [email protected], [email protected]", "[bjensen@server ~]USD ipa user-add jsmith --first=John --last=Smith [email protected] [email protected] [email protected]", "[bjensen@server ~]USD ipa user-del jsmith", "[bjensen@server ~]USD ipa user-del jsmith bjensen mreynolds cdickens" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/managing-users
18.17. Kdump
18.17. Kdump Use this screen to select whether or not to use Kdump on this system. Kdump is a kernel crash dumping mechanism which, in the event of a system crash, captures information that can be invaluable in determining the cause of the crash. Note that if you enable Kdump , you must reserve a certain amount of system memory for it. As a result, less memory is available for your processes. If you do not want to use Kdump on this system, uncheck Enable kdump . Otherwise, set the amount of memory to reserve for Kdump . You can let the installer reserve a reasonable amount automatically, or you can set any amount manually. When you are satisfied with the settings, click Done to save the configuration and return to the previous screen. Figure 18.35. Kdump Enablement and Configuration
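After the installed system boots, the reservation can be confirmed from the command line; a brief sketch, assuming Kdump was left enabled in this screen:

```bash
# The crashkernel= value reflects the memory reserved for Kdump,
# and the kdump service should be active.
grep -o 'crashkernel=[^ ]*' /proc/cmdline
systemctl is-active kdump.service
```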
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/sect-kdump-s390
12.6. Setting ethers Information for a Host
12.6. Setting ethers Information for a Host NIS can host an ethers table which can be used to manage DHCP configuration files for systems based on their platform, operating system, DNS domain, and MAC address - all information stored in host entries in IdM. In Identity Management, each system is created with a corresponding ethers entry in the directory, in the ou=ethers subtree. This entry is used to create a NIS map for the ethers service which can be managed by the NIS compatibility plug-in in IdM. To configure NIS maps for ethers entries: Add the MAC address attribute to a host entry. For example: Open the nsswitch.conf file. Add a line for the ethers service, and set it to use LDAP for its lookup. Check that the ethers information is available for the client.
[ "cn=server,ou=ethers,dc=example,dc=com", "[jsmith@server ~]USD kinit admin [jsmith@server ~]USD ipa host-mod --macaddress=12:34:56:78:9A:BC server.example.com", "ethers: ldap", "getent ethers server.example.com" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/host-ethers
Chapter 28. SystemProperty schema reference
Chapter 28. SystemProperty schema reference Used in: JvmOptions Property Property type Description name string The system property name. value string The system property value.
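As a hedged illustration of how a SystemProperty entry might be supplied through JvmOptions, the following patch adds one system property to a Kafka custom resource. The cluster name my-cluster, the javaSystemProperties field name, and its placement under spec.kafka are assumptions that are not stated in this schema reference; check the JvmOptions schema for the exact field name before using it:

```bash
# Add one -D system property to the Kafka brokers via a merge patch (sketch only).
oc patch kafka my-cluster --type merge -p '{
  "spec": {
    "kafka": {
      "jvmOptions": {
        "javaSystemProperties": [
          { "name": "javax.net.debug", "value": "ssl" }
        ]
      }
    }
  }
}'
```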
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-systemproperty-reference
Chapter 6. Configuring Red Hat High Availability Clusters on Google Cloud Platform
Chapter 6. Configuring Red Hat High Availability Clusters on Google Cloud Platform This chapter includes information and procedures for configuring a Red Hat High Availability (HA) cluster on Google Cloud Platform (GCP) using Google Compute Engine (GCE) virtual machine (VM) instances as cluster nodes. The chapter includes prerequisite procedures for setting up your environment for GCP. Once you have set up your environment, you can create and configure GCP VM instances. The chapter also includes procedures specific to the creation of HA clusters, which transform individual nodes into a cluster of HA nodes on GCP. These include procedures for installing the High Availability packages and agents on each cluster node, configuring fencing, and installing GCP network resource agents. The chapter refers to GCP documentation in a number of places. For more information, see the referenced GCP documentation. Prerequisites You need to install the GCP software development kit (SDK). For more information see, Installing the Google cloud SDK . Enable your subscriptions in the Red Hat Cloud Access program . The Red Hat Cloud Access program allows you to move your Red Hat Subscription from physical or on-premise systems onto GCP with full support from Red Hat. You must belong to an active GCP project and have sufficient permissions to create resources in the project. Your project should have a service account that belongs to a VM instance and not an individual user. See Using the Compute Engine Default Service Account for information about using the default service account instead of creating a separate service account. If you or your project administrator create a custom service account, the service account should be configured for the following roles. Cloud Trace Agent Compute Admin Compute Network Admin Cloud Datastore User Logging Admin Monitoring Editor Monitoring Metric Writer Service Account Administrator Storage Admin Additional resources Support Policies for RHEL High Availability Clusters - Google Cloud Platform Virtual Machines as Cluster Members Support Policies for RHEL High Availability clusters - Transport Protocols VPC network overview Exploring RHEL High Availability's Components, Concepts, and Features - Overview of Transport Protocols Design Guidance for RHEL High Availability Clusters - Selecting the Transport Protocol Quickstart for Red Hat and Centos 6.1. Red Hat Enterprise Linux image options on GCP The following table lists image choices and the differences in the image options. Table 6.1. Image options Image option Subscriptions Sample scenario Considerations Choose to deploy a custom image that you move to GCP. Leverage your existing Red Hat subscriptions. Enable subscriptions through the Red Hat Cloud Access program , upload your custom image, and attach your subscriptions. The subscription includes the Red Hat product cost; you pay all other instance costs. Custom images that you move to GCP are called "Cloud Access" images because you leverage your existing Red Hat subscriptions. Red Hat provides support directly for Cloud Access images. Choose to deploy an existing GCP image that includes RHEL. The GCP images include a Red Hat product. Choose a RHEL image when you launch an instance on the GCP Compute Engine , or choose an image from the Google Cloud Platform Marketplace . You pay GCP hourly on a pay-as-you-go model. Such images are called "on-demand" images. GCP offers support for on-demand images through a support agreement. 
Important You cannot convert an on-demand instance to a Red Hat Cloud Access instance. To change from an on-demand image to a Red Hat Cloud Access bring-your-own subscription (BYOS) image, create a new Red Hat Cloud Access instance and migrate data from your on-demand instance. Cancel your on-demand instance after you migrate your data to avoid double billing. The remainder of this chapter includes information and procedures pertaining to custom images. Additional resources Red Hat in the Public Cloud Images Red Hat Cloud Access Reference Guide Creating an instance from a custom image 6.2. Required system packages The procedures in this chapter assume you are using a host system running Red Hat Enterprise Linux. To successfully complete the procedures, your host system must have the following packages installed. Table 6.2. System packages Package Description Command qemu-kvm This package provides the user-level KVM emulator and facilitates communication between hosts and guest VMs. # yum install qemu-kvm libvirt qemu-img This package provides disk management for guest VMs. The qemu-img package is installed as a dependency of the qemu-kvm package. libvirt This package provides the server and host-side libraries for interacting with hypervisors and host systems and the libvirtd daemon that handles the library calls, manages VMs, and controls the hypervisor. Table 6.3. Additional Virtualization Packages Package Description Command virt-install This package provides the virt-install command for creating VMs from the command line. # yum install virt-install libvirt-python virt-manager virt-install libvirt-client libvirt-python This package contains a module that permits applications written in the Python programming language to use the interface supplied by the libvirt API. virt-manager This package provides the virt-manager tool, also known as Virtual Machine Manager (VMM). VMM is a graphical tool for administering VMs. It uses the libvirt-client library as the management API. libvirt-client This package provides the client-side APIs and libraries for accessing libvirt servers. The libvirt-client package includes the virsh command line tool to manage and control VMs and hypervisors from the command line or a special virtualization shell. Additional resources Installing Virtualization Packages Manually 6.3. Installing the HA packages and agents Complete the following steps on all nodes to install the High Availability packages and agents. Procedure Disable all repositories. Enable RHEL 7 server and RHEL 7 server HA repositories. Update all packages. Install pcs pacemaker fence agent and resource agent. Reboot the machine if the kernel is updated. 6.4. Configuring HA services Complete the following steps on all nodes to configure High Availability services. Procedure The user hacluster was created during the pcs and pacemaker installation in the step. Create a password for the user hacluster on all cluster nodes. Use the same password for all nodes. If the firewalld service is enabled, add the high availability service to RHEL. Start the pcs service and enable it to start on boot. Verification steps Ensure the pcs service is running. 6.5. Creating a cluster Complete the following steps to create the cluster of nodes. Procedure On one of the nodes, enter the following command to authenticate the pcs user ha cluster . Specify the name of each node in the cluster. Example: Create the cluster. Verification steps Enable the cluster. Start the cluster. 6.6. 
Creating a fence device For most default configurations, the GCP instance names and the RHEL host names are identical. Complete the following steps to configure fencing from any node in the cluster. Procedure Get the GCP instance names from any node in the cluster. Note that the output also shows the internal ID for the instance. Example: Create a fence device. Use the pcmk_host-name command to map the RHEL host name with the instance ID. Example: Verification steps Test the fencing agent for one of the other nodes. Check the status to verify that the node is fenced. Example: 6.7. Configuring GCP node authorization Configure cloud SDK tools to use your account credentials to access GCP. Procedure Enter the following command on each node to initialize each node with your project ID and account credentials. 6.8. Configuring the GCP network resource agent The cluster uses GCP network resource agents attached to a secondary IP address (alias IP) to a running instance. This is a floating IP address that can be passed between different nodes in the cluster. Procedure Enter the following command to view the GCP virtual IP address resource agent (gcp-vpc-move-vip) description. This shows the options and default operations for this agent. You can configure the resource agent to use a primary subnet address range or a secondary subnet address range. This section includes procedures for both. Primary subnet address range Procedure Complete the following steps to configure the resource for the primary VPC subnet. Create the aliasip resource. Include an unused internal IP address. Include the CIDR block in the command. Create an IPaddr2 resource for managing the IP on the node. Group the network resources under vipgrp . Verification steps Verify that the resources have started and are grouped under vipgrp . Verify that the resource can move to a different node. Example: Verify that the vip successfully started on a different node. Secondary subnet address range Complete the following steps to configure the resource for a secondary subnet address range. Prerequisites Create a custom network and subnet Procedure Create a secondary subnet address range. Example: Create the aliasip resource. Create an unused internal IP address in the secondary subnet address range. Include the CIDR block in the command. Create an IPaddr2 resource for managing the IP on the node. Verification steps Verify that the resources have started and are grouped under vipgrp . Verify that the resource can move to a different node. Example: Verify that the vip successfully started on a different node.
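In addition to pcs status, the alias IP assignment can also be checked from the GCP side. A hedged sketch using the standard gcloud CLI (the gcloud-ra wrapper used elsewhere in this chapter is assumed to accept the same arguments); the instance and zone names are placeholders:

```bash
# Show which instance currently holds the alias IP range after a move.
gcloud compute instances describe rhel71-node-03 --zone us-west1-b | grep -A 3 aliasIpRanges
```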
[ "subscription-manager repos --disable=*", "subscription-manager repos --enable=rhel-7-server-rpms subscription-manager repos --enable=rhel-ha-for-rhel-7-server-rpms", "yum update -y", "yum install -y pcs pacemaker fence-agents-gce resource-agents-gcp", "reboot", "passwd hacluster", "firewall-cmd --permanent --add-service=high-availability firewall-cmd --reload", "systemctl enable pcsd.service --now", "systemctl is-active pcsd.service", "pcs cluster auth _hostname1_ _hostname2_ _hostname3_ -u hacluster", "pcs cluster auth node01 node02 node03 -u hacluster node01: Authorized node02: Authorized node03: Authorized", "pcs cluster setup --name cluster-name _hostname1_ _hostname2_ _hostname3_", "pcs cluster enable --all", "pcs cluster start --all", "fence_gce --zone _gcp_ _region_ --project= _gcp_ _project_ -o list", "fence_gce --zone us-west1-b --project=rhel-ha-testing-on-gcp -o list 44358**********3181,InstanceName-3 40819**********6811,InstanceName-1 71736**********3341,InstanceName-2", "pcs stonith create _clusterfence_ fence_gce pcmk_host_map=_pcmk-hpst-map_ fence_gce zone=_gcp-zone_ project=_gcpproject_", "pcs stonith create fencegce fence_gce pcmk_host_map=\"node01:node01-vm;node02:node02-vm;node03:node03-vm\" project=hacluster zone=us-east1-b", "pcs stonith fence gcp nodename", "watch pcs status", "watch pcs status Cluster name: gcp-cluster Stack: corosync Current DC: rhel71-node-02 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum Last updated: Fri Jul 27 12:53:25 2018 Last change: Fri Jul 27 12:51:43 2018 by root via cibadmin on rhel71-node-01 3 nodes configured 3 resources configured Online: [ rhel71-node-01 rhel71-node-02 rhel71-node-03 ] Full list of resources: us-east1-b-fence (stonith:fence_gce): Started rhel71-node-01 Daemon Status: corosync: active/enabled pacemaker: active/enabled pcsd: active/enabled", "gcloud-ra init", "pcs resource describe gcp-vpc-move-vip", "pcs resource create aliasip gcp-vpc-move-vip alias_ip=_UnusedIPaddress/CIDRblock_ --group _group-name_ --group _networking-group_", "pcs resource create vip IPaddr2 nic=_interface_ ip=_AliasIPaddress_ cidr_netmask=32 --group _group-name_ --group _networking-group_", "pcs resource group add vipgrp aliasip vip", "pcs status", "pcs resource move vip _Node_", "pcs resource move vip rhel71-node-03", "pcs status", "gcloud-ra compute networks subnets update _SubnetName_ --region _RegionName_ --add-secondary-ranges _SecondarySubnetName_=_SecondarySubnetRange_", "gcloud-ra compute networks subnets update range0 --region us-west1 --add-secondary-ranges range1=10.10.20.0/24", "pcs resource create aliasip gcp-vpc-move-vip alias_ip=_UnusedIPaddress/CIDRblock_ --group _group-name_ --group _networking-group_", "pcs resource create vip IPaddr2 nic=_interface_ ip=_AliasIPaddress_ cidr_netmask=32 --group _group-name_ --group _networking-group_", "pcs status", "pcs resource move vip _Node_", "pcs resource move vip rhel71-node-03", "pcs status" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/deploying_red_hat_enterprise_linux_7_on_public_cloud_platforms/configuring-rhel-ha-on-gcp_cloud-content
4.4. Fencing
4.4. Fencing In the context of the Red Hat Virtualization environment, fencing is a host reboot initiated by the Manager using a fence agent and performed by a power management device. Fencing allows a cluster to react to unexpected host failures as well as enforce power saving, load balancing, and virtual machine availability policies. Fencing ensures that the role of Storage Pool Manager (SPM) is always assigned to a functional host. If the fenced host was the SPM, the SPM role is relinquished and reassigned to a responsive host. Because the host with the SPM role is the only host that is able to write data domain structure metadata, a non-responsive, un-fenced SPM host causes its environment to lose the ability to create and destroy virtual disks, take snapshots, extend logical volumes, and all other actions that require changes to data domain structure metadata. When a host becomes non-responsive, all of the virtual machines that are currently running on that host can also become non-responsive. However, the non-responsive host retains the lock on the virtual machine hard disk images for virtual machines it is running. Attempting to start a virtual machine on a second host and assign the second host write privileges for the virtual machine hard disk image can cause data corruption. Fencing allows the Red Hat Virtualization Manager to assume that the lock on a virtual machine hard disk image has been released; the Manager can use a fence agent to confirm that the problem host has been rebooted. When this confirmation is received, the Red Hat Virtualization Manager can start a virtual machine from the problem host on another host without risking data corruption. Fencing is the basis for highly-available virtual machines. A virtual machine that has been marked highly-available can not be safely started on an alternate host without the certainty that doing so will not cause data corruption. When a host becomes non-responsive, the Red Hat Virtualization Manager allows a grace period of thirty (30) seconds to pass before any action is taken, to allow the host to recover from any temporary errors. If the host has not become responsive by the time the grace period has passed, the Manager automatically begins to mitigate any negative impact from the non-responsive host. The Manager uses the fencing agent for the power management card on the host to stop the host, confirm it has stopped, start the host, and confirm that the host has been started. When the host finishes booting, it attempts to rejoin the cluster that it was a part of before it was fenced. If the issue that caused the host to become non-responsive has been resolved by the reboot, then the host is automatically set to Up status and is once again capable of starting and hosting virtual machines.
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/technical_reference/fencing
Chapter 19. Kubernetes NMState
Chapter 19. Kubernetes NMState 19.1. About the Kubernetes NMState Operator The Kubernetes NMState Operator provides a Kubernetes API for performing state-driven network configuration across the OpenShift Container Platform cluster's nodes with NMState. The Kubernetes NMState Operator provides users with functionality to configure various network interface types, DNS, and routing on cluster nodes. Additionally, the daemons on the cluster nodes periodically report on the state of each node's network interfaces to the API server. Important Kubernetes NMState Operator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. You must not use Kubernetes NMState Operator in both OpenShift Container Platform and Red Hat Virtualization (RHV) at the same time. Such configuration is unsupported. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Before you can use NMState with OpenShift Container Platform, you must install the Kubernetes NMState Operator. Warning When using OVN-Kubernetes, changing the default gateway interface is not supported. 19.1.1. Installing the Kubernetes NMState Operator You must install the Kubernetes NMState Operator from the web console while logged in with administrator privileges. After it is installed, the Operator can deploy the NMState State Controller as a daemon set across all of the cluster nodes. Procedure Select Operators OperatorHub . In the search field below All Items , enter nmstate and click Enter to search for the Kubernetes NMState Operator. Click on the Kubernetes NMState Operator search result. Click on Install to open the Install Operator window. Click Install to install the Operator. After the Operator finishes installing, click View Operator . Under Provided APIs , click Create Instance to open the dialog box for creating an instance of kubernetes-nmstate . In the Name field of the dialog box, ensure the name of the instance is nmstate. Note The name restriction is a known issue. The instance is a singleton for the entire cluster. Accept the default settings and click Create to create the instance. Summary Once complete, the Operator has deployed the NMState State Controller as a daemon set across all of the cluster nodes. 19.2. Observing node network state Node network state is the network configuration for all nodes in the cluster. 19.2.1. About nmstate OpenShift Container Platform uses nmstate to report on and configure the state of the node network. This makes it possible to modify network policy configuration, such as by creating a Linux bridge on all nodes, by applying a single configuration manifest to the cluster. Node networking is monitored and updated by the following objects: NodeNetworkState Reports the state of the network on that node. NodeNetworkConfigurationPolicy Describes the requested network configuration on nodes. You update the node network configuration, including adding and removing interfaces, by applying a NodeNetworkConfigurationPolicy manifest to the cluster. NodeNetworkConfigurationEnactment Reports the network policies enacted upon each node. 
OpenShift Container Platform supports the use of the following nmstate interface types: Linux Bridge VLAN Bond Ethernet Note If your OpenShift Container Platform cluster uses OVN-Kubernetes as the default Container Network Interface (CNI) provider, you cannot attach a Linux bridge or bonding to the default interface of a host because of a change in the host network topology of OVN-Kubernetes. As a workaround, you can use a secondary network interface connected to your host, or switch to the OpenShift SDN default CNI provider. 19.2.2. Viewing the network state of a node A NodeNetworkState object exists on every node in the cluster. This object is periodically updated and captures the state of the network for that node. Procedure List all the NodeNetworkState objects in the cluster: USD oc get nns Inspect a NodeNetworkState object to view the network on that node. The output in this example has been redacted for clarity: USD oc get nns node01 -o yaml Example output apiVersion: nmstate.io/v1beta1 kind: NodeNetworkState metadata: name: node01 1 status: currentState: 2 dns-resolver: ... interfaces: ... route-rules: ... routes: ... lastSuccessfulUpdateTime: "2020-01-31T12:14:00Z" 3 1 The name of the NodeNetworkState object is taken from the node. 2 The currentState contains the complete network configuration for the node, including DNS, interfaces, and routes. 3 Timestamp of the last successful update. This is updated periodically as long as the node is reachable and can be used to evaluate the freshness of the report. 19.3. Updating node network configuration You can update the node network configuration, such as adding or removing interfaces from nodes, by applying NodeNetworkConfigurationPolicy manifests to the cluster. Warning When using OVN-Kubernetes, changing the default gateway interface is not supported. 19.3.1. About nmstate OpenShift Container Platform uses nmstate to report on and configure the state of the node network. This makes it possible to modify network policy configuration, such as by creating a Linux bridge on all nodes, by applying a single configuration manifest to the cluster. Node networking is monitored and updated by the following objects: NodeNetworkState Reports the state of the network on that node. NodeNetworkConfigurationPolicy Describes the requested network configuration on nodes. You update the node network configuration, including adding and removing interfaces, by applying a NodeNetworkConfigurationPolicy manifest to the cluster. NodeNetworkConfigurationEnactment Reports the network policies enacted upon each node. OpenShift Container Platform supports the use of the following nmstate interface types: Linux Bridge VLAN Bond Ethernet Note If your OpenShift Container Platform cluster uses OVN-Kubernetes as the default Container Network Interface (CNI) provider, you cannot attach a Linux bridge or bonding to the default interface of a host because of a change in the host network topology of OVN-Kubernetes. As a workaround, you can use a secondary network interface connected to your host, or switch to the OpenShift SDN default CNI provider. 19.3.2. Creating an interface on nodes Create an interface on nodes in the cluster by applying a NodeNetworkConfigurationPolicy manifest to the cluster. The manifest details the requested configuration for the interface. By default, the manifest applies to all nodes in the cluster. To add the interface to specific nodes, add the spec: nodeSelector parameter and the appropriate <key>:<value> for your node selector.
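For example, a nodeSelector that targets a single node by hostname might look like the following snippet; this is a sketch only, and <node01> is a placeholder that you replace with an actual node name, as in the examples later in this section:
spec:
  nodeSelector:
    kubernetes.io/hostname: <node01>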
Procedure Create the NodeNetworkConfigurationPolicy manifest. The following example configures a Linux bridge on all worker nodes: apiVersion: nmstate.io/v1beta1 kind: NodeNetworkConfigurationPolicy metadata: name: <br1-eth1-policy> 1 spec: nodeSelector: 2 node-role.kubernetes.io/worker: "" 3 desiredState: interfaces: - name: br1 description: Linux bridge with eth1 as a port 4 type: linux-bridge state: up ipv4: dhcp: true enabled: true bridge: options: stp: enabled: false port: - name: eth1 1 Name of the policy. 2 Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster. 3 This example uses the node-role.kubernetes.io/worker: "" node selector to select all worker nodes in the cluster. 4 Optional: Human-readable description for the interface. Create the node network policy: USD oc apply -f <br1-eth1-policy.yaml> 1 1 File name of the node network configuration policy manifest. Additional resources Example for creating multiple interfaces in the same policy Examples of different IP management methods in policies 19.3.3. Confirming node network policy updates on nodes A NodeNetworkConfigurationPolicy manifest describes your requested network configuration for nodes in the cluster. The node network policy includes your requested network configuration and the status of execution of the policy on the cluster as a whole. When you apply a node network policy, a NodeNetworkConfigurationEnactment object is created for every node in the cluster. The node network configuration enactment is a read-only object that represents the status of execution of the policy on that node. If the policy fails to be applied on the node, the enactment for that node includes a traceback for troubleshooting. Procedure To confirm that a policy has been applied to the cluster, list the policies and their status: USD oc get nncp Optional: If a policy is taking longer than expected to successfully configure, you can inspect the requested state and status conditions of a particular policy: USD oc get nncp <policy> -o yaml Optional: If a policy is taking longer than expected to successfully configure on all nodes, you can list the status of the enactments on the cluster: USD oc get nnce Optional: To view the configuration of a particular enactment, including any error reporting for a failed configuration: USD oc get nnce <node>.<policy> -o yaml 19.3.4. Removing an interface from nodes You can remove an interface from one or more nodes in the cluster by editing the NodeNetworkConfigurationPolicy object and setting the state of the interface to absent . Removing an interface from a node does not automatically restore the node network configuration to its previous state. If you want to restore the previous state, you will need to define that node network configuration in the policy. If you remove a bridge or bonding interface, any node NICs in the cluster that were previously attached or subordinate to that bridge or bonding interface are placed in a down state and become unreachable. To avoid losing connectivity, configure the node NIC in the same policy so that it has a status of up and either DHCP or a static IP address. Note Deleting the node network policy that added an interface does not change the configuration of the policy on the node. Although a NodeNetworkConfigurationPolicy is an object in the cluster, it only represents the requested configuration. Similarly, removing an interface does not delete the policy.
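If you want to remove the policy object itself, delete it explicitly. The following is a minimal sketch, assuming the nncp short name used elsewhere in this chapter; note that this removes only the requested configuration object, not the configuration already applied to the nodes:
USD oc delete nncp <br1-eth1-policy> 1
1 Name of the node network configuration policy to delete.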
Procedure Update the NodeNetworkConfigurationPolicy manifest used to create the interface. The following example removes a Linux bridge and configures the eth1 NIC with DHCP to avoid losing connectivity: apiVersion: nmstate.io/v1beta1 kind: NodeNetworkConfigurationPolicy metadata: name: <br1-eth1-policy> 1 spec: nodeSelector: 2 node-role.kubernetes.io/worker: "" 3 desiredState: interfaces: - name: br1 type: linux-bridge state: absent 4 - name: eth1 5 type: ethernet 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9 1 Name of the policy. 2 Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster. 3 This example uses the node-role.kubernetes.io/worker: "" node selector to select all worker nodes in the cluster. 4 Changing the state to absent removes the interface. 5 The name of the interface that is to be unattached from the bridge interface. 6 The type of interface. This example creates an Ethernet networking interface. 7 The requested state for the interface. 8 Optional: If you do not use dhcp , you can either set a static IP or leave the interface without an IP address. 9 Enables ipv4 in this example. Update the policy on the node and remove the interface: USD oc apply -f <br1-eth1-policy.yaml> 1 1 File name of the policy manifest. 19.3.5. Example policy configurations for different interfaces 19.3.5.1. Example: Linux bridge interface node network configuration policy Create a Linux bridge interface on nodes in the cluster by applying a NodeNetworkConfigurationPolicy manifest to the cluster. The following YAML file is an example of a manifest for a Linux bridge interface. It includes sample values that you must replace with your own information. apiVersion: nmstate.io/v1beta1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: br1 4 description: Linux bridge with eth1 as a port 5 type: linux-bridge 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9 bridge: options: stp: enabled: false 10 port: - name: eth1 11 1 Name of the policy. 2 Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster. 3 This example uses a hostname node selector. 4 Name of the interface. 5 Optional: Human-readable description of the interface. 6 The type of interface. This example creates a bridge. 7 The requested state for the interface after creation. 8 Optional: If you do not use dhcp , you can either set a static IP or leave the interface without an IP address. 9 Enables ipv4 in this example. 10 Disables stp in this example. 11 The node NIC to which the bridge attaches. 19.3.5.2. Example: VLAN interface node network configuration policy Create a VLAN interface on nodes in the cluster by applying a NodeNetworkConfigurationPolicy manifest to the cluster. The following YAML file is an example of a manifest for a VLAN interface. It includes sample values that you must replace with your own information. apiVersion: nmstate.io/v1beta1 kind: NodeNetworkConfigurationPolicy metadata: name: vlan-eth1-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: eth1.102 4 description: VLAN using eth1 5 type: vlan 6 state: up 7 vlan: base-iface: eth1 8 id: 102 9 1 Name of the policy. 2 Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster. 3 This example uses a hostname node selector. 4 Name of the interface.
5 Optional: Human-readable description of the interface. 6 The type of interface. This example creates a VLAN. 7 The requested state for the interface after creation. 8 The node NIC to which the VLAN is attached. 9 The VLAN tag. 19.3.5.3. Example: Bond interface node network configuration policy Create a bond interface on nodes in the cluster by applying a NodeNetworkConfigurationPolicy manifest to the cluster. Note OpenShift Container Platform only supports the following bond modes: mode=1 active-backup mode=2 balance-xor mode=4 802.3ad mode=5 balance-tlb mode=6 balance-alb The following YAML file is an example of a manifest for a bond interface. It includes sample values that you must replace with your own information. apiVersion: nmstate.io/v1beta1 kind: NodeNetworkConfigurationPolicy metadata: name: bond0-eth1-eth2-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: bond0 4 description: Bond with ports eth1 and eth2 5 type: bond 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9 link-aggregation: mode: active-backup 10 options: miimon: '140' 11 port: 12 - eth1 - eth2 mtu: 1450 13 1 Name of the policy. 2 Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster. 3 This example uses a hostname node selector. 4 Name of the interface. 5 Optional: Human-readable description of the interface. 6 The type of interface. This example creates a bond. 7 The requested state for the interface after creation. 8 Optional: If you do not use dhcp , you can either set a static IP or leave the interface without an IP address. 9 Enables ipv4 in this example. 10 The driver mode for the bond. This example uses an active backup mode. 11 Optional: This example uses miimon to inspect the bond link every 140ms. 12 The subordinate node NICs in the bond. 13 Optional: The maximum transmission unit (MTU) for the bond. If not specified, this value is set to 1500 by default. 19.3.5.4. Example: Ethernet interface node network configuration policy Configure an Ethernet interface on nodes in the cluster by applying a NodeNetworkConfigurationPolicy manifest to the cluster. The following YAML file is an example of a manifest for an Ethernet interface. It includes sample values that you must replace with your own information. apiVersion: nmstate.io/v1beta1 kind: NodeNetworkConfigurationPolicy metadata: name: eth1-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: eth1 4 description: Configuring eth1 on node01 5 type: ethernet 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9 1 Name of the policy. 2 Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster. 3 This example uses a hostname node selector. 4 Name of the interface. 5 Optional: Human-readable description of the interface. 6 The type of interface. This example creates an Ethernet networking interface. 7 The requested state for the interface after creation. 8 Optional: If you do not use dhcp , you can either set a static IP or leave the interface without an IP address. 9 Enables ipv4 in this example. 19.3.5.5. Example: Multiple interfaces in the same node network configuration policy You can create multiple interfaces in the same node network configuration policy. These interfaces can reference each other, allowing you to build and deploy a network configuration by using a single policy manifest.
The following example snippet creates a bond that is named bond10 across two NICs and a Linux bridge that is named br1 that connects to the bond. #... interfaces: - name: bond10 description: Bonding eth2 and eth3 for Linux bridge type: bond state: up link-aggregation: port: - eth2 - eth3 - name: br1 description: Linux bridge on bond type: linux-bridge state: up bridge: port: - name: bond10 #... 19.3.6. Examples: IP management The following example configuration snippets demonstrate different methods of IP management. These examples use the ethernet interface type to simplify the example while showing the related context in the policy configuration. These IP management examples can be used with the other interface types. 19.3.6.1. Static The following snippet statically configures an IP address on the Ethernet interface: ... interfaces: - name: eth1 description: static IP on eth1 type: ethernet state: up ipv4: dhcp: false address: - ip: 192.168.122.250 1 prefix-length: 24 enabled: true ... 1 Replace this value with the static IP address for the interface. 19.3.6.2. No IP address The following snippet ensures that the interface has no IP address: ... interfaces: - name: eth1 description: No IP on eth1 type: ethernet state: up ipv4: enabled: false ... 19.3.6.3. Dynamic host configuration The following snippet configures an Ethernet interface that uses a dynamic IP address, gateway address, and DNS: ... interfaces: - name: eth1 description: DHCP on eth1 type: ethernet state: up ipv4: dhcp: true enabled: true ... The following snippet configures an Ethernet interface that uses a dynamic IP address but does not use a dynamic gateway address or DNS: ... interfaces: - name: eth1 description: DHCP without gateway or DNS on eth1 type: ethernet state: up ipv4: dhcp: true auto-gateway: false auto-dns: false enabled: true ... 19.3.6.4. DNS The following snippet sets DNS configuration on the host. ... interfaces: ... dns-resolver: config: search: - example.com - example.org server: - 8.8.8.8 ... 19.3.6.5. Static routing The following snippet configures a static route and a static IP on interface eth1 . ... interfaces: - name: eth1 description: Static routing on eth1 type: ethernet state: up ipv4: dhcp: false address: - ip: 192.0.2.251 1 prefix-length: 24 enabled: true routes: config: - destination: 198.51.100.0/24 metric: 150 next-hop-address: 192.0.2.1 2 next-hop-interface: eth1 table-id: 254 ... 1 The static IP address for the Ethernet interface. 2 The next hop address for the node traffic. This must be in the same subnet as the IP address set for the Ethernet interface. 19.4. Troubleshooting node network configuration If the node network configuration encounters an issue, the policy is automatically rolled back and the enactments report failure. This includes issues such as: The configuration fails to be applied on the host. The host loses connection to the default gateway. The host loses connection to the API server. 19.4.1. Troubleshooting an incorrect node network configuration policy configuration You can apply changes to the node network configuration across your entire cluster by applying a node network configuration policy. If you apply an incorrect configuration, you can use the following example to troubleshoot and correct the failed node network policy. In this example, a Linux bridge policy is applied to an example cluster that has three control plane nodes (master) and three compute (worker) nodes. The policy fails to be applied because it references an incorrect interface.
To find the error, investigate the available NMState resources. You can then update the policy with the correct configuration. Procedure Create a policy and apply it to your cluster. The following example creates a simple bridge on the ens01 interface: apiVersion: nmstate.io/v1beta1 kind: NodeNetworkConfigurationPolicy metadata: name: ens01-bridge-testfail spec: desiredState: interfaces: - name: br1 description: Linux bridge with the wrong port type: linux-bridge state: up ipv4: dhcp: true enabled: true bridge: options: stp: enabled: false port: - name: ens01 USD oc apply -f ens01-bridge-testfail.yaml Example output nodenetworkconfigurationpolicy.nmstate.io/ens01-bridge-testfail created Verify the status of the policy by running the following command: USD oc get nncp The output shows that the policy failed: Example output NAME STATUS ens01-bridge-testfail FailedToConfigure However, the policy status alone does not indicate if it failed on all nodes or a subset of nodes. List the node network configuration enactments to see if the policy was successful on any of the nodes. If the policy failed for only a subset of nodes, it suggests that the problem is with a specific node configuration. If the policy failed on all nodes, it suggests that the problem is with the policy. USD oc get nnce The output shows that the policy failed on all nodes: Example output NAME STATUS control-plane-1.ens01-bridge-testfail FailedToConfigure control-plane-2.ens01-bridge-testfail FailedToConfigure control-plane-3.ens01-bridge-testfail FailedToConfigure compute-1.ens01-bridge-testfail FailedToConfigure compute-2.ens01-bridge-testfail FailedToConfigure compute-3.ens01-bridge-testfail FailedToConfigure View one of the failed enactments and look at the traceback. The following command uses the output tool jsonpath to filter the output: USD oc get nnce compute-1.ens01-bridge-testfail -o jsonpath='{.status.conditions[?(@.type=="Failing")].message}' This command returns a large traceback that has been edited for brevity: Example output error reconciling NodeNetworkConfigurationPolicy at desired state apply: , failed to execute nmstatectl set --no-commit --timeout 480: 'exit status 1' '' ... 
libnmstate.error.NmstateVerificationError: desired ======= --- name: br1 type: linux-bridge state: up bridge: options: group-forward-mask: 0 mac-ageing-time: 300 multicast-snooping: true stp: enabled: false forward-delay: 15 hello-time: 2 max-age: 20 priority: 32768 port: - name: ens01 description: Linux bridge with the wrong port ipv4: address: [] auto-dns: true auto-gateway: true auto-routes: true dhcp: true enabled: true ipv6: enabled: false mac-address: 01-23-45-67-89-AB mtu: 1500 current ======= --- name: br1 type: linux-bridge state: up bridge: options: group-forward-mask: 0 mac-ageing-time: 300 multicast-snooping: true stp: enabled: false forward-delay: 15 hello-time: 2 max-age: 20 priority: 32768 port: [] description: Linux bridge with the wrong port ipv4: address: [] auto-dns: true auto-gateway: true auto-routes: true dhcp: true enabled: true ipv6: enabled: false mac-address: 01-23-45-67-89-AB mtu: 1500 difference ========== --- desired +++ current @@ -13,8 +13,7 @@ hello-time: 2 max-age: 20 priority: 32768 - port: - - name: ens01 + port: [] description: Linux bridge with the wrong port ipv4: address: [] line 651, in _assert_interfaces_equal\n current_state.interfaces[ifname],\nlibnmstate.error.NmstateVerificationError: The NmstateVerificationError lists the desired policy configuration, the current configuration of the policy on the node, and the difference highlighting the parameters that do not match. In this example, the port is included in the difference , which suggests that the problem is the port configuration in the policy. To ensure that the policy is configured properly, view the network configuration for one or all of the nodes by requesting the NodeNetworkState object. The following command returns the network configuration for the control-plane-1 node: The output shows that the interface name on the nodes is ens1 but the failed policy incorrectly uses ens01 : Example output - ipv4: ... name: ens1 state: up type: ethernet Correct the error by editing the existing policy: USD oc edit nncp ens01-bridge-testfail ... port: - name: ens1 Save the policy to apply the correction. Check the status of the policy to ensure it updated successfully: USD oc get nncp Example output NAME STATUS ens01-bridge-testfail SuccessfullyConfigured The updated policy is successfully configured on all nodes in the cluster.
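As an additional check after the correction, you can confirm the enactment for an individual node by using the <node>.<policy> naming convention shown earlier in this chapter. The following is a minimal sketch; the node and policy names are taken from this example and will differ in your cluster:
USD oc get nnce control-plane-1.ens01-bridge-testfail -o yaml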
[ "oc get nns", "oc get nns node01 -o yaml", "apiVersion: nmstate.io/v1beta1 kind: NodeNetworkState metadata: name: node01 1 status: currentState: 2 dns-resolver: interfaces: route-rules: routes: lastSuccessfulUpdateTime: \"2020-01-31T12:14:00Z\" 3", "apiVersion: nmstate.io/v1beta1 kind: NodeNetworkConfigurationPolicy metadata: name: <br1-eth1-policy> 1 spec: nodeSelector: 2 node-role.kubernetes.io/worker: \"\" 3 desiredState: interfaces: - name: br1 description: Linux bridge with eth1 as a port 4 type: linux-bridge state: up ipv4: dhcp: true enabled: true bridge: options: stp: enabled: false port: - name: eth1", "oc apply -f <br1-eth1-policy.yaml> 1", "oc get nncp", "oc get nncp <policy> -o yaml", "oc get nnce", "oc get nnce <node>.<policy> -o yaml", "apiVersion: nmstate.io/v1beta1 kind: NodeNetworkConfigurationPolicy metadata: name: <br1-eth1-policy> 1 spec: nodeSelector: 2 node-role.kubernetes.io/worker: \"\" 3 desiredState: interfaces: - name: br1 type: linux-bridge state: absent 4 - name: eth1 5 type: ethernet 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9", "oc apply -f <br1-eth1-policy.yaml> 1", "apiVersion: nmstate.io/v1beta1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: br1 4 description: Linux bridge with eth1 as a port 5 type: linux-bridge 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9 bridge: options: stp: enabled: false 10 port: - name: eth1 11", "apiVersion: nmstate.io/v1beta1 kind: NodeNetworkConfigurationPolicy metadata: name: vlan-eth1-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: eth1.102 4 description: VLAN using eth1 5 type: vlan 6 state: up 7 vlan: base-iface: eth1 8 id: 102 9", "apiVersion: nmstate.io/v1beta1 kind: NodeNetworkConfigurationPolicy metadata: name: bond0-eth1-eth2-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: bond0 4 description: Bond with ports eth1 and eth2 5 type: bond 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9 link-aggregation: mode: active-backup 10 options: miimon: '140' 11 port: 12 - eth1 - eth2 mtu: 1450 13", "apiVersion: nmstate.io/v1beta1 kind: NodeNetworkConfigurationPolicy metadata: name: eth1-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: eth1 4 description: Configuring eth1 on node01 5 type: ethernet 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9", "# interfaces: - name: bond10 description: Bonding eth2 and eth3 for Linux bridge type: bond state: up link-aggregation: port: - eth2 - eth3 - name: br1 description: Linux bridge on bond type: linux-bridge state: up bridge: port: - name: bond10 #", "interfaces: - name: eth1 description: static IP on eth1 type: ethernet state: up ipv4: dhcp: false address: - ip: 192.168.122.250 1 prefix-length: 24 enabled: true", "interfaces: - name: eth1 description: No IP on eth1 type: ethernet state: up ipv4: enabled: false", "interfaces: - name: eth1 description: DHCP on eth1 type: ethernet state: up ipv4: dhcp: true enabled: true", "interfaces: - name: eth1 description: DHCP without gateway or DNS on eth1 type: ethernet state: up ipv4: dhcp: true auto-gateway: false auto-dns: false enabled: true", "interfaces: dns-resolver: config: search: - example.com - example.org server: - 8.8.8.8", "interfaces: - name: eth1 description: Static routing on eth1 type: ethernet state: up ipv4: dhcp: false address: - ip: 192.0.2.251 1 
prefix-length: 24 enabled: true routes: config: - destination: 198.51.100.0/24 metric: 150 next-hop-address: 192.0.2.1 2 next-hop-interface: eth1 table-id: 254", "apiVersion: nmstate.io/v1beta1 kind: NodeNetworkConfigurationPolicy metadata: name: ens01-bridge-testfail spec: desiredState: interfaces: - name: br1 description: Linux bridge with the wrong port type: linux-bridge state: up ipv4: dhcp: true enabled: true bridge: options: stp: enabled: false port: - name: ens01", "oc apply -f ens01-bridge-testfail.yaml", "nodenetworkconfigurationpolicy.nmstate.io/ens01-bridge-testfail created", "oc get nncp", "NAME STATUS ens01-bridge-testfail FailedToConfigure", "oc get nnce", "NAME STATUS control-plane-1.ens01-bridge-testfail FailedToConfigure control-plane-2.ens01-bridge-testfail FailedToConfigure control-plane-3.ens01-bridge-testfail FailedToConfigure compute-1.ens01-bridge-testfail FailedToConfigure compute-2.ens01-bridge-testfail FailedToConfigure compute-3.ens01-bridge-testfail FailedToConfigure", "oc get nnce compute-1.ens01-bridge-testfail -o jsonpath='{.status.conditions[?(@.type==\"Failing\")].message}'", "error reconciling NodeNetworkConfigurationPolicy at desired state apply: , failed to execute nmstatectl set --no-commit --timeout 480: 'exit status 1' '' libnmstate.error.NmstateVerificationError: desired ======= --- name: br1 type: linux-bridge state: up bridge: options: group-forward-mask: 0 mac-ageing-time: 300 multicast-snooping: true stp: enabled: false forward-delay: 15 hello-time: 2 max-age: 20 priority: 32768 port: - name: ens01 description: Linux bridge with the wrong port ipv4: address: [] auto-dns: true auto-gateway: true auto-routes: true dhcp: true enabled: true ipv6: enabled: false mac-address: 01-23-45-67-89-AB mtu: 1500 current ======= --- name: br1 type: linux-bridge state: up bridge: options: group-forward-mask: 0 mac-ageing-time: 300 multicast-snooping: true stp: enabled: false forward-delay: 15 hello-time: 2 max-age: 20 priority: 32768 port: [] description: Linux bridge with the wrong port ipv4: address: [] auto-dns: true auto-gateway: true auto-routes: true dhcp: true enabled: true ipv6: enabled: false mac-address: 01-23-45-67-89-AB mtu: 1500 difference ========== --- desired +++ current @@ -13,8 +13,7 @@ hello-time: 2 max-age: 20 priority: 32768 - port: - - name: ens01 + port: [] description: Linux bridge with the wrong port ipv4: address: [] line 651, in _assert_interfaces_equal\\n current_state.interfaces[ifname],\\nlibnmstate.error.NmstateVerificationError:", "oc get nns control-plane-1 -o yaml", "- ipv4: name: ens1 state: up type: ethernet", "oc edit nncp ens01-bridge-testfail", "port: - name: ens1", "oc get nncp", "NAME STATUS ens01-bridge-testfail SuccessfullyConfigured" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/networking/kubernetes-nmstate
17.4.3.3. Binding and Redirection Options
17.4.3.3. Binding and Redirection Options The service configuration files for xinetd support binding the service to an IP address and redirecting incoming requests for that service to another IP address, hostname, or port. Binding is controlled with the bind option in the service-specific configuration files and links the service to one IP address on the system. Once configured, the bind option only allows requests for the proper IP address to access the service. In this way, different services can be bound to different network interfaces based on need. This is particularly useful for systems with multiple network adapters or with multiple IP addresses configured. On such a system, insecure services, like Telnet, can be configured to listen only on the interface connected to a private network and not to the interface connected with the Internet. The redirect option accepts an IP address or hostname followed by a port number. It configures the service to redirect any requests for this service to the specified host and port number. This feature can be used to point to another port number on the same system, redirect the request to a different IP address on the same machine, shift the request to a totally different system and port number, or any combination of these options. In this way, a user connecting to a certain service on a system may be rerouted to another system with no disruption. The xinetd daemon is able to accomplish this redirection by spawning a process that stays alive for the duration of the connection between the requesting client machine and the host actually providing the service, transferring data between the two systems. But the advantages of the bind and redirect options are most clearly evident when they are used together. By binding a service to a particular IP address on a system and then redirecting requests for this service to a second machine that only the first machine can see, an internal system can be used to provide services for a totally different network. Alternatively, these options can be used to limit the exposure of a particular service on a multi-homed machine to a known IP address, as well as redirect any requests for that service to another machine specially configured for that purpose. For example, consider a system that is used as a firewall with this setting for its Telnet service: The bind and redirect options in this file ensure that the Telnet service on the machine is bound to the external IP address (123.123.123.123), the one facing the Internet. In addition, any requests for Telnet service sent to 123.123.123.123 are redirected via a second network adapter to an internal IP address (10.0.1.13) that only the firewall and internal systems can access. The firewall then sends the communication between the two systems, and the connecting system thinks it is connected to 123.123.123.123 when it is actually connected to a different machine. This feature is particularly useful for users with broadband connections and only one fixed IP address. When using Network Address Translation (NAT), the systems behind the gateway machine, which are using internal-only IP addresses, are not available from outside the gateway system. However, when certain services controlled by xinetd are configured with the bind and redirect options, the gateway machine can act as a proxy between outside systems and a particular internal machine configured to provide the service.
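For comparison, the bind option can also be used on its own to restrict where a service listens, as described earlier for insecure services on multi-homed systems. The following is a sketch only, using the same file format as the example above; the address 10.0.1.1 is a placeholder for an interface on the private network:
service telnet
{
        socket_type     = stream
        wait            = no
        server          = /usr/sbin/in.telnetd
        bind            = 10.0.1.1
}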
In addition, the various xinetd access control and logging options are also available for additional protection.
[ "service telnet { socket_type = stream wait = no server = /usr/sbin/in.telnetd log_on_success += DURATION USERID log_on_failure += USERID bind = 123.123.123.123 redirect = 10.0.1.13 23 }" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s3-tcpwrappers-xinetd-config-redirection
Chapter 6. Monitoring the cluster on the Ceph dashboard
Chapter 6. Monitoring the cluster on the Ceph dashboard As a storage administrator, you can use Red Hat Ceph Storage Dashboard to monitor specific aspects of the cluster based on types of hosts, services, data access methods, and more. This section covers the following administrative tasks: Monitoring hosts of the Ceph cluster on the dashboard . Viewing and editing the configuration of the Ceph cluster on the dashboard . Viewing and editing the manager modules of the Ceph cluster on the dashboard . Monitoring monitors of the Ceph cluster on the dashboard . Monitoring services of the Ceph cluster on the dashboard . Monitoring Ceph OSDs on the dashboard . Monitoring HAProxy on the dashboard . Viewing the CRUSH map of the Ceph cluster on the dashboard . Filtering logs of the Ceph cluster on the dashboard . Viewing centralized logs of the Ceph cluster on the dashboard . Monitoring pools of the Ceph cluster on the dashboard. Monitoring Ceph file systems on the dashboard. Monitoring Ceph Object Gateway daemons on the dashboard. Monitoring block device images on the Ceph dashboard. 6.1. Monitoring hosts of the Ceph cluster on the dashboard You can monitor the hosts of the cluster on the Red Hat Ceph Storage Dashboard. The following are the different tabs on the hosts page. Each tab contains a table with the relevant information. The tables are searchable and customizable by column and row. To change the order of the columns, select the column name and drag to place within the table. To select which columns are displayed, click the toggle columns button and select or clear column names. Enter the number of rows to be displayed in the row selector field. Devices This tab has a table that details the device ID, state of the device health, life expectancy, device name, prediction creation date, and the daemons on the hosts. Physical Disks This tab has a table that details all disks attached to a selected host, as well as their type, size and others. It has details such as device path, type of device, available, vendor, model, size, and the OSDs deployed. To identify which disk is where on the physical device, select the device and click Identify . Select the duration of how long the LED should blink for to find the selected disk. Daemons This tab has a table that details all services that have been deployed on the selected host, which container they are running in, and their current status. The table has details such as daemon name, daemon version, status, when the daemon was last refreshed, CPU usage, memory usage (in MiB), and daemon events. Daemon actions can be run from this tab. For more details, see Daemon actions . Performance Details This tab has details such as OSDs deployed, CPU utilization, RAM usage, network load, network drop rate, and OSD disk performance statistics. View performance information through the embedded Grafana Dashboard. Device health For SMART-enabled devices, you can get the individual health status and SMART data only on the OSD deployed hosts. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. Hosts are added to the storage cluster. All the services, monitor, manager, and OSD daemons are deployed on the storage cluster. Procedure From the dashboard navigation, go to Cluster->Hosts . On the Hosts List tab, expand the host row and select the host with the daemon to perform the action on. On the Daemons tab of the host, select the row with the daemon. Note The Daemons table can be searched and filtered.
Select the action that needs to be run on the daemon. The options are Start , Stop , Restart , and Redeploy . Figure 6.1. Monitoring hosts of the Ceph cluster Additional Resources See the Ceph performance counters in the Red Hat Ceph Storage Administration Guide for more details. 6.2. Viewing and editing the configuration of the Ceph cluster on the dashboard You can view various configuration options of the Ceph cluster on the dashboard. You can edit only some configuration options. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. All the services are deployed on the storage cluster. Procedure From the dashboard navigation, go to Administration->Configuration . To view the details of the configuration, expand the row contents. Figure 6.2. Configuration options Optional: Use the search field to find a configuration. Optional: You can filter for a specific configuration. Use the following filters: Level - Basic, advanced, or dev Service - Any, mon, mgr, osd, mds, common, mds_client, rgw, and similar filters. Source - Any, mon, and similar filters Modified - yes or no To edit a configuration, select the configuration row and click Edit . Use the Edit form to edit the required parameters, and click Update . A notification displays that the configuration was updated successfully. Additional Resources See the Ceph Network Configuration chapter in the Red Hat Ceph Storage Configuration Guide for more details. 6.3. Viewing and editing the manager modules of the Ceph cluster on the dashboard Manager modules are used to manage module-specific configuration settings. For example, you can enable alerts for the health of the cluster. You can view, enable or disable, and edit the manager modules of a cluster on the Red Hat Ceph Storage dashboard. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. Viewing the manager modules From the dashboard navigation, go to Administration->Manager Modules . To view the details of a specific manager module, expand the row contents. Figure 6.3. Manager modules Enabling a manager module Select the row and click Enable from the action drop-down. Disabling a manager module Select the row and click Disable from the action drop-down. Editing a manager module Select the row: Note Not all modules have configurable parameters. If a module is not configurable, the Edit button is disabled. Edit the required parameters and click Update . A notification displays that the module was updated successfully. 6.4. Monitoring monitors of the Ceph cluster on the dashboard You can monitor the performance of the Ceph monitors on the landing page of the Red Hat Ceph Storage dashboard You can also view the details such as status, quorum, number of open session, and performance counters of the monitors in the Monitors panel. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. Monitors are deployed in the storage cluster. Procedure From the dashboard navigation, go to Cluster->Monitors . The Monitors panel displays information about the overall monitor status and monitor hosts that are in and out of quorum. To see the number of open sessions, hover the cursor over the Open Sessions . To see performance counters for any monitor, click Name in the In Quorum and Not In Quorum tables. Figure 6.4. Viewing monitor Performance Counters Additional Resources See the Ceph monitors section in the Red Hat Ceph Storage Operations guide . 
See the Ceph performance counters in the Red Hat Ceph Storage Administration Guide for more details. 6.5. Monitoring services of the Ceph cluster on the dashboard You can monitor the services of the cluster on the Red Hat Ceph Storage Dashboard. You can view the details such as hostname, daemon type, daemon ID, container ID, container image name, container image ID, version status and last refreshed time. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. Hosts are added to the storage cluster. All the services are deployed on the storage cluster. Procedure From the dashboard navigation, go to Administration->Services . Expand the service for more details. Figure 6.5. Monitoring services of the Ceph cluster Additional Resources See the Introduction to the Ceph Orchestrator in the Red Hat Ceph Storage Operations Guide for more details. 6.6. Monitoring Ceph OSDs on the dashboard You can monitor the status of the Ceph OSDs on the landing page of the Red Hat Ceph Storage Dashboard. You can also view the details such as host, status, device class, number of placement groups (PGs), size flags, usage, and read or write operations time in the OSDs tab. The following are the different tabs on the OSDs page: Devices - This tab has details such as Device ID, state of health, life expectancy, device name, and the daemons on the hosts. Attributes (OSD map) - This tab shows the cluster address, details of heartbeat, OSD state, and the other OSD attributes. Metadata - This tab shows the details of the OSD object store, the devices, the operating system, and the kernel details. Device health - For SMART-enabled devices, you can get the individual health status and SMART data. Performance counter - This tab gives details of the bytes written on the devices. Performance Details - This tab has details such as OSDs deployed, CPU utilization, RAM usage, network load, network drop rate, and OSD disk performance statistics. View performance information through the embedded Grafana Dashboard. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. Hosts are added to the storage cluster. All the services including OSDs are deployed on the storage cluster. Procedure From the dashboard navigation, go to Cluster->OSDs . To view the details of a specific OSD, from the OSDs List tab, expand an OSD row. Figure 6.6. Monitoring OSDs of the Ceph cluster You can view additional details such as Devices , Attributes (OSD map) , Metadata , Device Health , Performance counter , and Performance Details , by clicking on the respective tabs. Additional Resources See the Introduction to the Ceph Orchestrator in the Red Hat Ceph Storage Operations Guide for more details. 6.7. Monitoring HAProxy on the dashboard The Ceph Object Gateway allows you to assign many instances of the object gateway to a single zone, so that you can scale out as load increases. Since each object gateway instance has its own IP address, you can use HAProxy to balance the load across Ceph Object Gateway servers. You can monitor the following HAProxy metrics on the dashboard: Total responses by HTTP code. Total requests/responses. Total number of connections. Current total number of incoming / outgoing bytes. You can also get the Grafana details by running the ceph dashboard get-grafana-api-url command. Prerequisites A running Red Hat Ceph Storage cluster. Admin level access on the storage dashboard. An existing Ceph Object Gateway service, without SSL. 
If you want SSL service, the certificate should be configured on the ingress service, not the Ceph Object Gateway service. Ingress service deployed using the Ceph Orchestrator. Monitoring stack components are created on the dashboard. Procedure Log in to the Grafana URL and select the RGW_Overview panel: Syntax Example Verify the HAProxy metrics on the Grafana URL. From the Ceph dashboard navigation, go to Object->Gateways . From the Overall Performance tab, verify the Ceph Object Gateway HAProxy metrics. Figure 6.7. HAProxy metrics Additional Resources See the Configuring high availability for the Ceph Object Gateway in the Red Hat Ceph Storage Object Gateway Guide for more details. 6.8. Viewing the CRUSH map of the Ceph cluster on the dashboard You can view the CRUSH map that contains a list of OSDs and related information on the Red Hat Ceph Storage dashboard. Together, the CRUSH map and CRUSH algorithm determine how and where data is stored. The dashboard allows you to view different aspects of the CRUSH map, including OSD hosts, OSD daemons, ID numbers, device class, and more. The CRUSH map allows you to determine which host a specific OSD ID is running on. This is helpful if there is an issue with an OSD. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. OSD daemons deployed on the storage cluster. Procedure From the dashboard navigation, go to Cluster->CRUSH map . To view the details of the specific OSD, click its row. Figure 6.8. CRUSH Map detail view Additional Resources For more information about the CRUSH map, see CRUSH admin overview in the Red Hat Ceph Storage Storage strategies guide . 6.9. Filtering logs of the Ceph cluster on the dashboard You can view and filter logs of the Red Hat Ceph Storage cluster on the dashboard based on several criteria. The criteria include Priority , Keyword , Date , and Time range . You can download the logs to the system or copy the logs to the clipboard as well for further analysis. Prerequisites A running Red Hat Ceph Storage cluster. The Dashboard is installed. Log entries have been generated since the Ceph Monitor was last started. Note The Dashboard logging feature only displays the thirty latest high level events. The events are stored in memory by the Ceph Monitor. The entries disappear after restarting the Monitor. If you need to review detailed or older logs, refer to the file based logs. Procedure From the dashboard navigation, go to Observability->Logs . From the Cluster Logs tab, view cluster logs. Figure 6.9. Cluster logs Use the Priority filter to filter by Debug , Info , Warning , Error , or All . Use the Keyword field to enter text to search by keyword. Use the Date picker to filter by a specific date. Use the Time range fields to enter a range, using the HH:MM - HH:MM format. Hours must be entered using numbers 0 to 23 . To combine filters, set two or more filters. To save the logs, use the Download or Copy to Clipboard buttons. Additional Resources See the Configuring Logging chapter in the Red Hat Ceph Storage Troubleshooting Guide for more information. See the Understanding Ceph Logs section in the Red Hat Ceph Storage Troubleshooting Guide for more information. 6.10. Viewing centralized logs of the Ceph cluster on the dashboard Ceph Dashboard allows you to view logs from all the clients in a centralized space in the Red Hat Ceph Storage cluster for efficient monitoring.
This is achieved through using Loki, a log aggregation system designed to store and query logs, and Promtail, an agent that ships the contents of local logs to a private Grafana Loki instance. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. Grafana is configured and logged into on the cluster. Procedure From the dashboard navigation, go to Administration->Services . From Services , click Create . In the Create Service form, from the Type list, select loki . Fill in the remaining details, and click Create Service . Repeat the step to create the Promtail service. Select promtail from the Type list. The loki and promtail services are displayed in the Services table, after being created successfully. Figure 6.10. Creating Loki and Promtail services Note By default, Promtail service is deployed on all the running hosts. Enable logging to files. Go to Administration->Configuration . Select log_to_file and click Edit . In the Edit log_to_file form, set the global value to true . Figure 6.11. Configuring log files Click Update . The Updated config option log_to_file notification displays and you are returned to the Configuration table. Repeat these steps for mon_cluster_log_to_file , setting the global value to true . Note Both log_to_file and mon_cluster_log_to_file files need to be configured. View the centralized logs. Go to Observability->Logs and switch to the Daemon Logs tab. Use Log browser to select files and click Show logs to view the logs from that file. Figure 6.12. View centralized logs Note If you do not see the logs, you need to sign in to Grafana and reload the page. 6.11. Monitoring pools of the Ceph cluster on the dashboard You can view the details, performance details, configuration, and overall performance of the pools in a cluster on the Red Hat Ceph Storage Dashboard. A pool plays a critical role in how the Ceph storage cluster distributes and stores data. If you have deployed a cluster without creating a pool, Ceph uses the default pools for storing data. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. Pools are created Procedure From the dashboard navigation, go to Cluster->Pools . View the Pools List tab, which gives the details of Data protection and the application for which the pool is enabled. Hover the mouse over Usage , Read bytes , and Write bytes for the required details. Expand the pool row for detailed information about a specific pool. Figure 6.13. Monitoring pools For general information, go to the Overall Performance tab. Additional Resources For more information about pools, see Ceph pools in the Red Hat Ceph Storage Architecture guide . See the Creating pools on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard Guide for more details. 6.12. Monitoring Ceph File Systems on the dashboard You can use the Red Hat Ceph Storage Dashboard to monitor Ceph File Systems (CephFS) and related components. For each File System listed, the following tabs are available: Details View the metadata servers (MDS) and their rank plus any standby daemons, pools and their usage,and performance counters. Directories View list of directories, their quotas and snapshots. Select a directory to set and unset maximum file and size quotas and to create and delete snapshots for the specific directory. Subvolumes Create, edit, and view subvolume information. These can be filtered by subvolume groups. Subvolume groups Create, edit, and view subvolume group information. 
Snapshots Create, clone, and view snapshot information. These can be filtered by subvolume groups and subvolumes. Snapshot schedules Enable, create, edit, and delete snapshot schedules. Clients View and evict Ceph File System client information. Performance Details View the performance of the file systems through the embedded Grafana Dashboard. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. MDS service is deployed on at least one of the hosts. Ceph File System is installed. Procedure From the dashboard navigation, go to File->File Systems . To view more information about an individual file system, expand the file system row. Additional Resources For more information, see the File System Guide . 6.13. Monitoring Ceph object gateway daemons on the dashboard You can use the Red Hat Ceph Storage Dashboard to monitor Ceph object gateway daemons. You can view the details, performance counters, and performance details of the Ceph object gateway daemons. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. At least one Ceph object gateway daemon configured in the storage cluster. Procedure From the dashboard navigation, go to Object->Gateways . View information about individual gateways, from the Gateways List tab. To view more information about an individual gateway, expand the gateway row. If you have configured multiple Ceph Object Gateway daemons, click on Sync Performance tab and view the multi-site performance counters. Additional Resources For more information, see the Red Hat Ceph Storage Ceph object gateway Guide . 6.14. Monitoring Block Device images on the Ceph dashboard You can use the Red Hat Ceph Storage Dashboard to monitor and manage Block device images. You can view the details, snapshots, configuration details, and performance details of the images. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. A pool with the rbd application enabled is created. An image is created. Procedure From the dashboard navigation, go to Block->Images . Expand the image row to see detailed information. Figure 6.14. Monitoring Block device images Additional Resources See the Creating images on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard Guide for more details. .
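As mentioned in the HAProxy monitoring section, the Grafana endpoint that backs the embedded performance panels can also be retrieved from the command line. The following is a minimal sketch; run it on a node with the Ceph CLI and an appropriate keyring available:
ceph dashboard get-grafana-api-url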
[ "https:// DASHBOARD_URL :3000", "https://dashboard_url:3000" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/dashboard_guide/monitor-the-cluster-on-the-ceph-dashboard
Chapter 3. Configuring Red Hat Single Sign-On
Chapter 3. Configuring Red Hat Single Sign-On Red Hat Single Sign-On (RH-SSO) supports multi-tenancy, and uses realms to allow for separation between tenants. As a result, RH-SSO operations always occur within the context of a realm. You can either create the realm manually, or with the keycloak-httpd-client-install tool if you have administrative privileges on the RH-SSO server. Prerequisites You must have a fully installed RH-SSO server. For more information on installing RH-SSO, see Server installation and configuration guide . You need definitions for the following variables as they appear below: <_RH_RHSSO_URL_> The Red Hat Single Sign-On URL <_FED_RHSSO_REALM_> Identifies the RH-SSO realm in use 3.1. Configuring the RH-SSO realm When the Red Hat Single Sign-On (RH-SSO) realm is available, use the RH-SSO web console to configure the realm for user federation against IdM: Procedure From the drop-down list in the upper left corner, select your RH-SSO realm. From the Configure panel, select User Federation . From the Add provider drop-down list in the User Federation panel, select ldap . Provide values for the following parameters. Substitute all site-specific values with values relevant to your environment. Property Value Console Display Name Red Hat IDM Edit Mode READ_ONLY Sync Registrations Off Vendor Red Hat Directory Server Username LDAP attribute uid RDN LDAP attribute uid UUID LDAP attribute ipaUniqueID User Object Classes inetOrgPerson, organizationalPerson Connection URL LDAPS://<_FED_IPA_HOST_> Users DN cn=users,cn=accounts,<_FED_IPA_BASE_DN_> Authentication Type simple Bind DN uid=rhsso,cn=sysaccounts,cn=etc,<_FED_IPA_BASE_DN_> Bind Credential <_FED_IPA_RHSSO_SERVICE_PASSWD_> Use the Test connection and Test authentication buttons to ensure that user federation is working. Click Save to save the new user federation provider. Click the Mappers tab at the top of the Red Hat IdM user federation page you created. Create a mapper to retrieve the user group information. A user's group membership is returned in the SAML assertion. Use group membership later to provide authorization in OpenStack. Click Create in the Mappers page. On the Add user federation mapper page, select group-ldap-mapper from the Mapper Type drop-down list, and name it Group Mapper . Provide values for the following parameters. Substitute all site-specific values with values relevant to your environment. Property Value LDAP Groups DN cn=groups,cn=accounts,<_FED_IPA_BASE_DN_> Group Name LDAP Attribute cn Group Object Classes groupOfNames Membership LDAP Attribute member Membership Attribute Type DN Mode READ_ONLY User Groups Retrieve Strategy GET_GROUPS_FROM_USER_MEMBEROF_ATTRIBUTE Click Save . 3.2. Adding user attributes using SAML assertion Security Assertion Markup Language (SAML) is an open standard that allows the communication of user attributes and authorization credentials between the identity provider (IdP) and a service provider (SP). You can configure Red Hat Single Sign-On (RH-SSO) to return the attributes that you require in the assertion. When the OpenStack Identity service receives the SAML assertion, it maps those attributes onto OpenStack users. The process of mapping IdP attributes into Identity Service data is called Federated Mapping. For more information, see Section 4.20, "Create the Mapping File and Upload to Keystone" .
Use the following process to add attributes to SAML: Procedure In the RH-SSO administration web console, select <_FED_RHSSO_REALM_> from the drop-down list in the upper left corner. Select Clients from the Configure panel. Select the service provider client that keycloak-httpd-client-install configured. You can identify the client with the SAML EntityId . Select the mappers tab from the horizontal list of tabs. In the Mappers panel, select Create or Add Builtin to add a protocol mapper to the client. You can add additional attributes, but you only need the list of groups for which the user is a member. Group membership is how you authorize the user. 3.3. Adding group information to the SAML assertion Procedure Click the Create button in the Mappers Panel. In the Create Protocol Mapper panel, select Group list from the Mapper type drop-down list. Enter Group List as a name in the Name field. Enter groups as the name of the SAML attribute in the Group attribute Name field. Note This is the name of the attribute as it appears in the SAML assertion. When the keystone mapper searches for names in the Remote section of the mapping declaration, it searches for the SAML attribute name. When you add an attribute in RH-SSO to be passed in the assertion, specify the SAML attribute name. You define the name in the RH-SSO protocol mapper. In the SAML Attribute NameFormat parameter, select Basic . In the Single Group Attribute toggle box, select On . Click Save . Note When you run the keycloak-httpd-client-install tool, the process adds a group mapper.
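To illustrate how the groups SAML attribute is consumed on the OpenStack side, the following is a hypothetical sketch of a keystone federation mapping rule. The remote attribute name (MELLON_groups), the group name (openstack-users), and the local group (federated_users) are assumptions for illustration only and depend on your mod_auth_mellon configuration; the authoritative mapping file procedure is in the section referenced above:
[
    {
        "local": [
            {"user": {"name": "{0}"}},
            {"group": {"name": "federated_users", "domain": {"name": "Default"}}}
        ],
        "remote": [
            {"type": "MELLON_NAME_ID"},
            {"type": "MELLON_groups", "any_one_of": ["openstack-users"]}
        ]
    }
]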
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/federate_with_identity_service/configuring_red_hat_single_sign_on
2.3. Install OpenJDK on Red Hat Enterprise Linux
2.3. Install OpenJDK on Red Hat Enterprise Linux Procedure 2.1. Install OpenJDK on Red Hat Enterprise Linux Subscribe to the Base Channel Obtain the OpenJDK from the RHN base channel. Your installation of Red Hat Enterprise Linux is subscribed to this channel by default. Install the Package Use the yum utility to install OpenJDK: Verify that OpenJDK is the System Default Ensure that the correct JDK is set as the system default as follows: Log in as a user with root privileges and run the alternatives command: Depending on the OpenJDK version, select /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/java or /usr/lib/jvm/jre-1.7.0-openjdk.x86_64/bin/java . Use the following command to set javac : Depending on the OpenJDK version used, select /usr/lib/jvm/java-1.6.0-openjdk/bin/javac or /usr/lib/jvm/java-1.7.0-openjdk/bin/javac .
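As an optional check that is not part of the procedure above, you can confirm which JDK the system now resolves by default; the exact version string depends on the OpenJDK package that you installed:
java -version
javac -version
Both commands should report the OpenJDK version that you selected with the alternatives utility.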
[ "sudo yum install java-1.6.0-openjdk-devel", "/usr/sbin/alternatives --config java", "/usr/sbin/alternatives --config javac" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/getting_started_guide/install_openjdk_on_red_hat_enterprise_linux1
Chapter 1. Setting up an Argo CD instance
Chapter 1. Setting up an Argo CD instance By default, Red Hat OpenShift GitOps installs an instance of Argo CD in the openshift-gitops namespace with additional permissions for managing certain cluster-scoped resources. To manage cluster configurations or deploy applications, you can install and deploy a new Argo CD instance. By default, any new instance has permissions to manage resources only in the namespace where it is deployed. 1.1. Installing an Argo CD instance To manage cluster configurations or deploy applications, you can install and deploy a new Argo CD instance. Prerequisites You have access to the cluster with cluster-admin privileges. You have installed the Red Hat OpenShift GitOps Operator in your cluster. Procedure Log in to the OpenShift Container Platform web console. In the Administrator perspective of the web console, click Operators Installed Operators . Create or select the project where you want to install the Argo CD instance from the Project drop-down menu. Select OpenShift GitOps Operator from the installed operators list and click the Argo CD tab. Click Create ArgoCD to configure the parameters: Enter the Name of the instance. By default, the Name is set to example . Create an external OpenShift route to access the Argo CD server. Click Server Route and check Enabled . Optional: You can also configure YAML for creating an external OpenShift route by adding the following configuration: Example Argo CD with external OpenShift route created apiVersion: argoproj.io/v1beta1 kind: ArgoCD metadata: name: example namespace: openshift-gitops spec: server: route: enabled: true Go to Networking Routes <instance_name>-server in the project where the Argo CD instance is installed. On the Details tab, click the Argo CD web UI link under Route details Location . The Argo CD web UI opens in a separate browser window. Optional: To log in with your OpenShift Container Platform credentials, ensure you are a user of the cluster-admins group and then select the LOG IN VIA OPENSHIFT option in the Argo CD user interface. Note To be a user of the cluster-admins group, use the oc adm groups new cluster-admins <user> command, where <user> is the user that you want to add to the cluster-admins group. Obtain the password for the Argo CD instance: Use the navigation panel to go to the Workloads Secrets page. Use the Project drop-down list and select the namespace where the Argo CD instance is created. Select the <argo_CD_instance_name>-cluster instance to display the password. On the Details tab, copy the password under Data admin.password . Use admin as the Username and the copied password as the Password to log in to the Argo CD UI in the new window. 1.2. Enabling replicas for Argo CD server and repo server Argo CD-server and Argo CD-repo-server workloads are stateless. To better distribute your workloads among pods, you can increase the number of Argo CD-server and Argo CD-repo-server replicas. However, if a horizontal autoscaler is enabled on the Argo CD-server, it overrides the number of replicas you set. Procedure Set the replicas parameters for the repo and server spec to the number of replicas you want to run: Example Argo CD custom resource apiVersion: argoproj.io/v1beta1 kind: ArgoCD metadata: name: example-argocd labels: example: repo spec: repo: replicas: <number_of_replicas> server: replicas: <number_of_replicas> route: enabled: true path: / tls: insecureEdgeTerminationPolicy: Redirect termination: passthrough wildcardPolicy: None 1.3.
Deploying resources to a different namespace To allow Argo CD to manage resources in other namespaces apart from where it is installed, configure the target namespace with an argocd.argoproj.io/managed-by label. Procedure Configure the namespace: USD oc label namespace <namespace> \ argocd.argoproj.io/managed-by=<namespace> 1 1 The namespace where Argo CD is installed. 1.4. Customizing the Argo CD console link In a multi-tenant cluster, users might have to deal with multiple instances of Argo CD. For example, after installing an Argo CD instance in your namespace, you might find a different Argo CD instance attached to the Argo CD console link, instead of your own Argo CD instance, in the Console Application Launcher. You can customize the Argo CD console link by setting the DISABLE_DEFAULT_ARGOCD_CONSOLELINK environment variable: When you set DISABLE_DEFAULT_ARGOCD_CONSOLELINK to true , the Argo CD console link is permanently deleted. When you set DISABLE_DEFAULT_ARGOCD_CONSOLELINK to false or use the default value, the Argo CD console link is temporarily deleted and visible again when the Argo CD route is reconciled. Prerequisites You have logged in to the OpenShift Container Platform cluster as an administrator. You have installed the Red Hat OpenShift GitOps Operator. Procedure In the Administrator perspective, navigate to Administration CustomResourceDefinitions . Find the Subscription CRD and click to open it. Select the Instances tab and click the openshift-gitops-operator subscription. Select the YAML tab and make your customization: To enable or disable the Argo CD console link, edit the value of DISABLE_DEFAULT_ARGOCD_CONSOLELINK as needed: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-gitops-operator spec: config: env: - name: DISABLE_DEFAULT_ARGOCD_CONSOLELINK value: 'true'
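As an optional command-line complement to the console steps earlier in this chapter, the following sketch assumes an instance named example installed in the openshift-gitops namespace; adjust both values for your environment:
oc get pods -n openshift-gitops
oc extract secret/example-cluster -n openshift-gitops --keys=admin.password --to=-
The first command should list the example-server and example-repo-server pods with the replica counts you configured, and the second prints the admin.password value that you use to log in to the Argo CD UI.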
[ "apiVersion: argoproj.io/v1beta1 kind: ArgoCD metadata: name: example namespace: openshift-gitops spec: server: route: enabled: true", "apiVersion: argoproj.io/v1beta1 kind: ArgoCD metadata: name: example-argocd labels: example: repo spec: repo: replicas: <number_of_replicas> server: replicas: <number_of_replicas> route: enabled: true path: / tls: insecureEdgeTerminationPolicy: Redirect termination: passthrough wildcardPolicy: None", "oc label namespace <namespace> argocd.argoproj.io/managed-by=<namespace> 1", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-gitops-operator spec: config: env: - name: DISABLE_DEFAULT_ARGOCD_CONSOLELINK value: 'true'" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_gitops/1.11/html/argo_cd_instance/setting-up-argocd-instance
Chapter 10. Removing Windows nodes
Chapter 10. Removing Windows nodes You can remove a Windows node by deleting its host Windows machine. 10.1. Deleting a specific machine You can delete a specific machine. Important Do not delete a control plane machine unless your cluster uses a control plane machine set. Prerequisites Install an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure View the machines that are in the cluster by running the following command: USD oc get machine -n openshift-machine-api The command output contains a list of machines in the <clusterid>-<role>-<cloud_region> format. Identify the machine that you want to delete. Delete the machine by running the following command: USD oc delete machine <machine> -n openshift-machine-api Important By default, the machine controller tries to drain the node that is backed by the machine until it succeeds. In some situations, such as with a misconfigured pod disruption budget, the drain operation might not be able to succeed. If the drain operation fails, the machine controller cannot proceed with removing the machine. You can skip draining the node by adding the machine.openshift.io/exclude-node-draining annotation to the specific machine, as shown in the sketch below. If the machine that you delete belongs to a machine set, a new machine is immediately created to satisfy the specified number of replicas.
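If a stuck drain is blocking deletion, the following command is one way to apply the annotation described above; it is a sketch that assumes the machine name from the oc get machine output, and the annotation value itself is not significant:
oc annotate machine <machine> -n openshift-machine-api machine.openshift.io/exclude-node-draining=true
After the annotation is set, the machine controller skips the drain step and proceeds with removing the machine.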
[ "oc get machine -n openshift-machine-api", "oc delete machine <machine> -n openshift-machine-api" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/windows_container_support_for_openshift/removing-windows-nodes
A.4. tuned
A.4. tuned Tuned is a tuning daemon that can adapt the operating system to perform better under certain workloads by setting a tuning profile. It can also be configured to react to changes in CPU and network use and adjust settings to improve performance in active devices and reduce power consumption in inactive devices. To configure dynamic tuning behavior, edit the dynamic_tuning parameter in the /etc/tuned/tuned-main.conf file. Tuned then periodically analyzes system statistics and uses them to update your system tuning settings. You can configure the time interval in seconds between these updates with the update_interval parameter. For further details about tuned, see the man page:
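For illustration only, a /etc/tuned/tuned-main.conf fragment with dynamic tuning enabled might look like the following; the interval is an example value, not a recommendation:
dynamic_tuning = 1
update_interval = 10
With these settings, tuned re-evaluates the monitored statistics every 10 seconds and applies the dynamic tuning decisions of the active profile.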
[ "man tuned" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/performance_tuning_guide/sect-Red_Hat_Enterprise_Linux-Performance_Tuning_Guide-Tool_Reference-tuned
Monitoring and managing system status and performance
Monitoring and managing system status and performance Red Hat Enterprise Linux 9 Optimizing system throughput, latency, and power consumption Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/monitoring_and_managing_system_status_and_performance/index
Kafka configuration tuning
Kafka configuration tuning Red Hat Streams for Apache Kafka 2.9 Use Kafka configuration properties to optimize the streaming of data
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/kafka_configuration_tuning/index
Chapter 6. HostFirmwareSettings [metal3.io/v1alpha1]
Chapter 6. HostFirmwareSettings [metal3.io/v1alpha1] Description HostFirmwareSettings is the Schema for the hostfirmwaresettings API Type object 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object HostFirmwareSettingsSpec defines the desired state of HostFirmwareSettings status object HostFirmwareSettingsStatus defines the observed state of HostFirmwareSettings 6.1.1. .spec Description HostFirmwareSettingsSpec defines the desired state of HostFirmwareSettings Type object Required settings Property Type Description settings integer-or-string Settings are the desired firmware settings stored as name/value pairs. 6.1.2. .status Description HostFirmwareSettingsStatus defines the observed state of HostFirmwareSettings Type object Required settings Property Type Description conditions array Track whether settings stored in the spec are valid based on the schema conditions[] object Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } lastUpdated string Time that the status was last updated schema object FirmwareSchema is a reference to the Schema used to describe each FirmwareSetting. By default, this will be a Schema in the same Namespace as the settings but it can be overwritten in the Spec settings object (string) Settings are the firmware settings stored as name/value pairs 6.1.3. .status.conditions Description Track whether settings stored in the spec are valid based on the schema Type array 6.1.4. .status.conditions[] Description Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. 
// Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) 6.1.5. .status.schema Description FirmwareSchema is a reference to the Schema used to describe each FirmwareSetting. By default, this will be a Schema in the same Namespace as the settings but it can be overwritten in the Spec Type object Required name namespace Property Type Description name string name is the reference to the schema. namespace string namespace is the namespace of the where the schema is stored. 6.2. API endpoints The following API endpoints are available: /apis/metal3.io/v1alpha1/hostfirmwaresettings GET : list objects of kind HostFirmwareSettings /apis/metal3.io/v1alpha1/namespaces/{namespace}/hostfirmwaresettings DELETE : delete collection of HostFirmwareSettings GET : list objects of kind HostFirmwareSettings POST : create HostFirmwareSettings /apis/metal3.io/v1alpha1/namespaces/{namespace}/hostfirmwaresettings/{name} DELETE : delete HostFirmwareSettings GET : read the specified HostFirmwareSettings PATCH : partially update the specified HostFirmwareSettings PUT : replace the specified HostFirmwareSettings /apis/metal3.io/v1alpha1/namespaces/{namespace}/hostfirmwaresettings/{name}/status GET : read status of the specified HostFirmwareSettings PATCH : partially update status of the specified HostFirmwareSettings PUT : replace status of the specified HostFirmwareSettings 6.2.1. /apis/metal3.io/v1alpha1/hostfirmwaresettings Table 6.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. 
It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind HostFirmwareSettings Table 6.2. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareSettingsList schema 401 - Unauthorized Empty 6.2.2. /apis/metal3.io/v1alpha1/namespaces/{namespace}/hostfirmwaresettings Table 6.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 6.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of HostFirmwareSettings Table 6.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. 
Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 6.6. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind HostFirmwareSettings Table 6.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. 
fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 6.8. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareSettingsList schema 401 - Unauthorized Empty HTTP method POST Description create HostFirmwareSettings Table 6.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.10. Body parameters Parameter Type Description body HostFirmwareSettings schema Table 6.11. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareSettings schema 201 - Created HostFirmwareSettings schema 202 - Accepted HostFirmwareSettings schema 401 - Unauthorized Empty 6.2.3. /apis/metal3.io/v1alpha1/namespaces/{namespace}/hostfirmwaresettings/{name} Table 6.12. Global path parameters Parameter Type Description name string name of the HostFirmwareSettings namespace string object name and auth scope, such as for teams and projects Table 6.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete HostFirmwareSettings Table 6.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 6.15. Body parameters Parameter Type Description body DeleteOptions schema Table 6.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified HostFirmwareSettings Table 6.17. 
Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 6.18. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareSettings schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified HostFirmwareSettings Table 6.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.20. Body parameters Parameter Type Description body Patch schema Table 6.21. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareSettings schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified HostFirmwareSettings Table 6.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. 
This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.23. Body parameters Parameter Type Description body HostFirmwareSettings schema Table 6.24. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareSettings schema 201 - Created HostFirmwareSettings schema 401 - Unauthorized Empty 6.2.4. /apis/metal3.io/v1alpha1/namespaces/{namespace}/hostfirmwaresettings/{name}/status Table 6.25. Global path parameters Parameter Type Description name string name of the HostFirmwareSettings namespace string object name and auth scope, such as for teams and projects Table 6.26. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified HostFirmwareSettings Table 6.27. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 6.28. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareSettings schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified HostFirmwareSettings Table 6.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. 
- Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.30. Body parameters Parameter Type Description body Patch schema Table 6.31. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareSettings schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified HostFirmwareSettings Table 6.32. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.33. Body parameters Parameter Type Description body HostFirmwareSettings schema Table 6.34. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareSettings schema 201 - Created HostFirmwareSettings schema 401 - Unauthorized Empty
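As a quick orientation, and separate from the API reference above, the following commands are a sketch that assumes the hosts are registered in the openshift-machine-api namespace:
oc get hostfirmwaresettings -n openshift-machine-api
oc get hostfirmwaresettings <host_name> -n openshift-machine-api -o yaml
To request a change, you could edit the resource (for example with oc edit hostfirmwaresettings <host_name> -n openshift-machine-api) and set a spec fragment such as the following, where ProcTurboMode is only an illustrative vendor-specific setting; check .status.schema and .status.settings for the names and values that your hardware actually supports:
spec:
  settings:
    ProcTurboMode: Disabled
The spec.settings values are validated against the referenced FirmwareSchema, and the conditions in .status report whether the requested settings are valid.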
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/provisioning_apis/hostfirmwaresettings-metal3-io-v1alpha1
RHEL for SAP Subscriptions and Repositories
RHEL for SAP Subscriptions and Repositories Red Hat Enterprise Linux for SAP Solutions 9 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/rhel_for_sap_subscriptions_and_repositories/index
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Tell us how we can make it better. Providing documentation feedback in Jira Use the Create Issue form to provide feedback on the documentation. The Jira issue will be created in the Red Hat OpenStack Platform Jira project, where you can track the progress of your feedback. Ensure that you are logged in to Jira. If you do not have a Jira account, create an account to submit feedback. Click the following link to open the Create Issue page: Create Issue Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create .
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/integrate_openstack_identity_with_external_user_management_services/proc_providing-feedback-on-red-hat-documentation
function::pp
function::pp Name function::pp - Returns the active probe point Synopsis Arguments None Description This function returns the fully-resolved probe point associated with a currently running probe handler, including alias and wild-card expansion effects. Context: The current probe point.
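As a usage sketch, assuming SystemTap is installed and kernel debuginfo is available for the probed function, the following one-liner prints the resolved probe point and exits:
stap -e 'probe kernel.function("vfs_read") { printf("%s\n", pp()); exit() }'
The output is the fully-resolved form of the probe point, for example kernel.function("vfs_read@fs/read_write.c:450"), rather than the wildcard or alias that you wrote in the script.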
[ "pp:string()" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-pp
Chapter 10. Configuring custom SSL/TLS certificates for Red Hat Quay on OpenShift Container Platform
Chapter 10. Configuring custom SSL/TLS certificates for Red Hat Quay on OpenShift Container Platform When Red Hat Quay is deployed on OpenShift Container Platform, the tls component of the QuayRegistry custom resource definition (CRD) is set to managed by default. As a result, OpenShift Container Platform's Certificate Authority is used to create HTTPS endpoints and to rotate SSL/TLS certificates. You can configure custom SSL/TLS certificates before or after the initial deployment of Red Hat Quay on OpenShift Container Platform. This process involves creating or updating the configBundleSecret resource within the QuayRegistry YAML file to integrate your custom certificates and setting the tls component to unmanaged . Important When configuring custom SSL/TLS certificates for Red Hat Quay, administrators are responsible for certificate rotation. The following procedures enable you to apply custom SSL/TLS certificates to ensure secure communication and meet specific security requirements for your Red Hat Quay on OpenShift Container Platform deployment. These steps assume that you have already created a Certificate Authority (CA) bundle, an ssl.key , and an ssl.cert . The procedure then shows you how to integrate those files into your Red Hat Quay on OpenShift Container Platform deployment, which ensures that your registry operates with the specified security settings and conforms to your organization's SSL/TLS policies. Note The following procedure is used for securing Red Hat Quay with an HTTPS certificate. Note that this differs from managing Certificate Authority Trust Bundles. CA Trust Bundles are used by system processes within the Quay container to verify certificates against trusted CAs, and ensure that services like LDAP, storage backend, and OIDC connections are trusted. If you are adding the certificates to an existing deployment, you must include the existing config.yaml file in the new config bundle secret, even if you are not making any configuration changes. 10.1. Creating a Certificate Authority Use the following procedure to set up your own CA and use it to issue a server certificate for your domain. This allows you to secure communications with SSL/TLS using your own certificates.
Procedure Generate the root CA key by entering the following command: USD openssl genrsa -out rootCA.key 2048 Generate the root CA certificate by entering the following command: USD openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.pem Enter the information that will be incorporated into your certificate request, including the server hostname, for example: Country Name (2 letter code) [XX]:IE State or Province Name (full name) []:GALWAY Locality Name (eg, city) [Default City]:GALWAY Organization Name (eg, company) [Default Company Ltd]:QUAY Organizational Unit Name (eg, section) []:DOCS Common Name (eg, your name or your server's hostname) []:quay-server.example.com Generate the server key by entering the following command: USD openssl genrsa -out ssl.key 2048 Generate a signing request by entering the following command: USD openssl req -new -key ssl.key -out ssl.csr Enter the information that will be incorporated into your certificate request, including the server hostname, for example: Country Name (2 letter code) [XX]:IE State or Province Name (full name) []:GALWAY Locality Name (eg, city) [Default City]:GALWAY Organization Name (eg, company) [Default Company Ltd]:QUAY Organizational Unit Name (eg, section) []:DOCS Common Name (eg, your name or your server's hostname) []:quay-server.example.com Email Address []: Create a configuration file openssl.cnf , specifying the server hostname, for example: Example openssl.cnf file [req] req_extensions = v3_req distinguished_name = req_distinguished_name [req_distinguished_name] [ v3_req ] basicConstraints = CA:FALSE keyUsage = nonRepudiation, digitalSignature, keyEncipherment subjectAltName = @alt_names [alt_names] DNS.1 = <quay-server.example.com> IP.1 = 192.168.1.112 Use the configuration file to generate the certificate ssl.cert : USD openssl x509 -req -in ssl.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out ssl.cert -days 356 -extensions v3_req -extfile openssl.cnf Confirm your created certificates and files by entering the following command: USD ls /path/to/certificates Example output rootCA.key ssl-bundle.cert ssl.key custom-ssl-config-bundle-secret.yaml rootCA.pem ssl.cert openssl.cnf rootCA.srl ssl.csr 10.2. Creating a custom SSL/TLS configBundleSecret resource After creating your custom SSL/TLS certificates, you can create a custom configBundleSecret resource for Red Hat Quay on OpenShift Container Platform, which allows you to upload ssl.cert and ssl.key files. Prerequisites You have base64 decoded the original config bundle into a config.yaml file. For more information, see Downloading the existing configuration . You have generated custom SSL certificates and keys. Procedure Create a new YAML file, for example, custom-ssl-config-bundle-secret.yaml : USD touch custom-ssl-config-bundle-secret.yaml Create the custom-ssl-config-bundle-secret resource. Create the resource by entering the following command: USD oc -n <namespace> create secret generic custom-ssl-config-bundle-secret \ --from-file=config.yaml=</path/to/config.yaml> \ 1 --from-file=ssl.cert=</path/to/ssl.cert> \ 2 --from-file=extra_ca_cert_<name-of-certificate>.crt=ca-certificate-bundle.crt \ 3 --from-file=ssl.key=</path/to/ssl.key> \ 4 --dry-run=client -o yaml > custom-ssl-config-bundle-secret.yaml 1 Where <config.yaml> is your base64 decoded config.yaml file. 2 Where <ssl.cert> is your ssl.cert file. 3 Optional. 
The --from-file=extra_ca_cert_<name-of-certificate>.crt=ca-certificate-bundle.crt field allows Red Hat Quay to recognize custom Certificate Authority (CA) files. If you are using LDAP, OIDC, or another service that uses custom CAs, you must add them via the extra_ca_cert path. For more information, see "Adding additional Certificate Authorities to Red Hat Quay on OpenShift Container Platform." 4 Where <ssl.key> is your ssl.key file. Optional. You can check the content of the custom-ssl-config-bundle-secret.yaml file by entering the following command: USD cat custom-ssl-config-bundle-secret.yaml Example output apiVersion: v1 data: config.yaml: QUxMT1dfUFVMTFNfV0lUSE9VVF9TVFJJQ1RfTE9HR0lORzogZmFsc2UKQVVUSEVOVElDQVRJT05fVFlQRTogRGF0YWJhc2UKREVGQVVMVF9UQUdfRVhQSVJBVElPTjogMncKRElTVFJJQlVURURfU1R... ssl.cert: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVYakNDQTBhZ0F3SUJBZ0lVTUFBRk1YVWlWVHNoMGxNTWI3U1l0eFV5eTJjd0RRWUpLb1pJaHZjTkFRRUwKQlFBd2dZZ3hDekFKQmdOVkJBWVR... extra_ca_cert_<name-of-certificate>:LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVYakNDQTBhZ0F3SUJBZ0lVTUFBRk1YVWlWVHNoMGxNTWI3U1l0eFV5eTJjd0RRWUpLb1pJaHZjTkFRRUwKQlFBd2dZZ3hDe... ssl.key: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2UUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktjd2dnU2pBZ0VBQW9JQkFRQ2c0VWxZOVV1SVJPY1oKcFhpZk9MVEdqaS9neUxQMlpiMXQ... kind: Secret metadata: creationTimestamp: null name: custom-ssl-config-bundle-secret namespace: <namespace> Create the configBundleSecret resource by entering the following command: USD oc create -n <namespace> -f custom-ssl-config-bundle-secret.yaml Example output secret/custom-ssl-config-bundle-secret created Update the QuayRegistry YAML file to reference the custom-ssl-config-bundle-secret object by entering the following command: USD oc patch quayregistry <registry_name> -n <namespace> --type=merge -p '{"spec":{"configBundleSecret":"custom-ssl-config-bundle-secret"}}' Example output quayregistry.quay.redhat.com/example-registry patched Set the tls component of the QuayRegistry YAML to false by entering the following command: USD oc patch quayregistry <registry_name> -n <namespace> --type=merge -p '{"spec":{"components":[{"kind":"tls","managed":false}]}}' Example output quayregistry.quay.redhat.com/example-registry patched Ensure that your QuayRegistry YAML file has been updated to use the custom SSL configBundleSecret resource, and that your tls resource is set to false by entering the following command: USD oc get quayregistry <registry_name> -n <namespace> -o yaml Example output # ... configBundleSecret: custom-ssl-config-bundle-secret # ... spec: components: - kind: tls managed: false # ... Verification Confirm a TLS connection to the server and port by entering the following command: USD openssl s_client -connect <quay-server.example.com>:443 Example output # ... SSL-Session: Protocol : TLSv1.3 Cipher : TLS_AES_256_GCM_SHA384 Session-ID: 0E995850DC3A8EB1A838E2FF06CE56DBA81BD8443E7FA05895FBD6FBDE9FE737 Session-ID-ctx: Resumption PSK: 1EA68F33C65A0F0FA2655BF9C1FE906152C6E3FEEE3AEB6B1B99BA7C41F06077989352C58E07CD2FBDC363FA8A542975 PSK identity: None PSK identity hint: None SRP username: None TLS session ticket lifetime hint: 7200 (seconds) # ... Next steps Red Hat Quay features
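As an optional sanity check that is not part of the documented procedure, you can verify that the certificate and key you place in the secret belong together; both commands should print the same digest:
openssl x509 -noout -modulus -in ssl.cert | openssl md5
openssl rsa -noout -modulus -in ssl.key | openssl md5
A mismatch usually means that the wrong ssl.key was copied into the config bundle.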
[ "openssl genrsa -out rootCA.key 2048", "openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.pem", "Country Name (2 letter code) [XX]:IE State or Province Name (full name) []:GALWAY Locality Name (eg, city) [Default City]:GALWAY Organization Name (eg, company) [Default Company Ltd]:QUAY Organizational Unit Name (eg, section) []:DOCS Common Name (eg, your name or your server's hostname) []:quay-server.example.com", "openssl genrsa -out ssl.key 2048", "openssl req -new -key ssl.key -out ssl.csr", "Country Name (2 letter code) [XX]:IE State or Province Name (full name) []:GALWAY Locality Name (eg, city) [Default City]:GALWAY Organization Name (eg, company) [Default Company Ltd]:QUAY Organizational Unit Name (eg, section) []:DOCS Common Name (eg, your name or your server's hostname) []:quay-server.example.com Email Address []:", "[req] req_extensions = v3_req distinguished_name = req_distinguished_name [req_distinguished_name] [ v3_req ] basicConstraints = CA:FALSE keyUsage = nonRepudiation, digitalSignature, keyEncipherment subjectAltName = @alt_names [alt_names] DNS.1 = <quay-server.example.com> IP.1 = 192.168.1.112", "openssl x509 -req -in ssl.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out ssl.cert -days 356 -extensions v3_req -extfile openssl.cnf", "ls /path/to/certificates", "rootCA.key ssl-bundle.cert ssl.key custom-ssl-config-bundle-secret.yaml rootCA.pem ssl.cert openssl.cnf rootCA.srl ssl.csr", "touch custom-ssl-config-bundle-secret.yaml", "oc -n <namespace> create secret generic custom-ssl-config-bundle-secret --from-file=config.yaml=</path/to/config.yaml> \\ 1 --from-file=ssl.cert=</path/to/ssl.cert> \\ 2 --from-file=extra_ca_cert_<name-of-certificate>.crt=ca-certificate-bundle.crt \\ 3 --from-file=ssl.key=</path/to/ssl.key> \\ 4 --dry-run=client -o yaml > custom-ssl-config-bundle-secret.yaml", "cat custom-ssl-config-bundle-secret.yaml", "apiVersion: v1 data: config.yaml: QUxMT1dfUFVMTFNfV0lUSE9VVF9TVFJJQ1RfTE9HR0lORzogZmFsc2UKQVVUSEVOVElDQVRJT05fVFlQRTogRGF0YWJhc2UKREVGQVVMVF9UQUdfRVhQSVJBVElPTjogMncKRElTVFJJQlVURURfU1R ssl.cert: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVYakNDQTBhZ0F3SUJBZ0lVTUFBRk1YVWlWVHNoMGxNTWI3U1l0eFV5eTJjd0RRWUpLb1pJaHZjTkFRRUwKQlFBd2dZZ3hDekFKQmdOVkJBWVR extra_ca_cert_<name-of-certificate>:LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVYakNDQTBhZ0F3SUJBZ0lVTUFBRk1YVWlWVHNoMGxNTWI3U1l0eFV5eTJjd0RRWUpLb1pJaHZjTkFRRUwKQlFBd2dZZ3hDe ssl.key: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2UUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktjd2dnU2pBZ0VBQW9JQkFRQ2c0VWxZOVV1SVJPY1oKcFhpZk9MVEdqaS9neUxQMlpiMXQ kind: Secret metadata: creationTimestamp: null name: custom-ssl-config-bundle-secret namespace: <namespace>", "oc create -n <namespace> -f custom-ssl-config-bundle-secret.yaml", "secret/custom-ssl-config-bundle-secret created", "oc patch quayregistry <registry_name> -n <namespace> --type=merge -p '{\"spec\":{\"configBundleSecret\":\"custom-ssl-config-bundle-secret\"}}'", "quayregistry.quay.redhat.com/example-registry patched", "oc patch quayregistry <registry_name> -n <namespace> --type=merge -p '{\"spec\":{\"components\":[{\"kind\":\"tls\",\"managed\":false}]}}'", "quayregistry.quay.redhat.com/example-registry patched", "oc get quayregistry <registry_name> -n <namespace> -o yaml", "configBundleSecret: custom-ssl-config-bundle-secret spec: components: - kind: tls managed: false", "openssl s_client -connect <quay-server.example.com>:443", "SSL-Session: Protocol : TLSv1.3 Cipher : TLS_AES_256_GCM_SHA384 Session-ID: 
0E995850DC3A8EB1A838E2FF06CE56DBA81BD8443E7FA05895FBD6FBDE9FE737 Session-ID-ctx: Resumption PSK: 1EA68F33C65A0F0FA2655BF9C1FE906152C6E3FEEE3AEB6B1B99BA7C41F06077989352C58E07CD2FBDC363FA8A542975 PSK identity: None PSK identity hint: None SRP username: None TLS session ticket lifetime hint: 7200 (seconds)" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/deploying_the_red_hat_quay_operator_on_openshift_container_platform/operator-custom-ssl-certs-config-bundle
Configuring GFS2 file systems
Configuring GFS2 file systems Red Hat Enterprise Linux 9 Planning, administering, troubleshooting, and configuring GFS2 file systems in a high availability cluster Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_gfs2_file_systems/index
Manage Red Hat Quay
Manage Red Hat Quay Red Hat Quay 3 Manage Red Hat Quay Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_quay/3/html/manage_red_hat_quay/index
Chapter 15. Managing security context constraints
Chapter 15. Managing security context constraints In OpenShift Container Platform, you can use security context constraints (SCCs) to control permissions for the pods in your cluster. Default SCCs are created during installation and when you install some Operators or other components. As a cluster administrator, you can also create your own SCCs by using the OpenShift CLI ( oc ). Important Do not modify the default SCCs. Customizing the default SCCs can lead to issues when some of the platform pods deploy or OpenShift Container Platform is upgraded. Additionally, the default SCC values are reset to the defaults during some cluster upgrades, which discards all customizations to those SCCs. Instead of modifying the default SCCs, create and modify your own SCCs as needed. For detailed steps, see Creating security context constraints . 15.1. About security context constraints Similar to the way that RBAC resources control user access, administrators can use security context constraints (SCCs) to control permissions for pods. These permissions determine the actions that a pod can perform and what resources it can access. You can use SCCs to define a set of conditions that a pod must run with to be accepted into the system. Security context constraints allow an administrator to control: Whether a pod can run privileged containers with the allowPrivilegedContainer flag Whether a pod is constrained with the allowPrivilegeEscalation flag The capabilities that a container can request The use of host directories as volumes The SELinux context of the container The container user ID The use of host namespaces and networking The allocation of an FSGroup that owns the pod volumes The configuration of allowable supplemental groups Whether a container requires write access to its root file system The usage of volume types The configuration of allowable seccomp profiles Important Do not set the openshift.io/run-level label on any namespaces in OpenShift Container Platform. This label is for use by internal OpenShift Container Platform components to manage the startup of major API groups, such as the Kubernetes API server and OpenShift API server. If the openshift.io/run-level label is set, no SCCs are applied to pods in that namespace, causing any workloads running in that namespace to be highly privileged. 15.1.1. Default security context constraints The cluster contains several default security context constraints (SCCs) as described in the table below. Additional SCCs might be installed when you install Operators or other components to OpenShift Container Platform. Important Do not modify the default SCCs. Customizing the default SCCs can lead to issues when some of the platform pods deploy or OpenShift Container Platform is upgraded. Additionally, the default SCC values are reset to the defaults during some cluster upgrades, which discards all customizations to those SCCs. Instead of modifying the default SCCs, create and modify your own SCCs as needed. For detailed steps, see Creating security context constraints . Table 15.1. Default security context constraints Security context constraint Description anyuid Provides all features of the restricted SCC, but allows users to run with any UID and any GID. hostaccess Allows access to all host namespaces but still requires pods to be run with a UID and SELinux context that are allocated to the namespace. Warning This SCC allows host access to namespaces, file systems, and PIDs. It should only be used by trusted pods. Grant with caution. 
hostmount-anyuid Provides all the features of the restricted SCC, but allows host mounts and running as any UID and any GID on the system. Warning This SCC allows host file system access as any UID, including UID 0. Grant with caution. hostnetwork Allows using host networking and host ports but still requires pods to be run with a UID and SELinux context that are allocated to the namespace. Warning If additional workloads are run on control plane hosts, use caution when providing access to hostnetwork . A workload that runs hostnetwork on a control plane host is effectively root on the cluster and must be trusted accordingly. hostnetwork-v2 Like the hostnetwork SCC, but with the following differences: ALL capabilities are dropped from containers. The NET_BIND_SERVICE capability can be added explicitly. seccompProfile is set to runtime/default by default. allowPrivilegeEscalation must be unset or set to false in security contexts. node-exporter Used for the Prometheus node exporter. Warning This SCC allows host file system access as any UID, including UID 0. Grant with caution. nonroot Provides all features of the restricted SCC, but allows users to run with any non-root UID. The user must specify the UID or it must be specified in the manifest of the container runtime. nonroot-v2 Like the nonroot SCC, but with the following differences: ALL capabilities are dropped from containers. The NET_BIND_SERVICE capability can be added explicitly. seccompProfile is set to runtime/default by default. allowPrivilegeEscalation must be unset or set to false in security contexts. privileged Allows access to all privileged and host features and the ability to run as any user, any group, any FSGroup, and with any SELinux context. Warning This is the most relaxed SCC and should be used only for cluster administration. Grant with caution. The privileged SCC allows: Users to run privileged pods Pods to mount host directories as volumes Pods to run as any user Pods to run with any MCS label Pods to use the host's IPC namespace Pods to use the host's PID namespace Pods to use any FSGroup Pods to use any supplemental group Pods to use any seccomp profiles Pods to request any capabilities Note Setting privileged: true in the pod specification does not necessarily select the privileged SCC. The SCC that has allowPrivilegedContainer: true and has the highest prioritization will be chosen if the user has the permissions to use it. restricted Denies access to all host features and requires pods to be run with a UID, and SELinux context that are allocated to the namespace. The restricted SCC: Ensures that pods cannot run as privileged Ensures that pods cannot mount host directory volumes Requires that a pod is run as a user in a pre-allocated range of UIDs Requires that a pod is run with a pre-allocated MCS label Requires that a pod is run with a preallocated FSGroup Allows pods to use any supplemental group In clusters that were upgraded from OpenShift Container Platform 4.10 or earlier, this SCC is available for use by any authenticated user. The restricted SCC is no longer available to users of new OpenShift Container Platform 4.11 or later installations, unless the access is explicitly granted. restricted-v2 Like the restricted SCC, but with the following differences: ALL capabilities are dropped from containers. The NET_BIND_SERVICE capability can be added explicitly. seccompProfile is set to runtime/default by default. allowPrivilegeEscalation must be unset or set to false in security contexts. 
This is the most restrictive SCC provided by a new installation and will be used by default for authenticated users. Note The restricted-v2 SCC is the most restrictive of the SCCs that is included by default with the system. However, you can create a custom SCC that is even more restrictive. For example, you can create an SCC that restricts readOnlyRootFilesystem to true . 15.1.2. Security context constraints settings Security context constraints (SCCs) are composed of settings and strategies that control the security features a pod has access to. These settings fall into three categories: Category Description Controlled by a boolean Fields of this type default to the most restrictive value. For example, AllowPrivilegedContainer is always set to false if unspecified. Controlled by an allowable set Fields of this type are checked against the set to ensure their value is allowed. Controlled by a strategy Items that have a strategy to generate a value provide: A mechanism to generate the value, and A mechanism to ensure that a specified value falls into the set of allowable values. CRI-O has the following default list of capabilities that are allowed for each container of a pod: CHOWN DAC_OVERRIDE FSETID FOWNER SETGID SETUID SETPCAP NET_BIND_SERVICE KILL The containers use the capabilities from this default list, but pod manifest authors can alter the list by requesting additional capabilities or removing some of the default behaviors. Use the allowedCapabilities , defaultAddCapabilities , and requiredDropCapabilities parameters to control such requests from the pods. With these parameters you can specify which capabilities can be requested, which ones must be added to each container, and which ones must be forbidden, or dropped, from each container. Note You can drop all capabilities from containers by setting the requiredDropCapabilities parameter to ALL . This is what the restricted-v2 SCC does. 15.1.3. Security context constraints strategies RunAsUser MustRunAs - Requires a runAsUser to be configured. Uses the configured runAsUser as the default. Validates against the configured runAsUser . Example MustRunAs snippet ... runAsUser: type: MustRunAs uid: <id> ... MustRunAsRange - Requires minimum and maximum values to be defined if not using pre-allocated values. Uses the minimum as the default. Validates against the entire allowable range. Example MustRunAsRange snippet ... runAsUser: type: MustRunAsRange uidRangeMax: <maxvalue> uidRangeMin: <minvalue> ... MustRunAsNonRoot - Requires that the pod be submitted with a non-zero runAsUser or have the USER directive defined in the image. No default provided. Example MustRunAsNonRoot snippet ... runAsUser: type: MustRunAsNonRoot ... RunAsAny - No default provided. Allows any runAsUser to be specified. Example RunAsAny snippet ... runAsUser: type: RunAsAny ... SELinuxContext MustRunAs - Requires seLinuxOptions to be configured if not using pre-allocated values. Uses seLinuxOptions as the default. Validates against seLinuxOptions . RunAsAny - No default provided. Allows any seLinuxOptions to be specified. SupplementalGroups MustRunAs - Requires at least one range to be specified if not using pre-allocated values. Uses the minimum value of the first range as the default. Validates against all ranges. RunAsAny - No default provided. Allows any supplementalGroups to be specified. FSGroup MustRunAs - Requires at least one range to be specified if not using pre-allocated values. Uses the minimum value of the first range as the default.
Validates against the first ID in the first range. RunAsAny - No default provided. Allows any fsGroup ID to be specified. 15.1.4. Controlling volumes The usage of specific volume types can be controlled by setting the volumes field of the SCC. The allowable values of this field correspond to the volume sources that are defined when creating a volume: awsElasticBlockStore azureDisk azureFile cephFS cinder configMap csi downwardAPI emptyDir fc flexVolume flocker gcePersistentDisk ephemeral gitRepo glusterfs hostPath iscsi nfs persistentVolumeClaim photonPersistentDisk portworxVolume projected quobyte rbd scaleIO secret storageos vsphereVolume * (A special value to allow the use of all volume types.) none (A special value to disallow the use of all volume types. Exists only for backwards compatibility.) The recommended minimum set of allowed volumes for new SCCs are configMap , downwardAPI , emptyDir , persistentVolumeClaim , secret , and projected . Note This list of allowable volume types is not exhaustive because new types are added with each release of OpenShift Container Platform. Note For backwards compatibility, the usage of allowHostDirVolumePlugin overrides settings in the volumes field. For example, if allowHostDirVolumePlugin is set to false but allowed in the volumes field, then the hostPath value will be removed from volumes . 15.1.5. Admission control Admission control with SCCs allows for control over the creation of resources based on the capabilities granted to a user. In terms of the SCCs, this means that an admission controller can inspect the user information made available in the context to retrieve an appropriate set of SCCs. Doing so ensures the pod is authorized to make requests about its operating environment or to generate a set of constraints to apply to the pod. The set of SCCs that admission uses to authorize a pod are determined by the user identity and groups that the user belongs to. Additionally, if the pod specifies a service account, the set of allowable SCCs includes any constraints accessible to the service account. Note When you create a workload resource, such as deployment, only the service account is used to find the SCCs and admit the pods when they are created. Admission uses the following approach to create the final security context for the pod: Retrieve all SCCs available for use. Generate field values for security context settings that were not specified on the request. Validate the final settings against the available constraints. If a matching set of constraints is found, then the pod is accepted. If the request cannot be matched to an SCC, the pod is rejected. A pod must validate every field against the SCC. The following are examples for just two of the fields that must be validated: Note These examples are in the context of a strategy using the pre-allocated values. An FSGroup SCC strategy of MustRunAs If the pod defines a fsGroup ID, then that ID must equal the default fsGroup ID. Otherwise, the pod is not validated by that SCC and the next SCC is evaluated. If the SecurityContextConstraints.fsGroup field has value RunAsAny and the pod specification omits the Pod.spec.securityContext.fsGroup , then this field is considered valid. Note that it is possible that during validation, other SCC settings will reject other pod fields and thus cause the pod to fail.
A SupplementalGroups SCC strategy of MustRunAs If the pod specification defines one or more supplementalGroups IDs, then the pod's IDs must equal one of the IDs in the namespace's openshift.io/sa.scc.supplemental-groups annotation. Otherwise, the pod is not validated by that SCC and the next SCC is evaluated. If the SecurityContextConstraints.supplementalGroups field has value RunAsAny and the pod specification omits the Pod.spec.securityContext.supplementalGroups , then this field is considered valid. Note that it is possible that during validation, other SCC settings will reject other pod fields and thus cause the pod to fail. 15.1.6. Security context constraints prioritization Security context constraints (SCCs) have a priority field that affects the ordering when attempting to validate a request by the admission controller. A priority value of 0 is the lowest possible priority. A nil priority is considered a 0 , or lowest, priority. Higher priority SCCs are moved to the front of the set when sorting. When the complete set of available SCCs is determined, the SCCs are ordered in the following manner: The highest priority SCCs are ordered first. If the priorities are equal, the SCCs are sorted from most restrictive to least restrictive. If both the priorities and restrictions are equal, the SCCs are sorted by name. By default, the anyuid SCC granted to cluster administrators is given priority in their SCC set. This allows cluster administrators to run pods as any user by specifying RunAsUser in the pod's SecurityContext . 15.2. About pre-allocated security context constraints values The admission controller is aware of certain conditions in the security context constraints (SCCs) that trigger it to look up pre-allocated values from a namespace and populate the SCC before processing the pod. Each SCC strategy is evaluated independently of other strategies, with the pre-allocated values, where allowed, for each policy aggregated with pod specification values to make the final values for the various IDs defined in the running pod. The following SCCs cause the admission controller to look for pre-allocated values when no ranges are defined in the pod specification: A RunAsUser strategy of MustRunAsRange with no minimum or maximum set. Admission looks for the openshift.io/sa.scc.uid-range annotation to populate range fields. An SELinuxContext strategy of MustRunAs with no level set. Admission looks for the openshift.io/sa.scc.mcs annotation to populate the level. A FSGroup strategy of MustRunAs . Admission looks for the openshift.io/sa.scc.supplemental-groups annotation. A SupplementalGroups strategy of MustRunAs . Admission looks for the openshift.io/sa.scc.supplemental-groups annotation. During the generation phase, the security context provider uses default values for any parameter values that are not specifically set in the pod. Default values are based on the selected strategy: RunAsAny and MustRunAsNonRoot strategies do not provide default values. If the pod needs a parameter value, such as a group ID, you must define the value in the pod specification. MustRunAs (single value) strategies provide a default value that is always used. For example, for group IDs, even if the pod specification defines its own ID value, the namespace's default parameter value also appears in the pod's groups. MustRunAsRange and MustRunAs (range-based) strategies provide the minimum value of the range. As with a single value MustRunAs strategy, the namespace's default parameter value appears in the running pod.
If a range-based strategy is configurable with multiple ranges, it provides the minimum value of the first configured range. Note FSGroup and SupplementalGroups strategies fall back to the openshift.io/sa.scc.uid-range annotation if the openshift.io/sa.scc.supplemental-groups annotation does not exist on the namespace. If neither exists, the SCC is not created. Note By default, the annotation-based FSGroup strategy configures itself with a single range based on the minimum value for the annotation. For example, if your annotation reads 1/3 , the FSGroup strategy configures itself with a minimum and maximum value of 1 . If you want to allow more groups to be accepted for the FSGroup field, you can configure a custom SCC that does not use the annotation. Note The openshift.io/sa.scc.supplemental-groups annotation accepts a comma-delimited list of blocks in the format of <start>/<length or <start>-<end> . The openshift.io/sa.scc.uid-range annotation accepts only a single block. 15.3. Example security context constraints The following examples show the security context constraints (SCC) format and annotations: Annotated privileged SCC allowHostDirVolumePlugin: true allowHostIPC: true allowHostNetwork: true allowHostPID: true allowHostPorts: true allowPrivilegedContainer: true allowedCapabilities: 1 - '*' apiVersion: security.openshift.io/v1 defaultAddCapabilities: [] 2 fsGroup: 3 type: RunAsAny groups: 4 - system:cluster-admins - system:nodes kind: SecurityContextConstraints metadata: annotations: kubernetes.io/description: 'privileged allows access to all privileged and host features and the ability to run as any user, any group, any fsGroup, and with any SELinux context. WARNING: this is the most relaxed SCC and should be used only for cluster administration. Grant with caution.' creationTimestamp: null name: privileged priority: null readOnlyRootFilesystem: false requiredDropCapabilities: null 5 runAsUser: 6 type: RunAsAny seLinuxContext: 7 type: RunAsAny seccompProfiles: - '*' supplementalGroups: 8 type: RunAsAny users: 9 - system:serviceaccount:default:registry - system:serviceaccount:default:router - system:serviceaccount:openshift-infra:build-controller volumes: 10 - '*' 1 A list of capabilities that a pod can request. An empty list means that none of capabilities can be requested while the special symbol * allows any capabilities. 2 A list of additional capabilities that are added to any pod. 3 The FSGroup strategy, which dictates the allowable values for the security context. 4 The groups that can access this SCC. 5 A list of capabilities to drop from a pod. Or, specify ALL to drop all capabilities. 6 The runAsUser strategy type, which dictates the allowable values for the security context. 7 The seLinuxContext strategy type, which dictates the allowable values for the security context. 8 The supplementalGroups strategy, which dictates the allowable supplemental groups for the security context. 9 The users who can access this SCC. 10 The allowable volume types for the security context. In the example, * allows the use of all volume types. The users and groups fields on the SCC control which users can access the SCC. By default, cluster administrators, nodes, and the build controller are granted access to the privileged SCC. All authenticated users are granted access to the restricted-v2 SCC. 
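Because the pre-allocated values described above are read from namespace annotations, it can be useful to inspect those annotations directly when debugging admission behavior. The following is a hedged illustration; my-project is a placeholder namespace name:
oc get namespace my-project -o yaml | grep "sa.scc"
The output typically includes the openshift.io/sa.scc.uid-range, openshift.io/sa.scc.supplemental-groups, and openshift.io/sa.scc.mcs annotations, which are the values that the MustRunAsRange, FSGroup, SupplementalGroups, and SELinuxContext strategies fall back to.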
Without explicit runAsUser setting apiVersion: v1 kind: Pod metadata: name: security-context-demo spec: securityContext: 1 containers: - name: sec-ctx-demo image: gcr.io/google-samples/node-hello:1.0 1 When a container or pod does not request a user ID under which it should be run, the effective UID depends on the SCC that emits this pod. Because the restricted-v2 SCC is granted to all authenticated users by default, it will be available to all users and service accounts and used in most cases. The restricted-v2 SCC uses MustRunAsRange strategy for constraining and defaulting the possible values of the securityContext.runAsUser field. The admission plugin will look for the openshift.io/sa.scc.uid-range annotation on the current project to populate range fields, as it does not provide this range. In the end, a container will have runAsUser equal to the first value of the range that is hard to predict because every project has different ranges. With explicit runAsUser setting apiVersion: v1 kind: Pod metadata: name: security-context-demo spec: securityContext: runAsUser: 1000 1 containers: - name: sec-ctx-demo image: gcr.io/google-samples/node-hello:1.0 1 A container or pod that requests a specific user ID will be accepted by OpenShift Container Platform only when a service account or a user is granted access to a SCC that allows such a user ID. The SCC can allow arbitrary IDs, an ID that falls into a range, or the exact user ID specific to the request. This configuration is valid for SELinux, fsGroup, and Supplemental Groups. 15.4. Creating security context constraints If the default security context constraints (SCCs) do not satisfy your application workload requirements, you can create a custom SCC by using the OpenShift CLI ( oc ). Important Creating and modifying your own SCCs are advanced operations that might cause instability to your cluster. If you have questions about using your own SCCs, contact Red Hat Support. For information about contacting Red Hat support, see Getting support . Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster as a user with the cluster-admin role. Procedure Define the SCC in a YAML file named scc-admin.yaml : kind: SecurityContextConstraints apiVersion: security.openshift.io/v1 metadata: name: scc-admin allowPrivilegedContainer: true runAsUser: type: RunAsAny seLinuxContext: type: RunAsAny fsGroup: type: RunAsAny supplementalGroups: type: RunAsAny users: - my-admin-user groups: - my-admin-group Optionally, you can drop specific capabilities for an SCC by setting the requiredDropCapabilities field with the desired values. Any specified capabilities are dropped from the container. To drop all capabilities, specify ALL . For example, to create an SCC that drops the KILL , MKNOD , and SYS_CHROOT capabilities, add the following to the SCC object: requiredDropCapabilities: - KILL - MKNOD - SYS_CHROOT Note You cannot list a capability in both allowedCapabilities and requiredDropCapabilities . CRI-O supports the same list of capability values that are found in the Docker documentation . 
Create the SCC by passing in the file: USD oc create -f scc-admin.yaml Example output securitycontextconstraints "scc-admin" created Verification Verify that the SCC was created: USD oc get scc scc-admin Example output NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES scc-admin true [] RunAsAny RunAsAny RunAsAny RunAsAny <none> false [awsElasticBlockStore azureDisk azureFile cephFS cinder configMap downwardAPI emptyDir fc flexVolume flocker gcePersistentDisk gitRepo glusterfs iscsi nfs persistentVolumeClaim photonPersistentDisk quobyte rbd secret vsphere] 15.5. Configuring a workload to require a specific SCC You can configure a workload to require a certain security context constraint (SCC). This is useful in scenarios where you want to pin a specific SCC to the workload or if you want to prevent your required SCC from being preempted by another SCC in the cluster. To require a specific SCC, set the openshift.io/required-scc annotation on your workload. You can set this annotation on any resource that can set a pod manifest template, such as a deployment or daemon set. The SCC must exist in the cluster and must be applicable to the workload, otherwise pod admission fails. An SCC is considered applicable to the workload if the user creating the pod or the pod's service account has use permissions for the SCC in the pod's namespace. Warning Do not change the openshift.io/required-scc annotation in the live pod's manifest, because doing so causes the pod admission to fail. To change the required SCC, update the annotation in the underlying pod template, which causes the pod to be deleted and re-created. Prerequisites The SCC must exist in the cluster. Procedure Create a YAML file for the deployment and specify a required SCC by setting the openshift.io/required-scc annotation: Example deployment.yaml apiVersion: apps/v1 kind: Deployment spec: # ... template: metadata: annotations: openshift.io/required-scc: "my-scc" 1 # ... 1 Specify the name of the SCC to require. Create the resource by running the following command: USD oc create -f deployment.yaml Verification Verify that the deployment used the specified SCC: View the value of the pod's openshift.io/scc annotation by running the following command: USD oc get pod <pod_name> -o jsonpath='{.metadata.annotations.openshift\.io\/scc}{"\n"}' 1 1 Replace <pod_name> with the name of your deployment pod. Examine the output and confirm that the displayed SCC matches the SCC that you defined in the deployment: Example output my-scc 15.6. Role-based access to security context constraints You can specify SCCs as resources that are handled by RBAC. This allows you to scope access to your SCCs to a certain project or to the entire cluster. Assigning users, groups, or service accounts directly to an SCC retains cluster-wide scope. Important Do not run workloads in or share access to default projects. Default projects are reserved for running core cluster components. The following default projects are considered highly privileged: default , kube-public , kube-system , openshift , openshift-infra , openshift-node , and other system-created projects that have the openshift.io/run-level label set to 0 or 1 . Functionality that relies on admission plugins, such as pod security admission, security context constraints, cluster resource quotas, and image reference resolution, does not work in highly privileged projects.
To include access to SCCs for your role, specify the scc resource when creating a role. USD oc create role <role-name> --verb=use --resource=scc --resource-name=<scc-name> -n <namespace> This results in the following role definition: apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: ... name: role-name 1 namespace: namespace 2 ... rules: - apiGroups: - security.openshift.io 3 resourceNames: - scc-name 4 resources: - securitycontextconstraints 5 verbs: 6 - use 1 The role's name. 2 Namespace of the defined role. Defaults to default if not specified. 3 The API group that includes the SecurityContextConstraints resource. Automatically defined when scc is specified as a resource. 4 An example name for an SCC you want to have access. 5 Name of the resource group that allows users to specify SCC names in the resourceNames field. 6 A list of verbs to apply to the role. A local or cluster role with such a rule allows the subjects that are bound to it with a role binding or a cluster role binding to use the user-defined SCC called scc-name . Note Because RBAC is designed to prevent escalation, even project administrators are unable to grant access to an SCC. By default, they are not allowed to use the verb use on SCC resources, including the restricted-v2 SCC. 15.7. Reference of security context constraints commands You can manage security context constraints (SCCs) in your instance as normal API objects by using the OpenShift CLI ( oc ). Note You must have cluster-admin privileges to manage SCCs. 15.7.1. Listing security context constraints To get a current list of SCCs: USD oc get scc Example output NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES anyuid false <no value> MustRunAs RunAsAny RunAsAny RunAsAny 10 false ["configMap","downwardAPI","emptyDir","persistentVolumeClaim","projected","secret"] hostaccess false <no value> MustRunAs MustRunAsRange MustRunAs RunAsAny <no value> false ["configMap","downwardAPI","emptyDir","hostPath","persistentVolumeClaim","projected","secret"] hostmount-anyuid false <no value> MustRunAs RunAsAny RunAsAny RunAsAny <no value> false ["configMap","downwardAPI","emptyDir","hostPath","nfs","persistentVolumeClaim","projected","secret"] hostnetwork false <no value> MustRunAs MustRunAsRange MustRunAs MustRunAs <no value> false ["configMap","downwardAPI","emptyDir","persistentVolumeClaim","projected","secret"] hostnetwork-v2 false ["NET_BIND_SERVICE"] MustRunAs MustRunAsRange MustRunAs MustRunAs <no value> false ["configMap","downwardAPI","emptyDir","persistentVolumeClaim","projected","secret"] node-exporter true <no value> RunAsAny RunAsAny RunAsAny RunAsAny <no value> false ["*"] nonroot false <no value> MustRunAs MustRunAsNonRoot RunAsAny RunAsAny <no value> false ["configMap","downwardAPI","emptyDir","persistentVolumeClaim","projected","secret"] nonroot-v2 false ["NET_BIND_SERVICE"] MustRunAs MustRunAsNonRoot RunAsAny RunAsAny <no value> false ["configMap","downwardAPI","emptyDir","persistentVolumeClaim","projected","secret"] privileged true ["*"] RunAsAny RunAsAny RunAsAny RunAsAny <no value> false ["*"] restricted false <no value> MustRunAs MustRunAsRange MustRunAs RunAsAny <no value> false ["configMap","downwardAPI","emptyDir","persistentVolumeClaim","projected","secret"] restricted-v2 false ["NET_BIND_SERVICE"] MustRunAs MustRunAsRange MustRunAs RunAsAny <no value> false ["configMap","downwardAPI","emptyDir","persistentVolumeClaim","projected","secret"] 15.7.2. 
Examining security context constraints You can view information about a particular SCC, including which users, service accounts, and groups the SCC is applied to. For example, to examine the restricted SCC: USD oc describe scc restricted Example output Name: restricted Priority: <none> Access: Users: <none> 1 Groups: <none> 2 Settings: Allow Privileged: false Allow Privilege Escalation: true Default Add Capabilities: <none> Required Drop Capabilities: KILL,MKNOD,SETUID,SETGID Allowed Capabilities: <none> Allowed Seccomp Profiles: <none> Allowed Volume Types: configMap,downwardAPI,emptyDir,persistentVolumeClaim,projected,secret Allowed Flexvolumes: <all> Allowed Unsafe Sysctls: <none> Forbidden Sysctls: <none> Allow Host Network: false Allow Host Ports: false Allow Host PID: false Allow Host IPC: false Read Only Root Filesystem: false Run As User Strategy: MustRunAsRange UID: <none> UID Range Min: <none> UID Range Max: <none> SELinux Context Strategy: MustRunAs User: <none> Role: <none> Type: <none> Level: <none> FSGroup Strategy: MustRunAs Ranges: <none> Supplemental Groups Strategy: RunAsAny Ranges: <none> 1 Lists which users and service accounts the SCC is applied to. 2 Lists which groups the SCC is applied to. Note To preserve customized SCCs during upgrades, do not edit settings on the default SCCs. 15.7.3. Updating security context constraints If your custom SCC no longer satisfies your application workloads requirements, you can update your SCC by using the OpenShift CLI ( oc ). To update an existing SCC: USD oc edit scc <scc_name> Important To preserve customized SCCs during upgrades, do not edit settings on the default SCCs. 15.7.4. Deleting security context constraints If you no longer require your custom SCC, you can delete the SCC by using the OpenShift CLI ( oc ). To delete an SCC: USD oc delete scc <scc_name> Important Do not delete default SCCs. If you delete a default SCC, it is regenerated by the Cluster Version Operator. 15.8. Additional resources Getting support
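A role that grants the use verb on an SCC has no effect until it is bound to a user, group, or service account. As a hedged example that complements the role definition shown earlier (all names are placeholders), a service account can be bound to such a role with a role binding:
oc create rolebinding my-scc-rolebinding --role=role-name --serviceaccount=namespace:my-service-account -n namespace
The same binding can also be expressed declaratively as a RoleBinding manifest if you manage RBAC through version-controlled YAML.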
[ "runAsUser: type: MustRunAs uid: <id>", "runAsUser: type: MustRunAsRange uidRangeMax: <maxvalue> uidRangeMin: <minvalue>", "runAsUser: type: MustRunAsNonRoot", "runAsUser: type: RunAsAny", "allowHostDirVolumePlugin: true allowHostIPC: true allowHostNetwork: true allowHostPID: true allowHostPorts: true allowPrivilegedContainer: true allowedCapabilities: 1 - '*' apiVersion: security.openshift.io/v1 defaultAddCapabilities: [] 2 fsGroup: 3 type: RunAsAny groups: 4 - system:cluster-admins - system:nodes kind: SecurityContextConstraints metadata: annotations: kubernetes.io/description: 'privileged allows access to all privileged and host features and the ability to run as any user, any group, any fsGroup, and with any SELinux context. WARNING: this is the most relaxed SCC and should be used only for cluster administration. Grant with caution.' creationTimestamp: null name: privileged priority: null readOnlyRootFilesystem: false requiredDropCapabilities: null 5 runAsUser: 6 type: RunAsAny seLinuxContext: 7 type: RunAsAny seccompProfiles: - '*' supplementalGroups: 8 type: RunAsAny users: 9 - system:serviceaccount:default:registry - system:serviceaccount:default:router - system:serviceaccount:openshift-infra:build-controller volumes: 10 - '*'", "apiVersion: v1 kind: Pod metadata: name: security-context-demo spec: securityContext: 1 containers: - name: sec-ctx-demo image: gcr.io/google-samples/node-hello:1.0", "apiVersion: v1 kind: Pod metadata: name: security-context-demo spec: securityContext: runAsUser: 1000 1 containers: - name: sec-ctx-demo image: gcr.io/google-samples/node-hello:1.0", "kind: SecurityContextConstraints apiVersion: security.openshift.io/v1 metadata: name: scc-admin allowPrivilegedContainer: true runAsUser: type: RunAsAny seLinuxContext: type: RunAsAny fsGroup: type: RunAsAny supplementalGroups: type: RunAsAny users: - my-admin-user groups: - my-admin-group", "requiredDropCapabilities: - KILL - MKNOD - SYS_CHROOT", "oc create -f scc-admin.yaml", "securitycontextconstraints \"scc-admin\" created", "oc get scc scc-admin", "NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES scc-admin true [] RunAsAny RunAsAny RunAsAny RunAsAny <none> false [awsElasticBlockStore azureDisk azureFile cephFS cinder configMap downwardAPI emptyDir fc flexVolume flocker gcePersistentDisk gitRepo glusterfs iscsi nfs persistentVolumeClaim photonPersistentDisk quobyte rbd secret vsphere]", "apiVersion: config.openshift.io/v1 kind: Deployment apiVersion: apps/v1 spec: template: metadata: annotations: openshift.io/required-scc: \"my-scc\" 1", "oc create -f deployment.yaml", "oc get pod <pod_name> -o jsonpath='{.metadata.annotations.openshift\\.io\\/scc}{\"\\n\"}' 1", "my-scc", "oc create role <role-name> --verb=use --resource=scc --resource-name=<scc-name> -n <namespace>", "apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: role-name 1 namespace: namespace 2 rules: - apiGroups: - security.openshift.io 3 resourceNames: - scc-name 4 resources: - securitycontextconstraints 5 verbs: 6 - use", "oc get scc", "NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES anyuid false <no value> MustRunAs RunAsAny RunAsAny RunAsAny 10 false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"] hostaccess false <no value> MustRunAs MustRunAsRange MustRunAs RunAsAny <no value> false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"hostPath\",\"persistentVolumeClaim\",\"projected\",\"secret\"] hostmount-anyuid 
false <no value> MustRunAs RunAsAny RunAsAny RunAsAny <no value> false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"hostPath\",\"nfs\",\"persistentVolumeClaim\",\"projected\",\"secret\"] hostnetwork false <no value> MustRunAs MustRunAsRange MustRunAs MustRunAs <no value> false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"] hostnetwork-v2 false [\"NET_BIND_SERVICE\"] MustRunAs MustRunAsRange MustRunAs MustRunAs <no value> false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"] node-exporter true <no value> RunAsAny RunAsAny RunAsAny RunAsAny <no value> false [\"*\"] nonroot false <no value> MustRunAs MustRunAsNonRoot RunAsAny RunAsAny <no value> false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"] nonroot-v2 false [\"NET_BIND_SERVICE\"] MustRunAs MustRunAsNonRoot RunAsAny RunAsAny <no value> false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"] privileged true [\"*\"] RunAsAny RunAsAny RunAsAny RunAsAny <no value> false [\"*\"] restricted false <no value> MustRunAs MustRunAsRange MustRunAs RunAsAny <no value> false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"] restricted-v2 false [\"NET_BIND_SERVICE\"] MustRunAs MustRunAsRange MustRunAs RunAsAny <no value> false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"]", "oc describe scc restricted", "Name: restricted Priority: <none> Access: Users: <none> 1 Groups: <none> 2 Settings: Allow Privileged: false Allow Privilege Escalation: true Default Add Capabilities: <none> Required Drop Capabilities: KILL,MKNOD,SETUID,SETGID Allowed Capabilities: <none> Allowed Seccomp Profiles: <none> Allowed Volume Types: configMap,downwardAPI,emptyDir,persistentVolumeClaim,projected,secret Allowed Flexvolumes: <all> Allowed Unsafe Sysctls: <none> Forbidden Sysctls: <none> Allow Host Network: false Allow Host Ports: false Allow Host PID: false Allow Host IPC: false Read Only Root Filesystem: false Run As User Strategy: MustRunAsRange UID: <none> UID Range Min: <none> UID Range Max: <none> SELinux Context Strategy: MustRunAs User: <none> Role: <none> Type: <none> Level: <none> FSGroup Strategy: MustRunAs Ranges: <none> Supplemental Groups Strategy: RunAsAny Ranges: <none>", "oc edit scc <scc_name>", "oc delete scc <scc_name>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/authentication_and_authorization/managing-pod-security-policies
Chapter 6. Shutting down virtual machines
Chapter 6. Shutting down virtual machines To shut down a running virtual machine hosted on RHEL 9, use the command line or the web console GUI . 6.1. Shutting down a virtual machine by using the command line Shutting down a virtual machine (VM) requires different steps based on whether the VM is responsive. Shutting down a responsive VM If you are connected to the guest , use a shutdown command or a GUI element appropriate to the guest operating system. Note In some environments, such as in Linux guests that use the GNOME Desktop, using the GUI power button for suspending or hibernating the guest might instead shut down the VM. Alternatively, use the virsh shutdown command on the host: If the VM is on a local host: If the VM is on a remote host, in this example 192.0.2.1 : Shutting down an unresponsive VM To force a VM to shut down, for example if it has become unresponsive, use the virsh destroy command on the host: Note The virsh destroy command does not actually delete or remove the VM configuration or disk images. It only terminates the running instance of the VM, similarly to pulling the power cord from a physical machine. In rare cases, virsh destroy may cause corruption of the VM's file system, so using this command is only recommended if all other shutdown methods have failed. Verification On the host, display the list of your VMs to see their status. 6.2. Shutting down and restarting virtual machines by using the web console Using the RHEL 9 web console, you can shut down or restart running virtual machines. You can also send a non-maskable interrupt to an unresponsive virtual machine. 6.2.1. Shutting down virtual machines in the web console If a virtual machine (VM) is in the running state, you can shut it down by using the RHEL 9 web console. Prerequisites You have installed the RHEL 9 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The web console VM plug-in is installed on your system . Procedure In the Virtual Machines interface, find the row of the VM you want to shut down. On the right side of the row, click Shut Down . The VM shuts down. Troubleshooting If the VM does not shut down, click the Menu button ... next to the Shut Down button and select Force Shut Down . To shut down an unresponsive VM, you can also send a non-maskable interrupt . Additional resources Starting virtual machines by using the web console Restarting virtual machines by using the web console 6.2.2. Restarting virtual machines by using the web console If a virtual machine (VM) is in the running state, you can restart it by using the RHEL 9 web console. Prerequisites You have installed the RHEL 9 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The web console VM plug-in is installed on your system . Procedure In the Virtual Machines interface, find the row of the VM you want to restart. On the right side of the row, click the Menu button ... . A drop-down menu of actions appears. In the drop-down menu, click Reboot . The VM shuts down and restarts. Troubleshooting If the VM does not restart, click the Menu button ... next to the Reboot button and select Force Reboot . To shut down an unresponsive VM, you can also send a non-maskable interrupt .
Additional resources Starting virtual machines by using the web console Shutting down virtual machines in the web console 6.2.3. Sending non-maskable interrupts to VMs by using the web console Sending a non-maskable interrupt (NMI) may cause an unresponsive running virtual machine (VM) to respond or shut down. For example, you can send the Ctrl + Alt + Del NMI to a VM that is not responding to standard input. Prerequisites You have installed the RHEL 9 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The web console VM plug-in is installed on your system . Procedure Log in to the RHEL 9 web console. For details, see Logging in to the web console . In the Virtual Machines interface, find the row of the VM to which you want to send an NMI. On the right side of the row, click the Menu button ... . A drop-down menu of actions appears. In the drop-down menu, click Send non-maskable interrupt . An NMI is sent to the VM. Additional resources Starting virtual machines by using the web console Restarting virtual machines by using the web console Shutting down virtual machines in the web console
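When a host needs to be taken down for maintenance, you may want to shut down every running VM rather than naming each one. A small shell loop over virsh list is one way to do this; this is a sketch that assumes a graceful guest shutdown is acceptable for all of the listed VMs:
for vm in $(virsh list --name); do virsh shutdown "$vm"; done
You can then poll virsh list --all until all of the VMs report the shut off state, and fall back to virsh destroy only for guests that remain unresponsive.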
[ "virsh shutdown demo-guest1 Domain 'demo-guest1' is being shutdown", "virsh -c qemu+ssh://[email protected]/system shutdown demo-guest1 [email protected]'s password: Domain 'demo-guest1' is being shutdown", "virsh destroy demo-guest1 Domain 'demo-guest1' destroyed", "virsh list --all Id Name State ------------------------------------------ 1 demo-guest1 shut off" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_virtualization/assembly_shutting-down-virtual-machines_configuring-and-managing-virtualization
Chapter 6. Collecting OpenShift sandboxed containers data for Red Hat Support
Chapter 6. Collecting OpenShift sandboxed containers data for Red Hat Support When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support. The must-gather tool enables you to collect diagnostic information about your OpenShift Container Platform cluster, including virtual machines and other data related to OpenShift sandboxed containers. For prompt support, supply diagnostic information for both OpenShift Container Platform and OpenShift sandboxed containers. 6.1. About the must-gather tool The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues, including: Resource definitions Service logs By default, the oc adm must-gather command uses the default plugin image and writes into ./must-gather.local . Alternatively, you can collect specific information by running the command with the appropriate arguments as described in the following sections: To collect data related to one or more specific features, use the --image argument with an image, as listed in a following section. For example: USD oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.9.0 To collect the audit logs, use the -- /usr/bin/gather_audit_logs argument, as described in a following section. For example: USD oc adm must-gather -- /usr/bin/gather_audit_logs Note Audit logs are not collected as part of the default set of information to reduce the size of the files. When you run oc adm must-gather , a new pod with a random name is created in a new project on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local . This directory is created in the current working directory. For example: NAMESPACE NAME READY STATUS RESTARTS AGE ... openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s ... 6.2. About collecting OpenShift sandboxed containers data You can use the oc adm must-gather CLI command to collect information about your cluster. The following features and objects are associated with OpenShift sandboxed containers: All namespaces and their child objects that belong to any OpenShift sandboxed containers resources All OpenShift sandboxed containers custom resource definitions (CRDs) The oc adm must-gather CLI command collects the following component logs: QEMU logs Audit logs OpenShift sandboxed containers logs CRI-O logs These component logs are collected as long as there is at least one pod running with the kata runtime. To collect OpenShift sandboxed containers data with must-gather , you must specify the OpenShift sandboxed containers image: --image=registry.redhat.io/openshift-sandboxed-containers-tech-preview/osc-must-gather-rhel8:1.1.0 6.3. Additional resources For more information about gathering data for support, see Gathering data about your cluster .
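Putting these pieces together, the default cluster data and the OpenShift sandboxed containers data can be collected in a single run by passing both images to the command. This is an illustrative invocation; the image tag shown is the one quoted above and may differ in your environment:
oc adm must-gather --image-stream=openshift/must-gather --image=registry.redhat.io/openshift-sandboxed-containers-tech-preview/osc-must-gather-rhel8:1.1.0
The resulting must-gather.local directory contains one subdirectory per image, which you can compress and attach to your support case.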
[ "oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.9.0", "oc adm must-gather -- /usr/bin/gather_audit_logs", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s", "--image=registry.redhat.io/openshift-sandboxed-containers-tech-preview/osc-must-gather-rhel8:1.1.0" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/sandboxed_containers_support_for_openshift/troubleshooting-sandboxed-containers
Chapter 5. Installing the Red Hat Virtualization Manager
Chapter 5. Installing the Red Hat Virtualization Manager The RHV-M Appliance is installed during the deployment process; however, if required, you can install it on the deployment host before starting the installation: Manually installing the Manager virtual machine is not supported. 5.1. Deploying the Self-Hosted Engine Using the Command Line You can deploy a self-hosted engine from the command line. After installing the setup package, you run the command hosted-engine --deploy , and a script collects the details of your environment and uses them to configure the host and the Manager. Prerequisites FQDNs prepared for your Manager and the host. Forward and reverse lookup records must both be set in the DNS. Procedure Install the deployment tool: Red Hat recommends using the screen window manager to run the script to avoid losing the session in case of network or terminal disruption. Install and start screen : Start the deployment script: Note To escape the script at any time, use the Ctrl + D keyboard combination to abort deployment. In the event of session timeout or connection disruption, run screen -d -r to recover the deployment session. When prompted, enter Yes to begin the deployment: Configure the network. Check that the gateway shown is correct and press Enter . Enter a pingable address on the same subnet so the script can check the host's connectivity. The script detects possible NICs to use as a management bridge for the environment. Enter one of them or press Enter to accept the default. If you want to use a custom appliance for the virtual machine installation, enter the path to the OVA archive. Otherwise, leave this field empty to use the RHV-M Appliance. Enter the virtual machine's CPU and memory configuration: Specify the FQDN for the Manager virtual machine, such as manager.example.com : Specify the domain of the Manager virtual machine. For example, if the FQDN is manager.example.com , then enter example.com . Create the root password for the Manager, and reenter it to confirm: Optionally, enter an SSH public key to enable you to log in to the Manager as the root user without entering a password, and specify whether to enable SSH access for the root user: Enter a MAC address for the Manager virtual machine, or accept a randomly generated one. If you want to provide the Manager virtual machine with an IP address via DHCP, ensure that you have a valid DHCP reservation for this MAC address. The deployment script will not configure the DHCP server for you. Enter the virtual machine's networking details: If you specified Static , enter the IP address of the Manager: Important The static IP address must belong to the same subnet as the host. For example, if the host is in 10.1.1.0/24, the Manager virtual machine's IP must be in the same subnet range (10.1.1.1-254/24). For IPv6, Red Hat Virtualization supports only static addressing. Specify whether to add entries for the Manager virtual machine and the base host to the virtual machine's /etc/hosts file. You must ensure that the host names are resolvable. Provide the name and TCP port number of the SMTP server, the email address used to send email notifications, and a comma-separated list of email addresses to receive these notifications. Alternatively, press Enter to accept the defaults: Create a password for the admin@internal user to access the Administration Portal and reenter it to confirm: The script creates the virtual machine. This can take some time if it needs to install the RHV-M Appliance. 
After creating the virtual machine, the script continues to gather information. Select the type of storage to use: For NFS, enter the version, full address and path to the storage, and any mount options: For iSCSI, enter the portal details and select a target and LUN from the auto-detected lists. You can only select one iSCSI target during the deployment, but multipathing is supported to connect all portals of the same portal group. Note To specify more than one iSCSI target, you must enable multipathing before deploying the self-hosted engine. See Red Hat Enterprise Linux DM Multipath for details. There is also a Multipath Helper tool that generates a script to install and configure multipath with different options. For Gluster storage, enter the full address and path to the storage, and any mount options: Important Only replica 3 Gluster storage is supported. Ensure you have the following configuration: In the /etc/glusterfs/glusterd.vol file on all three Gluster servers, set rpc-auth-allow-insecure to on . Configure the volume as follows: For Fibre Channel, select a LUN from the auto-detected list. The host bus adapters must be configured and connected, and the LUN must not contain any existing data. To reuse an existing LUN, see Reusing LUNs in the Administration Guide . Enter the Manager disk size: When the deployment completes successfully, one data center, cluster, host, storage domain, and the Manager virtual machine are already running. You can log in to the Administration Portal to add any other resources. Optionally, add a directory server using the ovirt-engine-extension-aaa-ldap-setup interactive setup script so you can add additional users to the environment. For more information, see Configuring an External LDAP Provider in the Administration Guide . The Manager virtual machine, the host running it, and the self-hosted engine storage domain are flagged with a gold crown in the Administration Portal. Enabling the Red Hat Virtualization Manager repositories is not part of the automated installation. Log in to the Manager virtual machine to register it with the Content Delivery Network: 5.2. Enabling the Red Hat Virtualization Manager Repositories Register the system with Red Hat Subscription Manager, attach the Red Hat Virtualization Manager subscription, and enable Manager repositories. Procedure Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted: Note If you are using an IPv6 network, use an IPv6 transition mechanism to access the Content Delivery Network and subscription manager. Find the Red Hat Virtualization Manager subscription pool and record the pool ID: Use the pool ID to attach the subscription to the system: Note To view currently attached subscriptions: To list all enabled repositories: Configure the repositories: Log in to the Administration Portal, where you can add hosts and storage to the environment: 5.3. Connecting to the Administration Portal Access the Administration Portal using a web browser. In a web browser, navigate to https:// manager-fqdn /ovirt-engine , replacing manager-fqdn with the FQDN that you provided during installation. Note You can access the Administration Portal using alternate host names or IP addresses. To do so, you need to add a configuration file under /etc/ovirt-engine/engine.conf.d/ . For example: The list of alternate host names needs to be separated by spaces. 
You can also add the IP address of the Manager to the list, but using IP addresses instead of DNS-resolvable host names is not recommended. Click Administration Portal . An SSO login page displays. SSO login enables you to log in to the Administration and VM Portal at the same time. Enter your User Name and Password . If you are logging in for the first time, use the user name admin along with the password that you specified during installation. Select the Domain to authenticate against. If you are logging in using the internal admin user name, select the internal domain. Click Log In . You can view the Administration Portal in multiple languages. The default selection is chosen based on the locale settings of your web browser. If you want to view the Administration Portal in a language other than the default, select your preferred language from the drop-down list on the welcome page. To log out of the Red Hat Virtualization Administration Portal, click your user name in the header bar and click Sign Out . You are logged out of all portals and the Manager welcome screen displays.
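Before logging in to the Administration Portal, you can optionally confirm from the deployment host that the Manager virtual machine is up and healthy. This is a supplementary check rather than part of the documented procedure:
hosted-engine --vm-status
The output reports the engine status for each self-hosted engine node, and a healthy deployment shows the engine VM as up with good health. Note also that if you add alternate host names to the SSO_ALTERNATE_ENGINE_FQDNS setting described above, the change is generally picked up only after the ovirt-engine service on the Manager machine is restarted (systemctl restart ovirt-engine); check the documentation for your exact version.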
[ "yum install rhvm-appliance", "yum install ovirt-hosted-engine-setup", "yum install screen screen", "hosted-engine --deploy", "Continuing will configure this host for serving as hypervisor and will create a local VM with a running engine. The locally running engine will be used to configure a new storage domain and create a VM there. At the end the disk of the local VM will be moved to the shared storage. Are you sure you want to continue? (Yes, No)[Yes]:", "Please indicate a pingable gateway IP address [X.X.X.X]:", "Please indicate a nic to set ovirtmgmt bridge on: (eth1, eth0) [eth1]:", "If you want to deploy with a custom engine appliance image, please specify the path to the OVA archive you would like to use (leave it empty to skip, the setup will use rhvm-appliance rpm installing it if missing):", "Please specify the number of virtual CPUs for the VM (Defaults to appliance OVF value): [4]: Please specify the memory size of the VM in MB (Defaults to maximum available): [7267]:", "Please provide the FQDN you would like to use for the engine appliance. Note: This will be the FQDN of the engine VM you are now going to launch, it should not point to the base host or to any other existing machine. Engine VM FQDN:", "Please provide the domain name you would like to use for the engine appliance. Engine VM domain: [example.com]", "Enter root password that will be used for the engine appliance: Confirm appliance root password:", "Enter ssh public key for the root user that will be used for the engine appliance (leave it empty to skip): Do you want to enable ssh access for the root user (yes, no, without-password) [yes]:", "You may specify a unicast MAC address for the VM or accept a randomly generated default [00:16:3e:3d:34:47]:", "How should the engine VM network be configured (DHCP, Static)[DHCP]?", "Please enter the IP address to be used for the engine VM [x.x.x.x]: Please provide a comma-separated list (max 3) of IP addresses of domain name servers for the engine VM Engine VM DNS (leave it empty to skip):", "Add lines for the appliance itself and for this host to /etc/hosts on the engine VM? 
Note: ensuring that this host could resolve the engine VM hostname is still up to you (Yes, No)[No]", "Please provide the name of the SMTP server through which we will send notifications [localhost]: Please provide the TCP port number of the SMTP server [25]: Please provide the email address from which notifications will be sent [root@localhost]: Please provide a comma-separated list of email addresses which will get notifications [root@localhost]:", "Enter engine admin password: Confirm engine admin password:", "Please specify the storage you would like to use (glusterfs, iscsi, fc, nfs)[nfs]:", "Please specify the nfs version you would like to use (auto, v3, v4, v4_1)[auto]: Please specify the full shared storage connection path to use (example: host:/path): storage.example.com:/hosted_engine/nfs If needed, specify additional mount options for the connection to the hosted-engine storage domain []:", "Please specify the iSCSI portal IP address: Please specify the iSCSI portal port [3260]: Please specify the iSCSI discover user: Please specify the iSCSI discover password: Please specify the iSCSI portal login user: Please specify the iSCSI portal login password: The following targets have been found: [1] iqn.2017-10.com.redhat.example:he TPGT: 1, portals: 192.168.1.xxx:3260 192.168.2.xxx:3260 192.168.3.xxx:3260 Please select a target (1) [1]: 1 The following luns have been found on the requested target: [1] 360003ff44dc75adcb5046390a16b4beb 199GiB MSFT Virtual HD status: free, paths: 1 active Please select the destination LUN (1) [1]:", "option rpc-auth-allow-insecure on", "gluster volume set _volume_ cluster.quorum-type auto gluster volume set _volume_ network.ping-timeout 10 gluster volume set _volume_ auth.allow \\* gluster volume set _volume_ group virt gluster volume set _volume_ storage.owner-uid 36 gluster volume set _volume_ storage.owner-gid 36 gluster volume set _volume_ server.allow-insecure on", "Please specify the full shared storage connection path to use (example: host:/path): storage.example.com:/hosted_engine/gluster_volume If needed, specify additional mount options for the connection to the hosted-engine storage domain []:", "The following luns have been found on the requested target: [1] 3514f0c5447600351 30GiB XtremIO XtremApp status: used, paths: 2 active [2] 3514f0c5447600352 30GiB XtremIO XtremApp status: used, paths: 2 active Please select the destination LUN (1, 2) [1]:", "Please specify the size of the VM disk in GB: [50]:", "subscription-manager register", "subscription-manager list --available", "subscription-manager attach --pool= pool_id", "subscription-manager list --consumed", "yum repolist", "subscription-manager repos --disable='*' --enable=rhel-7-server-rpms --enable=rhel-7-server-supplementary-rpms --enable=rhel-7-server-rhv-4.3-manager-rpms --enable=rhel-7-server-rhv-4-manager-tools-rpms --enable=rhel-7-server-ansible-2.9-rpms --enable=jb-eap-7.2-for-rhel-7-server-rpms", "vi /etc/ovirt-engine/engine.conf.d/99-custom-sso-setup.conf SSO_ALTERNATE_ENGINE_FQDNS=\" alias1.example.com alias2.example.com \"" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/installing_red_hat_virtualization_as_a_self-hosted_engine_using_the_command_line/Installing_the_Red_Hat_Virtualization_Manager_SHE_cli_deploy
Chapter 4. Managing groups
Chapter 4. Managing groups You can use Identity Service (keystone) groups to assign consistent permissions to multiple user accounts. 4.1. Configuring groups with the CLI Create a group and assign permissions to the group. Members of the group inherit the same permissions that you assign to the group: Create the group grp-Auditors : View a list of keystone groups: Grant the grp-Auditors group permission to access the demo project, using the member role: Add the existing user user1 to the grp-Auditors group: Confirm that user1 is a member of grp-Auditors : Review the effective permissions that have been assigned to user1 : 4.2. Configuring groups with the Dashboard You can use the dashboard to manage the membership of keystone groups. However, you must use the command line to assign role permissions to a group. For more information, see Configuring groups with the CLI . 4.2.1. Creating a group Log in to the dashboard as a user with administrative privileges. Select Identity > Groups . Click +Create Group . Enter a name and description for the group. Click Create Group . 4.2.2. Managing group membership You can use the dashboard to manage the membership of keystone groups. Log in to the dashboard as a user with administrative privileges. Select Identity > Groups . Click Manage Members for the group that you want to edit. Use Add users to add a user to the group. If you want to remove a user, select the check box for that user and click Remove users .
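To reverse the CLI steps, a minimal sketch using the same example names ( grp-Auditors , user1 , demo ); the group remove user and role remove subcommands are part of the standard python-openstackclient:
openstack group remove user grp-Auditors user1
openstack group contains user grp-Auditors user1
openstack role remove member --group grp-Auditors --project demo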
[ "openstack group create grp-Auditors +-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | | | domain_id | default | | id | 2a4856fc242142a4aa7c02d28edfdfff | | name | grp-Auditors | +-------------+----------------------------------+", "openstack group list --long +----------------------------------+--------------+-----------+-------------+ | ID | Name | Domain ID | Description | +----------------------------------+--------------+-----------+-------------+ | 2a4856fc242142a4aa7c02d28edfdfff | grp-Auditors | default | | +----------------------------------+--------------+-----------+-------------+", "openstack role add member --group grp-Auditors --project demo", "openstack group add user grp-Auditors user1 user1 added to group grp-Auditors", "openstack group contains user grp-Auditors user1 user1 in group grp-Auditors", "openstack role assignment list --effective --user user1 +----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+ | Role | User | Group | Project | Domain | Inherited | +----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+ | 9fe2ff9ee4384b1894a90878d3e92bab | 3fefe5b4f6c948e6959d1feaef4822f2 | | 0ce36252e2fb4ea8983bed2a568fa832 | | False | +----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/users_and_identity_management_guide/managing_groups
Chapter 14. Overview of NVMe over fabric devices
Chapter 14. Overview of NVMe over fabric devices Non-volatile Memory Express™ (NVMe™) is an interface that allows host software utilities to communicate with solid-state drives. Use the following types of fabric transport to configure NVMe over fabric devices: NVMe over Remote Direct Memory Access (NVMe/RDMA) For information about how to configure NVMe™/RDMA, see Configuring NVMe over fabrics using NVMe/RDMA . NVMe over Fibre Channel (NVMe/FC) For information about how to configure NVMe™/FC, see Configuring NVMe over fabrics using NVMe/FC . When you use NVMe over fabrics, the solid-state drive does not have to be local to your system; you can configure it remotely through an NVMe over fabrics device.
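As an illustrative check rather than part of either configuration procedure, the nvme-cli package provides commands that show which NVMe namespaces and transports a host currently sees:
nvme list           (lists local and fabric-attached NVMe namespaces)
nvme list-subsys    (shows each subsystem and the transport in use, for example rdma or fc)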
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_storage_devices/overview-of-nvme-over-fabric-devices_managing-storage-devices
Release notes
Release notes Red Hat OpenShift AI Self-Managed 2.18 Features, enhancements, resolved issues, and known issues associated with this release
null
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/release_notes/index
Configure Red Hat Quay
Configure Red Hat Quay Red Hat Quay 3.10 Customizing Red Hat Quay using configuration options Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/configure_red_hat_quay/index
5.2.31. /proc/version
5.2.31. /proc/version This file specifies the version of the Linux kernel and gcc in use, as well as the version of Red Hat Enterprise Linux installed on the system: This information is used for a variety of purposes, including the version data presented when a user logs in.
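For example, you can read the file directly and compare it with the running kernel release; both of the following are standard utilities:
cat /proc/version
uname -r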
[ "Linux version 2.6.8-1.523 ([email protected]) (gcc version 3.4.1 20040714 (Red Hat Enterprise Linux 3.4.1-7)) #1 Mon Aug 16 13:27:03 EDT 2004" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-proc-version
Chapter 6. Pipelines CLI (tkn)
Chapter 6. Pipelines CLI (tkn) 6.1. Installing tkn Use the CLI tool to manage Red Hat OpenShift Pipelines from a terminal. The following section describes how to install the CLI tool on different platforms. You can also find the URL to the latest binaries from the OpenShift Container Platform web console by clicking the ? icon in the upper-right corner and selecting Command Line Tools . Important Running Red Hat OpenShift Pipelines on ARM hardware is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Note Both the archives and the RPMs contain the following executables: tkn tkn-pac opc Important Running Red Hat OpenShift Pipelines with the opc CLI tool is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 6.1.1. Installing the Red Hat OpenShift Pipelines CLI on Linux For Linux distributions, you can download the CLI as a tar.gz archive. Procedure Download the relevant CLI tool. Linux (x86_64, amd64) Linux on IBM Z(R) and IBM(R) LinuxONE (s390x) Linux on IBM Power(R) (ppc64le) Linux on ARM (aarch64, arm64) Unpack the archive: USD tar xvzf <file> Add the location of your tkn , tkn-pac , and opc files to your PATH environment variable. To check your PATH , run the following command: USD echo USDPATH 6.1.2. Installing the Red Hat OpenShift Pipelines CLI on Linux using an RPM For Red Hat Enterprise Linux (RHEL) version 8, you can install the Red Hat OpenShift Pipelines CLI as an RPM. Prerequisites You have an active OpenShift Container Platform subscription on your Red Hat account. You have root or sudo privileges on your local system.
Procedure Register with Red Hat Subscription Manager: # subscription-manager register Pull the latest subscription data: # subscription-manager refresh List the available subscriptions: # subscription-manager list --available --matches '*pipelines*' In the output for the command, find the pool ID for your OpenShift Container Platform subscription and attach the subscription to the registered system: # subscription-manager attach --pool=<pool_id> Enable the repositories required by Red Hat OpenShift Pipelines: Linux (x86_64, amd64) # subscription-manager repos --enable="pipelines-1.17-for-rhel-8-x86_64-rpms" Linux on IBM Z(R) and IBM(R) LinuxONE (s390x) # subscription-manager repos --enable="pipelines-1.17-for-rhel-8-s390x-rpms" Linux on IBM Power(R) (ppc64le) # subscription-manager repos --enable="pipelines-1.17-for-rhel-8-ppc64le-rpms" Linux on ARM (aarch64, arm64) # subscription-manager repos --enable="pipelines-1.17-for-rhel-8-aarch64-rpms" Install the openshift-pipelines-client package: # yum install openshift-pipelines-client After you install the CLI, it is available using the tkn command: USD tkn version 6.1.3. Installing the Red Hat OpenShift Pipelines CLI on Windows For Windows, you can download the CLI as a zip archive. Procedure Download the CLI tool . Extract the archive with a ZIP program. Add the location of your tkn , tkn-pac , and opc files to your PATH environment variable. To check your PATH , run the following command: C:\> path 6.1.4. Installing the Red Hat OpenShift Pipelines CLI on macOS For macOS, you can download the CLI as a tar.gz archive. Procedure Download the relevant CLI tool. macOS macOS on ARM Unpack and extract the archive. Add the location of your tkn , tkn-pac , and opc files to your PATH environment variable. To check your PATH , run the following command: USD echo USDPATH 6.2. Configuring the OpenShift Pipelines tkn CLI Configure the Red Hat OpenShift Pipelines tkn CLI to enable tab completion. 6.2.1. Enabling tab completion After you install the tkn CLI, you can enable tab completion to automatically complete tkn commands or suggest options when you press Tab. Prerequisites You must have the tkn CLI tool installed. You must have bash-completion installed on your local system. Procedure The following procedure enables tab completion for Bash. Save the Bash completion code to a file: USD tkn completion bash > tkn_bash_completion Copy the file to /etc/bash_completion.d/ : USD sudo cp tkn_bash_completion /etc/bash_completion.d/ Alternatively, you can save the file to a local directory and source it from your .bashrc file instead. Tab completion is enabled when you open a new terminal. 6.3. OpenShift Pipelines tkn reference This section lists the basic tkn CLI commands. 6.3.1. Basic syntax tkn [command or options] [arguments... ] 6.3.2. Global options --help, -h 6.3.3. Utility commands 6.3.3.1. tkn Parent command for tkn CLI. Example: Display all options USD tkn 6.3.3.2. completion [shell] Print shell completion code which must be evaluated to provide interactive completion. Supported shells are bash and zsh . Example: Completion code for bash shell USD tkn completion bash 6.3.3.3. version Print version information of the tkn CLI. Example: Check the tkn version USD tkn version 6.3.4. Pipelines management commands 6.3.4.1. pipeline Manage pipelines. Example: Display help USD tkn pipeline --help 6.3.4.2. pipeline delete Delete a pipeline. Example: Delete the mypipeline pipeline from a namespace USD tkn pipeline delete mypipeline -n myspace 6.3.4.3. 
pipeline describe Describe a pipeline. Example: Describe the mypipeline pipeline USD tkn pipeline describe mypipeline 6.3.4.4. pipeline list Display a list of pipelines. Example: Display a list of pipelines USD tkn pipeline list 6.3.4.5. pipeline logs Display the logs for a specific pipeline. Example: Stream the live logs for the mypipeline pipeline USD tkn pipeline logs -f mypipeline 6.3.4.6. pipeline start Start a pipeline. Example: Start the mypipeline pipeline USD tkn pipeline start mypipeline 6.3.5. Pipeline run commands 6.3.5.1. pipelinerun Manage pipeline runs. Example: Display help USD tkn pipelinerun -h 6.3.5.2. pipelinerun cancel Cancel a pipeline run. Example: Cancel the mypipelinerun pipeline run from a namespace USD tkn pipelinerun cancel mypipelinerun -n myspace 6.3.5.3. pipelinerun delete Delete a pipeline run. Example: Delete pipeline runs from a namespace USD tkn pipelinerun delete mypipelinerun1 mypipelinerun2 -n myspace Example: Delete all pipeline runs from a namespace, except the five most recently executed pipeline runs USD tkn pipelinerun delete -n myspace --keep 5 1 1 Replace 5 with the number of most recently executed pipeline runs you want to retain. Example: Delete all pipelines USD tkn pipelinerun delete --all Note Starting with Red Hat OpenShift Pipelines 1.6, the tkn pipelinerun delete --all command does not delete any resources that are in the running state. 6.3.5.4. pipelinerun describe Describe a pipeline run. Example: Describe the mypipelinerun pipeline run in a namespace USD tkn pipelinerun describe mypipelinerun -n myspace 6.3.5.5. pipelinerun list List pipeline runs. Example: Display a list of pipeline runs in a namespace USD tkn pipelinerun list -n myspace 6.3.5.6. pipelinerun logs Display the logs of a pipeline run. Example: Display the logs of the mypipelinerun pipeline run with all tasks and steps in a namespace USD tkn pipelinerun logs mypipelinerun -a -n myspace 6.3.6. Task management commands 6.3.6.1. task Manage tasks. Example: Display help USD tkn task -h 6.3.6.2. task delete Delete a task. Example: Delete mytask1 and mytask2 tasks from a namespace USD tkn task delete mytask1 mytask2 -n myspace 6.3.6.3. task describe Describe a task. Example: Describe the mytask task in a namespace USD tkn task describe mytask -n myspace 6.3.6.4. task list List tasks. Example: List all the tasks in a namespace USD tkn task list -n myspace 6.3.6.5. task logs Display task logs. Example: Display logs for the mytaskrun task run of the mytask task USD tkn task logs mytask mytaskrun -n myspace 6.3.6.6. task start Start a task. Example: Start the mytask task in a namespace USD tkn task start mytask -s <ServiceAccountName> -n myspace 6.3.7. Task run commands 6.3.7.1. taskrun Manage task runs. Example: Display help USD tkn taskrun -h 6.3.7.2. taskrun cancel Cancel a task run. Example: Cancel the mytaskrun task run from a namespace USD tkn taskrun cancel mytaskrun -n myspace 6.3.7.3. taskrun delete Delete a TaskRun. Example: Delete the mytaskrun1 and mytaskrun2 task runs from a namespace USD tkn taskrun delete mytaskrun1 mytaskrun2 -n myspace Example: Delete all but the five most recently executed task runs from a namespace USD tkn taskrun delete -n myspace --keep 5 1 1 Replace 5 with the number of most recently executed task runs you want to retain. 6.3.7.4. taskrun describe Describe a task run. Example: Describe the mytaskrun task run in a namespace USD tkn taskrun describe mytaskrun -n myspace 6.3.7.5. taskrun list List task runs. 
Example: List all the task runs in a namespace USD tkn taskrun list -n myspace 6.3.7.6. taskrun logs Display task run logs. Example: Display live logs for the mytaskrun task run in a namespace USD tkn taskrun logs -f mytaskrun -n myspace 6.3.8. Condition management commands 6.3.8.1. condition Manage Conditions. Example: Display help USD tkn condition --help 6.3.8.2. condition delete Delete a Condition. Example: Delete the mycondition1 Condition from a namespace USD tkn condition delete mycondition1 -n myspace 6.3.8.3. condition describe Describe a Condition. Example: Describe the mycondition1 Condition in a namespace USD tkn condition describe mycondition1 -n myspace 6.3.8.4. condition list List Conditions. Example: List Conditions in a namespace USD tkn condition list -n myspace 6.3.9. Pipeline Resource management commands 6.3.9.1. resource Manage Pipeline Resources. Example: Display help USD tkn resource -h 6.3.9.2. resource create Create a Pipeline Resource. Example: Create a Pipeline Resource in a namespace USD tkn resource create -n myspace This is an interactive command that asks for input on the name of the Resource, type of the Resource, and the values based on the type of the Resource. 6.3.9.3. resource delete Delete a Pipeline Resource. Example: Delete the myresource Pipeline Resource from a namespace USD tkn resource delete myresource -n myspace 6.3.9.4. resource describe Describe a Pipeline Resource. Example: Describe the myresource Pipeline Resource USD tkn resource describe myresource -n myspace 6.3.9.5. resource list List Pipeline Resources. Example: List all Pipeline Resources in a namespace USD tkn resource list -n myspace 6.3.10. ClusterTask management commands Important In Red Hat OpenShift Pipelines 1.10, ClusterTask functionality of the tkn command line utility is deprecated and is planned to be removed in a future release. 6.3.10.1. clustertask Manage ClusterTasks. Example: Display help USD tkn clustertask --help 6.3.10.2. clustertask delete Delete a ClusterTask resource in a cluster. Example: Delete mytask1 and mytask2 ClusterTasks USD tkn clustertask delete mytask1 mytask2 6.3.10.3. clustertask describe Describe a ClusterTask. Example: Describe the mytask ClusterTask USD tkn clustertask describe mytask1 6.3.10.4. clustertask list List ClusterTasks. Example: List ClusterTasks USD tkn clustertask list 6.3.10.5. clustertask start Start ClusterTasks. Example: Start the mytask ClusterTask USD tkn clustertask start mytask 6.3.11. Trigger management commands 6.3.11.1. eventlistener Manage EventListeners. Example: Display help USD tkn eventlistener -h 6.3.11.2. eventlistener delete Delete an EventListener. Example: Delete mylistener1 and mylistener2 EventListeners in a namespace USD tkn eventlistener delete mylistener1 mylistener2 -n myspace 6.3.11.3. eventlistener describe Describe an EventListener. Example: Describe the mylistener EventListener in a namespace USD tkn eventlistener describe mylistener -n myspace 6.3.11.4. eventlistener list List EventListeners. Example: List all the EventListeners in a namespace USD tkn eventlistener list -n myspace 6.3.11.5. eventlistener logs Display logs of an EventListener. Example: Display the logs of the mylistener EventListener in a namespace USD tkn eventlistener logs mylistener -n myspace 6.3.11.6. triggerbinding Manage TriggerBindings. Example: Display TriggerBindings help USD tkn triggerbinding -h 6.3.11.7. triggerbinding delete Delete a TriggerBinding. 
Example: Delete mybinding1 and mybinding2 TriggerBindings in a namespace USD tkn triggerbinding delete mybinding1 mybinding2 -n myspace 6.3.11.8. triggerbinding describe Describe a TriggerBinding. Example: Describe the mybinding TriggerBinding in a namespace USD tkn triggerbinding describe mybinding -n myspace 6.3.11.9. triggerbinding list List TriggerBindings. Example: List all the TriggerBindings in a namespace USD tkn triggerbinding list -n myspace 6.3.11.10. triggertemplate Manage TriggerTemplates. Example: Display TriggerTemplate help USD tkn triggertemplate -h 6.3.11.11. triggertemplate delete Delete a TriggerTemplate. Example: Delete mytemplate1 and mytemplate2 TriggerTemplates in a namespace USD tkn triggertemplate delete mytemplate1 mytemplate2 -n `myspace` 6.3.11.12. triggertemplate describe Describe a TriggerTemplate. Example: Describe the mytemplate TriggerTemplate in a namespace USD tkn triggertemplate describe mytemplate -n `myspace` 6.3.11.13. triggertemplate list List TriggerTemplates. Example: List all the TriggerTemplates in a namespace USD tkn triggertemplate list -n myspace 6.3.11.14. clustertriggerbinding Manage ClusterTriggerBindings. Example: Display ClusterTriggerBindings help USD tkn clustertriggerbinding -h 6.3.11.15. clustertriggerbinding delete Delete a ClusterTriggerBinding. Example: Delete myclusterbinding1 and myclusterbinding2 ClusterTriggerBindings USD tkn clustertriggerbinding delete myclusterbinding1 myclusterbinding2 6.3.11.16. clustertriggerbinding describe Describe a ClusterTriggerBinding. Example: Describe the myclusterbinding ClusterTriggerBinding USD tkn clustertriggerbinding describe myclusterbinding 6.3.11.17. clustertriggerbinding list List ClusterTriggerBindings. Example: List all ClusterTriggerBindings USD tkn clustertriggerbinding list 6.3.12. Hub interaction commands Interact with Tekton Hub for resources such as tasks and pipelines. 6.3.12.1. hub Interact with hub. Example: Display help USD tkn hub -h Example: Interact with a hub API server USD tkn hub --api-server https://api.hub.tekton.dev Note For each example, to get the corresponding sub-commands and flags, run tkn hub <command> --help . 6.3.12.2. hub downgrade Downgrade an installed resource. Example: Downgrade the mytask task in the mynamespace namespace to its older version USD tkn hub downgrade task mytask --to version -n mynamespace 6.3.12.3. hub get Get a resource manifest by its name, kind, catalog, and version. Example: Get the manifest for a specific version of the myresource pipeline or task from the tekton catalog USD tkn hub get [pipeline | task] myresource --from tekton --version version 6.3.12.4. hub info Display information about a resource by its name, kind, catalog, and version. Example: Display information about a specific version of the mytask task from the tekton catalog USD tkn hub info task mytask --from tekton --version version 6.3.12.5. hub install Install a resource from a catalog by its kind, name, and version. Example: Install a specific version of the mytask task from the tekton catalog in the mynamespace namespace USD tkn hub install task mytask --from tekton --version version -n mynamespace 6.3.12.6. hub reinstall Reinstall a resource by its kind and name. Example: Reinstall a specific version of the mytask task from the tekton catalog in the mynamespace namespace USD tkn hub reinstall task mytask --from tekton --version version -n mynamespace 6.3.12.7. hub search Search a resource by a combination of name, kind, and tags. 
Example: Search a resource with a tag cli USD tkn hub search --tags cli 6.3.12.8. hub upgrade Upgrade an installed resource. Example: Upgrade the installed mytask task in the mynamespace namespace to a new version USD tkn hub upgrade task mytask --to version -n mynamespace
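A short end-to-end sketch that combines several of the commands above; the pipeline name, namespace, and service account are examples rather than resources created by this document:
tkn pipeline start mypipeline -s pipeline -n myspace --showlog
tkn pipelinerun list -n myspace
tkn pipelinerun logs --last -n myspace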
[ "tar xvzf <file>", "echo USDPATH", "subscription-manager register", "subscription-manager refresh", "subscription-manager list --available --matches '*pipelines*'", "subscription-manager attach --pool=<pool_id>", "subscription-manager repos --enable=\"pipelines-1.17-for-rhel-8-x86_64-rpms\"", "subscription-manager repos --enable=\"pipelines-1.17-for-rhel-8-s390x-rpms\"", "subscription-manager repos --enable=\"pipelines-1.17-for-rhel-8-ppc64le-rpms\"", "subscription-manager repos --enable=\"pipelines-1.17-for-rhel-8-aarch64-rpms\"", "yum install openshift-pipelines-client", "tkn version", "C:\\> path", "echo USDPATH", "tkn completion bash > tkn_bash_completion", "sudo cp tkn_bash_completion /etc/bash_completion.d/", "tkn", "tkn completion bash", "tkn version", "tkn pipeline --help", "tkn pipeline delete mypipeline -n myspace", "tkn pipeline describe mypipeline", "tkn pipeline list", "tkn pipeline logs -f mypipeline", "tkn pipeline start mypipeline", "tkn pipelinerun -h", "tkn pipelinerun cancel mypipelinerun -n myspace", "tkn pipelinerun delete mypipelinerun1 mypipelinerun2 -n myspace", "tkn pipelinerun delete -n myspace --keep 5 1", "tkn pipelinerun delete --all", "tkn pipelinerun describe mypipelinerun -n myspace", "tkn pipelinerun list -n myspace", "tkn pipelinerun logs mypipelinerun -a -n myspace", "tkn task -h", "tkn task delete mytask1 mytask2 -n myspace", "tkn task describe mytask -n myspace", "tkn task list -n myspace", "tkn task logs mytask mytaskrun -n myspace", "tkn task start mytask -s <ServiceAccountName> -n myspace", "tkn taskrun -h", "tkn taskrun cancel mytaskrun -n myspace", "tkn taskrun delete mytaskrun1 mytaskrun2 -n myspace", "tkn taskrun delete -n myspace --keep 5 1", "tkn taskrun describe mytaskrun -n myspace", "tkn taskrun list -n myspace", "tkn taskrun logs -f mytaskrun -n myspace", "tkn condition --help", "tkn condition delete mycondition1 -n myspace", "tkn condition describe mycondition1 -n myspace", "tkn condition list -n myspace", "tkn resource -h", "tkn resource create -n myspace", "tkn resource delete myresource -n myspace", "tkn resource describe myresource -n myspace", "tkn resource list -n myspace", "tkn clustertask --help", "tkn clustertask delete mytask1 mytask2", "tkn clustertask describe mytask1", "tkn clustertask list", "tkn clustertask start mytask", "tkn eventlistener -h", "tkn eventlistener delete mylistener1 mylistener2 -n myspace", "tkn eventlistener describe mylistener -n myspace", "tkn eventlistener list -n myspace", "tkn eventlistener logs mylistener -n myspace", "tkn triggerbinding -h", "tkn triggerbinding delete mybinding1 mybinding2 -n myspace", "tkn triggerbinding describe mybinding -n myspace", "tkn triggerbinding list -n myspace", "tkn triggertemplate -h", "tkn triggertemplate delete mytemplate1 mytemplate2 -n `myspace`", "tkn triggertemplate describe mytemplate -n `myspace`", "tkn triggertemplate list -n myspace", "tkn clustertriggerbinding -h", "tkn clustertriggerbinding delete myclusterbinding1 myclusterbinding2", "tkn clustertriggerbinding describe myclusterbinding", "tkn clustertriggerbinding list", "tkn hub -h", "tkn hub --api-server https://api.hub.tekton.dev", "tkn hub downgrade task mytask --to version -n mynamespace", "tkn hub get [pipeline | task] myresource --from tekton --version version", "tkn hub info task mytask --from tekton --version version", "tkn hub install task mytask --from tekton --version version -n mynamespace", "tkn hub reinstall task mytask --from tekton --version version -n mynamespace", "tkn hub search --tags 
cli", "tkn hub upgrade task mytask --to version -n mynamespace" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/cli_tools/pipelines-cli-tkn
Chapter 2. CephFS through NFS installation
Chapter 2. CephFS through NFS installation 2.1. CephFS with NFS-Ganesha deployment A typical Ceph file system (CephFS) through NFS installation in a Red Hat OpenStack Platform (RHOSP) environment includes the following configurations: OpenStack Controller nodes running containerized Ceph metadata server (MDS), Ceph monitor (MON), manila, and NFS-Ganesha services. Some of these services can coexist on the same node or can have one or more dedicated nodes. Ceph storage cluster with containerized object storage daemons (OSDs) running on Ceph storage nodes. An isolated StorageNFS network that provides access from projects to the NFS-Ganesha services for NFS share provisioning. The Shared File Systems service (manila) provides APIs that allow the projects to request file system shares, which are fulfilled by driver modules. The driver for Red Hat CephFS, manila.share.drivers.cephfs.driver.CephFSDriver , means that you can use CephFS as a back end for the Shared File Systems service. RHOSP director configures the driver to deploy the NFS-Ganesha gateway so that the CephFS shares are presented through the NFS 4.1 protocol. Using RHOSP director to deploy the Shared File Systems service with a CephFS back end on the overcloud automatically creates the required storage network defined in the heat template. For more information about network planning, see Overcloud networks in the Director Installation and Usage guide. Although you can manually configure the Shared File Systems service by editing the /etc/manila/manila.conf file on its node, RHOSP director can override any settings in future overcloud updates. The recommended method for configuring a Shared File Systems service back end is through director. Note Currently, you can define only one CephFS back end at a time in director. CephFS through NFS 2.1.1. Requirements for CephFS through NFS CephFS through NFS requires a Red Hat OpenStack Platform (RHOSP) version 13 or later environment, which can be an existing or a new environment. For RHOSP versions 13, 14, and 15, CephFS works with Red Hat Ceph Storage (RHCS) version 3. For RHOSP version 16 or later, CephFS works with Red Hat Ceph Storage (RHCS) version 4.1 or later. For more information, see the Deploying an Overcloud with Containerized Red Hat Ceph Guide . Prerequisites You install the Shared File Systems service on Controller nodes, as is the default behavior. You install the NFS-Ganesha gateway service on the Pacemaker cluster of the Controller nodes. You configure only a single instance of a CephFS back end to use the Shared File Systems service. You can use other non-CephFS back ends with the single CephFS back end. You use RHOSP director to create an extra network (StorageNFS) for the storage traffic. You configure a new RHCS version 4.1 or later cluster at the same time as CephFS through NFS. 2.1.2. File shares File shares are handled differently in the Shared File Systems service (manila), Ceph File System (CephFS), and Ceph through NFS. The Shared File Systems service provides shares. A share is an individual file system namespace and a unit of storage or sharing with a defined size, for example, subdirectories with quotas. Shared file system storage enables multiple clients because the file system is configured before access is requested, unlike block storage, which is configured when it is requested. With CephFS, a share is considered a directory with a defined quota and a layout that points to a particular storage pool or namespace.
CephFS quotas limit the size of a directory to the size of the share that the Shared File Systems service creates. Access to Ceph shares is determined by MDS authentication capabilities. With CephFS through NFS, file shares are provisioned and accessed through the NFS protocol. The NFS protocol also handles security. 2.1.3. Isolated network used by CephFS through NFS CephFS through NFS deployments use an extra isolated network, StorageNFS. This network is deployed so users can mount shares over NFS on that network without accessing the Storage or Storage Management networks, which are reserved for infrastructure traffic. For more information about isolating networks, see Basic network isolation in the Advanced Overcloud Customization guide. 2.2. Installing Red Hat OpenStack Platform with CephFS through NFS and a custom network_data file To install CephFS through NFS, complete the following procedures: Install the ceph-ansible package. See Section 2.2.1, "Installing the ceph-ansible package" Prepare the overcloud container images with the openstack overcloud image prepare command. See Section 2.2.2, "Preparing overcloud container images" Generate the custom roles file, roles_data.yaml , and network_data.yaml file. See Section 2.2.2.1, "Generating the custom roles file" Deploy Ceph, Shared File Systems service (manila), and CephFS using the openstack overcloud deploy command with custom roles and environments. See Section 2.2.3, "Deploying the updated environment" Configure the isolated StorageNFS network and create the default share type. See Section 2.2.4, "Completing post-deployment configuration" Examples use the standard stack user in the Red Hat OpenStack Platform (RHOSP) environment. Perform tasks as part of a RHOSP installation or environment update. 2.2.1. Installing the ceph-ansible package Install the ceph-ansible package on an undercloud node to deploy containerized Ceph. Procedure Log in to an undercloud node as the stack user. Install the ceph-ansible package: 2.2.2. Preparing overcloud container images Because all services are containerized in Red Hat OpenStack Platform (RHOSP), you must prepare container images for the overcloud by using the openstack overcloud image prepare command. Enter this command with the additional options to add default images for the ceph and manila services to the container registry. Ceph MDS and NFS-Ganesha services use the same Ceph base container image. For more information about container images, see Container Images for Additional Services in the Director Installation and Usage guide. Procedure From the undercloud as the stack user, enter the openstack overcloud image prepare command with -e to include the following environment files: Use grep to verify that the default images for the ceph and manila services are available in the containers-default-parameters.yaml file. 2.2.2.1. Generating the custom roles file The ControllerStorageNFS custom role configures the isolated StorageNFS network. This role is similar to the default Controller.yaml role file, with the addition of the StorageNFS network and the CephNfs service, indicated by the OS::TripleO::Services::CephNfs service entry. For more information about the openstack overcloud roles generate command, see Roles in the Advanced Overcloud Customization guide. The openstack overcloud roles generate command creates a custom roles_data.yaml file that includes the services specified after -o .
In the following example, the roles_data.yaml file that is created has the services for ControllerStorageNfs , Compute , and CephStorage . Note If you have an existing roles_data.yaml file, modify it to add the ControllerStorageNfs , Compute , and CephStorage services to the configuration file. For more information, see Roles in the Advanced Overcloud Customization guide. Procedure Log in to an undercloud node as the stack user. Use the openstack overcloud roles generate command to create the roles_data.yaml file: 2.2.3. Deploying the updated environment When you are ready to deploy your environment, use the openstack overcloud deploy command with the custom environments and roles required to run CephFS with NFS-Ganesha. The overcloud deploy command has the following options in addition to other required options:
Action: Add the updated default containers from the overcloud container image prepare command. Option: -e /home/stack/containers-default-parameters.yaml Additional information: Section 2.2.2, "Preparing overcloud container images"
Action: Add the extra StorageNFS network with network_data_ganesha.yaml . Option: -n /usr/share/openstack-tripleo-heat-templates/network_data_ganesha.yaml Additional information: Section 2.2.3.1, "StorageNFS and network_data_ganesha.yaml file"
Action: Add the custom roles defined in the roles_data.yaml file from the preceding section. Option: -r /home/stack/roles_data.yaml Additional information: Section 2.2.2.1, "Generating the custom roles file"
Action: Deploy the Ceph daemons with ceph-ansible.yaml . Option: -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml Additional information: Initiating Overcloud Deployment in the Deploying an Overcloud with Containerized Red Hat Ceph guide
Action: Deploy the Ceph metadata server with ceph-mds.yaml . Option: -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-mds.yaml Additional information: Initiating Overcloud Deployment in the Deploying an Overcloud with Containerized Red Hat Ceph guide
Action: Deploy the manila service with the CephFS through NFS back end and configure NFS-Ganesha with director. Option: -e /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsganesha-config.yaml Additional information: Section 2.2.3.2, " manila-cephfsganesha-config.yaml "
The following example shows an openstack overcloud deploy command with options to deploy CephFS through NFS-Ganesha, the Ceph cluster, the Ceph MDS, and the isolated StorageNFS network: For more information about the openstack overcloud deploy command, see Deployment command in the Director Installation and Usage guide. 2.2.3.1. StorageNFS and network_data_ganesha.yaml file Use composable networks to define custom networks and assign them to any role. Instead of using the standard network_data.yaml file, you can configure the StorageNFS composable network with the network_data_ganesha.yaml file. Both of these files are available in the /usr/share/openstack-tripleo-heat-templates directory. The network_data_ganesha.yaml file contains an additional section that defines the isolated StorageNFS network. Although the default settings work for most installations, you must edit the YAML file to add your network settings, including the VLAN ID, subnet, and other settings. For more information about composable networks, see Using Composable Networks in the Advanced Overcloud Customization guide. 2.2.3.2. manila-cephfsganesha-config.yaml The integrated environment file for defining a CephFS back end is located in the following path of an undercloud node: The manila-cephfsganesha-config.yaml environment file contains settings relevant to the deployment of the Shared File Systems service.
The back end default settings work for most environments. The following example shows the default values that director uses during deployment of the Shared File Systems service: The parameter_defaults header signifies the start of the configuration. In this section, you can edit settings to override default values set in resource_registry . This includes values set by OS::Tripleo::Services::ManilaBackendCephFs , which sets defaults for a CephFS back end. 1 ManilaCephFSBackendName sets the name of the manila configuration of your CephFS backend. In this case, the default back end name is cephfs . 2 ManilaCephFSDriverHandlesShareServers controls the lifecycle of the share server. When set to false , the driver does not handle the lifecycle. This is the only supported option. 3 ManilaCephFSCephFSAuthId defines the Ceph auth ID that the director creates for the manila service to access the Ceph cluster. 4 ManilaCephFSCephFSEnableSnapshots controls snapshot activation. The false value indicates that snapshots are not enabled. This feature is currently not supported. For more information about environment files, refer to the Environment Files section in the Director Installation and Usage Guide . 2.2.4. Completing post-deployment configuration You must complete two post-deployment configuration tasks before you create NFS shares, grant user access, and mount NFS shares. Map the neutron StorageNFS network to the isolated data center Storage NFS network. See Section 2.2.4.1, "Configuring the isolated network" Create the default share type. See Section 2.2.4.3, "Configuring a default share type" 2.2.4.1. Configuring the isolated network Map the new isolated StorageNFS network to a neutron-shared provider network. The Compute VMs attach to this neutron network to access share export locations provided by the NFS-Ganesha gateway. For more information about network security with the Shared File Systems service, see Hardening the Shared File System Service in the Security and Hardening Guide . The openstack network create command defines the configuration for the StorageNFS neutron network. You can enter this command with the following options: For --provider-network-type , use the value vlan . For --provider-physical-network , use the default value datacentre , unless you set another tag for the br-isolated bridge through NeutronBridgeMappings in your tripleo-heat-templates. For --provider-segment , use the VLAN value set for the StorageNFS isolated network in the heat template, /usr/share/openstack-tripleo-heat-templates/network_data_ganesha.yaml . This value is 70, unless the deployer modified the isolated network definitions. Procedure On an undercloud node as the stack user, enter the following command: On an undercloud node, enter the openstack network create command to create the StorageNFS network: 2.2.4.2. Configuring the shared provider StorageNFS network Create a corresponding StorageNFSSubnet on the neutron-shared provider network. Ensure that the subnet is the same as the storage_nfs network definition in the network_data.yml file and ensure that the allocation range for the StorageNFS subnet and the corresponding undercloud subnet do not overlap. No gateway is required because the StorageNFS subnet is dedicated to serving NFS shares. Prerequisites The start and ending IP range for the allocation pool. The subnet IP range. 2.2.4.2.1. Configuring the shared provider StorageNFS IPv4 network Procedure Log in to an overcloud node. Source your overcloud credentials. 
Use the example command to provision the network and make the following updates: Replace the start=172.16.4.150,end=172.16.4.250 IP values with the IP values for your network. Replace the 172.16.4.0/24 subnet range with the subnet range for your network. 2.2.4.2.2. Configuring the shared provider StorageNFS IPv6 network This feature is available in this release as a Technology Preview , and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details . Procedure Log in to an overcloud node. Use the sample command to provision the network, updating values as needed. Replace the fd00:fd00:fd00:7000::/64 subnet range with the subnet range for your network. 2.2.4.3. Configuring a default share type You can use the Shared File Systems service to define share types that you can use to create shares with specific settings. Share types work like Block Storage volume types. Each type has associated settings, for example, extra specifications. When you invoke the type during share creation the settings apply to the shared file system. Red Hat OpenStack Platform (RHOSP) director expects a default share type. You must create the default share type before you open the cloud for users to access. For CephFS with NFS, use the manila type-create command: For information about share types, see Creating and managing shares in the Storage Guide .
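After the default share type exists, a minimal sketch of exercising the back end with the manila client follows; the share name and size are arbitrary examples:
manila create --name share-01 nfs 10
manila list
manila share-export-location-list share-01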
[ "[stack@undercloud-0 ~]USD sudo dnf install -y ceph-ansible [stack@undercloud-0 ~]USD sudo dnf list ceph-ansible Installed Packages ceph-ansible.noarch 3.1.0-0.1.el7", "openstack overcloud container image prepare -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/manila.yaml", "[stack@undercloud-0 ~]USD grep -E 'ceph|manila' composable_roles/docker-images.yaml DockerCephDaemonImage: 192.168.24.1:8787/rhceph-beta/rhceph-4-rhel8:4-12 DockerManilaApiImage: 192.168.24.1:8787/rhosp-rhel8/openstack-manila-api:2019-01-16 DockerManilaConfigImage: 192.168.24.1:8787/rhosp-rhel8/openstack-manila-api:2019-01-16 DockerManilaSchedulerImage: 192.168.24.1:8787/rhosp-rhel8/openstack-manila-scheduler:2019-01-16 DockerManilaShareImage: 192.168.24.1:8787/rhosp-rhel8/openstack-manila-share:2019-01-16", "[stack@undercloud ~]USD cd /usr/share/openstack-tripleo-heat-templates/roles [stack@undercloud roles]USD diff Controller.yaml ControllerStorageNfs.yaml 16a17 > - StorageNFS 50a45 > - OS::TripleO::Services::CephNfs", "[stack@undercloud ~]USD openstack overcloud roles generate --roles-path /usr/share/openstack-tripleo-heat-templates/roles -o /home/stack/roles_data.yaml ControllerStorageNfs Compute CephStorage", "[stack@undercloud ~]USD openstack overcloud deploy --templates /usr/share/openstack-tripleo-heat-templates -n /usr/share/openstack-tripleo-heat-templates/network_data_ganesha.yaml -r /home/stack/roles_data.yaml -e /home/stack/containers-default-parameters.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml -e /home/stack/network-environment.yaml -e/usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-mds.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsganesha-config.yaml", "name: StorageNFS enabled: true vip: true name_lower: storage_nfs vlan: 70 ip_subnet: '172.16.4.0/24' allocation_pools: [{'start': '172.16.4.4', 'end': '172.16.4.149'}] ipv6_subnet: 'fd00:fd00:fd00:7000::/64' ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:7000::10', 'end': 'fd00:fd00:fd00:7000:ffff:ffff:ffff:fffe'}]", "/usr/share/openstack-tripleo-heat-templates/environments/", "[stack@undercloud ~]USD cat /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsganesha-config.yaml A Heat environment file which can be used to enable a a Manila CephFS-NFS driver backend. 
resource_registry: OS::TripleO::Services::ManilaApi: ../deployment/manila/manila-api-container-puppet.yaml OS::TripleO::Services::ManilaScheduler: ../deployment/manila/manila-scheduler-container-puppet.yaml # Only manila-share is pacemaker managed: OS::TripleO::Services::ManilaShare: ../deployment/manila/manila-share-pacemaker-puppet.yaml OS::TripleO::Services::ManilaBackendCephFs: ../deployment/manila/manila-backend-cephfs.yaml # ceph-nfs (ganesha) service is installed and configured by ceph-ansible # but it's still managed by pacemaker OS::TripleO::Services::CephNfs: ../deployment/ceph-ansible/ceph-nfs.yaml parameter_defaults: ManilaCephFSBackendName: cephfs 1 ManilaCephFSDriverHandlesShareServers: false 2 ManilaCephFSCephFSAuthId: 'manila' 3 ManilaCephFSCephFSEnableSnapshots: false 4 # manila cephfs driver supports either native cephfs backend - 'CEPHFS' # (users mount shares directly from ceph cluster), or nfs-ganesha backend - # 'NFS' (users mount shares through nfs-ganesha server) ManilaCephFSCephFSProtocolHelperType: 'NFS'", "[stack@undercloud ~]USD source ~/overcloudrc", "(overcloud) [stack@undercloud-0 ~]USD openstack network create StorageNFS --share --provider-network-type vlan --provider-physical-network datacentre --provider-segment 70", "[stack@undercloud-0 ~]USD openstack subnet create --allocation-pool start=172.16.4.150,end=172.16.4.250 --dhcp --network StorageNFS --subnet-range 172.16.4.0/24 --gateway none StorageNFSSubnet", "[stack@undercloud-0 ~]USD openstack subnet create --ip-version 6 --dhcp --network StorageNFS --subnet-range fd00:fd00:fd00:7000::/64 --gateway none --ipv6-ra-mode dhcpv6-stateful --ipv6-address-mode dhcpv6-stateful StorageNFSSubnet -f yaml", "manila type-create default false" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/deploying_the_shared_file_systems_service_with_cephfs_through_nfs/assembly_cephfs-install
Chapter 1. Updating clusters overview
Chapter 1. Updating clusters overview You can update an OpenShift Container Platform 4 cluster with a single operation by using the web console or the OpenShift CLI ( oc ). 1.1. Understanding OpenShift Container Platform updates About the OpenShift Update Service : For clusters with internet access, Red Hat provides over-the-air updates by using an OpenShift Container Platform update service as a hosted service located behind public APIs. 1.2. Understanding update channels and releases Update channels and releases : With update channels, you can choose an update strategy. Update channels are specific to a minor version of OpenShift Container Platform. Update channels only control release selection and do not impact the version of the cluster that you install. The openshift-install binary file for a specific version of the OpenShift Container Platform always installs that minor version. For more information, see the following: Upgrading version paths Understanding fast and stable channel use and strategies Understanding restricted network clusters Switching between channels Understanding conditional updates 1.3. Understanding cluster Operator condition types The status of cluster Operators includes their condition type, which informs you of the current state of your Operator's health. The following definitions cover a list of some common ClusterOperator condition types. Operators that have additional condition types and use Operator-specific language have been omitted. The Cluster Version Operator (CVO) is responsible for collecting the status conditions from cluster Operators so that cluster administrators can better understand the state of the OpenShift Container Platform cluster. Available: The condition type Available indicates that an Operator is functional and available in the cluster. If the status is False , at least one part of the operand is non-functional and the condition requires an administrator to intervene. Progressing: The condition type Progressing indicates that an Operator is actively rolling out new code, propagating configuration changes, or otherwise moving from one steady state to another. Operators do not report the condition type Progressing as True when they are reconciling a known state. If the observed cluster state has changed and the Operator is reacting to it, then the status reports back as True , since it is moving from one steady state to another. Degraded: The condition type Degraded indicates that an Operator has a current state that does not match its required state over a period of time. The period of time can vary by component, but a Degraded status represents persistent observation of an Operator's condition. As a result, an Operator does not fluctuate in and out of the Degraded state. There might be a different condition type if the transition from one state to another does not persist over a long enough period to report Degraded . An Operator does not report Degraded during the course of a normal update. An Operator may report Degraded in response to a persistent infrastructure failure that requires eventual administrator intervention. Note This condition type is only an indication that something may need investigation and adjustment. As long as the Operator is available, the Degraded condition does not cause user workload failure or application downtime. Upgradeable: The condition type Upgradeable indicates whether the Operator is safe to update based on the current cluster state. 
The message field contains a human-readable description of what the administrator needs to do for the cluster to successfully update. The CVO allows updates when this condition is True , Unknown or missing. When the Upgradeable status is False , only minor updates are impacted, and the CVO prevents the cluster from performing impacted updates unless forced. 1.4. Understanding cluster version condition types The Cluster Version Operator (CVO) monitors cluster Operators and other components, and is responsible for collecting the status of both the cluster version and its Operators. This status includes the condition type, which informs you of the health and current state of the OpenShift Container Platform cluster. In addition to Available , Progressing , and Upgradeable , there are condition types that affect cluster versions and Operators. Failing: The cluster version condition type Failing indicates that a cluster cannot reach its desired state, is unhealthy, and requires an administrator to intervene. Invalid: The cluster version condition type Invalid indicates that the cluster version has an error that prevents the server from taking action. The CVO only reconciles the current state as long as this condition is set. RetrievedUpdates: The cluster version condition type RetrievedUpdates indicates whether or not available updates have been retrieved from the upstream update server. The condition is Unknown before retrieval, False if the updates either recently failed or could not be retrieved, or True if the availableUpdates field is both recent and accurate. ReleaseAccepted: The cluster version condition type ReleaseAccepted with a True status indicates that the requested release payload was successfully loaded without failure during image verification and precondition checking. ImplicitlyEnabledCapabilities: The cluster version condition type ImplicitlyEnabledCapabilities with a True status indicates that there are enabled capabilities that the user is not currently requesting through spec.capabilities . The CVO does not support disabling capabilities if any associated resources were previously managed by the CVO. 1.5. Preparing to perform an EUS-to-EUS update Preparing to perform an EUS-to-EUS update : Due to fundamental Kubernetes design, all OpenShift Container Platform updates between minor versions must be serialized. You must update from OpenShift Container Platform 4.9 to 4.10, and then to 4.11. You cannot update from OpenShift Container Platform 4.8 to 4.10 directly. However, if you want to update between two Extended Update Support (EUS) versions, you can do so by incurring only a single reboot of non-control plane hosts. For more information, see the following: Updating EUS-to-EUS 1.6. Updating a cluster using the web console Updating a cluster using the web console : You can update an OpenShift Container Platform cluster by using the web console. The following steps update a cluster within a minor version. You can use the same instructions for updating a cluster between minor versions. Performing a canary rollout update Pausing a MachineHealthCheck resource About updating OpenShift Container Platform on a single-node cluster Updating a cluster by using the web console Changing the update server by using the web console 1.7. Updating a cluster using the CLI Updating a cluster using the CLI : You can update an OpenShift Container Platform cluster within a minor version by using the OpenShift CLI ( oc ). The following steps update a cluster within a minor version. 
You can use the same instructions for updating a cluster between minor versions. Pausing a MachineHealthCheck resource About updating OpenShift Container Platform on a single-node cluster Updating a cluster by using the CLI Changing the update server by using the CLI 1.8. Performing a canary rollout update Performing a canary rollout update : By controlling the rollout of an update to the worker nodes, you can ensure that mission-critical applications stay available during the whole update, even if the update process causes your applications to fail. Depending on your organizational needs, you might want to update a small subset of worker nodes, evaluate cluster and workload health over a period of time, and then update the remaining nodes. This is referred to as a canary update. Alternatively, you might also want to fit worker node updates, which often require a host reboot, into smaller defined maintenance windows when it is not possible to take a large maintenance window to update the entire cluster at one time. You can perform the following procedures: Creating machine configuration pools to perform a canary rollout update Pausing the machine configuration pools Performing the cluster update Unpausing the machine configuration pools Moving a node to the original machine configuration pool 1.9. Updating a cluster that includes RHEL compute machines Updating a cluster that includes RHEL compute machines : If your cluster contains Red Hat Enterprise Linux (RHEL) machines, you must perform additional steps to update those machines. You can perform the following procedures: Updating a cluster by using the web console Optional: Adding hooks to perform Ansible tasks on RHEL machines Updating RHEL compute machines in your cluster 1.10. Updating a cluster in a disconnected environment About cluster updates in a disconnected environment : If your mirror host cannot access both the internet and the cluster, you can mirror the images to a file system that is disconnected from that environment. You can then bring that host or removable media across that gap. If the local container registry and the cluster are connected to the mirror host of a registry, you can directly push the release images to the local registry. Preparing your mirror host Configuring credentials that allow images to be mirrored Mirroring the OpenShift Container Platform image repository Updating the disconnected cluster Configuring image registry repository mirroring Widening the scope of the mirror image catalog to reduce the frequency of cluster node reboots Installing the OpenShift Update Service Operator Creating an OpenShift Update Service application Deleting an OpenShift Update Service application Uninstalling the OpenShift Update Service Operator 1.11. Updating hardware on nodes running in vSphere Updating hardware on vSphere : You must ensure that your nodes running in vSphere are running on the hardware version supported by OpenShift Container Platform. Currently, hardware version 15 or later is supported for vSphere virtual machines in a cluster. For more information, see the following: Updating virtual hardware on vSphere Scheduling an update for virtual hardware on vSphere 1.12. Updating a cluster that includes the Special Resource Operator Updating a cluster that includes the Special Resource Operator : When updating a cluster that includes the Special Resource Operator (SRO), it is important to consider whether the new kernel module version is compatible with the kernel modules currently loaded by the SRO.
You can run a preflight check to confirm if the SRO will be able to upgrade the kernel modules. Important Using hardware version 13 for your cluster nodes running on vSphere is now deprecated. This version is still fully supported, but support will be removed in a future version of OpenShift Container Platform. Hardware version 15 is now the default for vSphere virtual machines in OpenShift Container Platform.
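For reference, the condition types described above can be inspected from the command line. The following commands are a minimal sketch and assume a cluster-admin context; the exact output columns can vary by release:
$ oc get clusteroperators   # AVAILABLE, PROGRESSING, and DEGRADED columns summarize the cluster Operator conditions
$ oc get clusterversion version -o jsonpath='{.status.conditions}'   # cluster version conditions such as RetrievedUpdates and ReleaseAccepted
$ oc adm upgrade   # shows the current channel and the updates available to the cluster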
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/updating_clusters/updating-clusters-overview
probe::socket.aio_write
probe::socket.aio_write Name probe::socket.aio_write - Message send via sock_aio_write Synopsis Values protocol Protocol value flags Socket flags value name Name of this probe state Socket state value size Message size in bytes type Socket type value family Protocol family value Context The message sender Description Fires at the beginning of sending a message on a socket via the sock_aio_write function
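For example, a minimal SystemTap one-liner can attach to this probe and print the context variables listed above; the output format here is only illustrative:
$ stap -e 'probe socket.aio_write { printf("%s: %d bytes (family=%d, type=%d, protocol=%d)\n", name, size, family, type, protocol) }'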
[ "socket.aio_write" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-socket-aio-write
5.8. Troubleshooting Cross-forest Trusts
5.8. Troubleshooting Cross-forest Trusts This section provides information about possible problems in a cross-forest trust environment and ways to solve them. 5.8.1. Troubleshooting the ipa-extdom Plug-in IdM clients in an IdM domain with a trust to Active Directory (AD) cannot receive information about users and groups from AD directly. Additionally, IdM does not store information about AD users in Directory Server running on IdM masters. Instead, IdM servers use the ipa-extdom plug-in to receive information about AD users and groups and forward it to the requesting client. Setting the Config Timeout of the ipa-extdom Plug-in The ipa-extdom plug-in sends a request to SSSD for the data about AD users. However, not all requested data might already be in the cache of SSSD. In this case, SSSD requests the data from the AD domain controller (DC). This can be time-consuming for certain operations. The config timeout value defines, in milliseconds, how long the ipa-extdom plug-in waits for a reply from SSSD before the plug-in cancels the connection and returns a timeout error to the caller. By default, the config timeout is 10000 milliseconds (10 seconds). If the value is too small, such as 500 milliseconds, SSSD might not have enough time to reply and requests will always return a timeout. If the value is too large, such as 30000 milliseconds (30 seconds), a single request might block the connection to SSSD for this amount of time. Since only one thread can connect to SSSD at a time, all other requests from the plug-in have to wait. If there are many requests sent by IdM clients, they can block all available workers configured for Directory Server and, as a consequence, the server might not be able to reply to any kind of request for some time. Change the config timeout in the following situations: If IdM clients frequently receive timeout errors before their own search timeout is reached when requesting information about AD users and groups, the config timeout value is too small. If the Directory Server on the IdM server is often locked and the pstack utility reports that many or all worker threads are handling ipa-extdom requests at this time, the value is too large. For example, to set the config value to 20000 milliseconds (20 seconds), enter: Setting the Maximum Size of the ipa-extdom Plug-in Buffer Used for NSS Calls The ipa-extdom plug-in uses calls which use the same API as typical name service switch (NSS) calls to request data from SSSD. Those calls use a buffer where SSSD can store the requested data. If the buffer is too small, SSSD returns an ERANGE error and the plug-in retries the request with a larger buffer. The ipaExtdomMaxNssBufSize attribute in the cn=ipa_extdom_extop,cn=plugins,cn=config entry of Directory Server on the IdM master defines the maximum size of the buffer in bytes. By default, the buffer is 134217728 bytes (128 MB). Only increase the value if, for example, a group has so many members that all names do not fit into the buffer and the IPA client cannot resolve the group. For example, to set the buffer to 268435456 bytes (256 MB), enter:
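To verify the values that are currently set, before or after a change, you can read the plug-in configuration entry directly. This is a minimal check run on the IdM server, using the same Directory Manager bind as the examples in this section:
$ ldapsearch -x -D "cn=directory manager" -W -b cn=ipa_extdom_extop,cn=plugins,cn=config ipaExtdomMaxNssTimeout ipaExtdomMaxNssBufSize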
[ "ldapmodify -D \"cn=directory manager\" -W dn: cn=ipa_extdom_extop,cn=plugins,cn=config changetype: modify replace: ipaExtdomMaxNssTimeout ipaExtdomMaxNssTimeout: 20000", "ldapmodify -D \"cn=directory manager\" -W dn: cn=ipa_extdom_extop,cn=plugins,cn=config changetype: modify replace: ipaExtdomMaxNssBufSize ipaExtdomMaxNssBufSize: 268435456" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/windows_integration_guide/troubleshooting-cross-forest-trusts
10.5.15. SuexecUserGroup
10.5.15. SuexecUserGroup The SuexecUserGroup directive, which originates from the mod_suexec module, allows the specification of user and group execution privileges for CGI programs. Non-CGI requests are still processed with the user and group specified in the User and Group directives. Note The SuexecUserGroup directive replaces the Apache HTTP Server 1.3 practice of using the User and Group directives inside VirtualHost sections.
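For example, a minimal virtual host configuration might carry the directive as follows; the user name, group name, and paths are placeholders, not values required by the directive:
<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/example
    SuexecUserGroup webuser webgroup
</VirtualHost>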
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-apache-suexecusergroup
Chapter 7. Advanced VM creation
Chapter 7. Advanced VM creation 7.1. Creating VMs in the web console 7.1.1. Creating virtual machines from Red Hat images overview Red Hat images are golden images . They are published as container disks in a secure registry. The Containerized Data Importer (CDI) polls and imports the container disks into your cluster and stores them in the openshift-virtualization-os-images project as snapshots or persistent volume claims (PVCs). You can optionally use a custom namespace for golden images. Red Hat images are automatically updated. You can disable and re-enable automatic updates for these images. See Managing Red Hat boot source updates . Cluster administrators can enable automatic subscription for Red Hat Enterprise Linux (RHEL) virtual machines in the OpenShift Virtualization web console . You can create virtual machines (VMs) from operating system images provided by Red Hat by using one of the following methods: Creating a VM from a template by using the web console Creating a VM from an instance type by using the web console Creating a VM from a VirtualMachine manifest by using the command line Important Do not create VMs in the default openshift-* namespaces. Instead, create a new namespace or use an existing namespace without the openshift prefix. 7.1.1.1. About golden images A golden image is a preconfigured snapshot of a virtual machine (VM) that you can use as a resource to deploy new VMs. For example, you can use golden images to provision the same system environment consistently and deploy systems more quickly and efficiently. 7.1.1.1.1. How do golden images work? Golden images are created by installing and configuring an operating system and software applications on a reference machine or virtual machine. This includes setting up the system, installing required drivers, applying patches and updates, and configuring specific options and preferences. After the golden image is created, it is saved as a template or image file that can be replicated and deployed across multiple clusters. The golden image can be updated by its maintainer periodically to incorporate necessary software updates and patches, ensuring that the image remains up to date and secure, and newly created VMs are based on this updated image. 7.1.1.1.2. Red Hat implementation of golden images Red Hat publishes golden images as container disks in the registry for versions of Red Hat Enterprise Linux (RHEL). Container disks are virtual machine images that are stored as a container image in a container image registry. Any published image will automatically be made available in connected clusters after the installation of OpenShift Virtualization. After the images are available in a cluster, they are ready to use to create VMs. 7.1.1.2. About VM boot sources Virtual machines (VMs) consist of a VM definition and one or more disks that are backed by data volumes. VM templates enable you to create VMs using predefined specifications. Every template requires a boot source, which is a fully configured disk image including configured drivers. Each template contains a VM definition with a pointer to the boot source. Each boot source has a predefined name and namespace. For some operating systems, a boot source is automatically provided. If it is not provided, then an administrator must prepare a custom boot source. Provided boot sources are updated automatically to the latest version of the operating system. 
For auto-updated boot sources, persistent volume claims (PVCs) and volume snapshots are created with the cluster's default storage class. If you select a different default storage class after configuration, you must delete the existing boot sources in the cluster namespace that are configured with the default storage class. 7.1.1.3. Configuring a custom namespace for golden images The default namespace for golden images is openshift-virtualization-os-images , but you can configure a custom namespace to restrict user access to the default boot sources. 7.1.1.3.1. Configuring a custom namespace for golden images by using the web console You can configure a custom namespace for golden images in your cluster by using the Red Hat OpenShift Service on AWS web console. Procedure In the web console, select Virtualization Overview . Select the Settings tab. On the Cluster tab, select General settings Bootable volumes project . Select a namespace to use for golden images. If you already created a namespace, select it from the Project list. If you did not create a namespace, scroll to the bottom of the list and click Create project . Enter a name for your new namespace in the Name field of the Create project dialog. Click Create . 7.1.1.3.2. Configuring a custom namespace for golden images by using the CLI You can configure a custom namespace for golden images in your cluster by setting the spec.commonBootImageNamespace field in the HyperConverged custom resource (CR). Prerequisites You installed the OpenShift CLI ( oc ). You created a namespace to use for golden images. Procedure Open the HyperConverged CR in your default editor by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Configure the custom namespace by updating the value of the spec.commonBootImageNamespace field: Example configuration file apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: commonBootImageNamespace: <custom_namespace> 1 # ... 1 The namespace to use for golden images. Save your changes and exit the editor. 7.1.2. Creating VMs by importing images from web pages You can create virtual machines (VMs) by importing operating system images from web pages. Important You must install the QEMU guest agent on VMs created from operating system images that are not provided by Red Hat. 7.1.2.1. Creating a VM from an image on a web page by using the web console You can create a virtual machine (VM) by importing an image from a web page by using the Red Hat OpenShift Service on AWS web console. Prerequisites You must have access to the web page that contains the image. Procedure Navigate to Virtualization Catalog in the web console. Click a template tile without an available boot source. Click Customize VirtualMachine . On the Customize template parameters page, expand Storage and select URL (creates PVC) from the Disk source list. Enter the image URL. Example: https://access.redhat.com/downloads/content/69/ver=/rhel---7/7.9/x86_64/product-software Set the disk size. Click . Click Create VirtualMachine . 7.1.2.2. Creating a VM from an image on a web page by using the command line You can create a virtual machine (VM) from an image on a web page by using the command line. When the virtual machine (VM) is created, the data volume with the image is imported into persistent storage. Prerequisites You must have access credentials for the web page that contains the image. 
Procedure Edit the VirtualMachine manifest and save it as a vm-rhel-datavolume.yaml file: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: creationTimestamp: null name: vm-rhel-datavolume 1 labels: kubevirt.io/vm: vm-rhel-datavolume spec: dataVolumeTemplates: - metadata: creationTimestamp: null name: rhel-dv 2 spec: sourceRef: kind: DataSource name: rhel9 namespace: openshift-virtualization-os-images storage: resources: requests: storage: 10Gi 3 instancetype: name: u1.small 4 preference: inferFromVolume: datavolumedisk1 runStrategy: Always template: metadata: creationTimestamp: null labels: kubevirt.io/vm: vm-rhel-datavolume spec: domain: devices: {} resources: {} terminationGracePeriodSeconds: 180 volumes: - dataVolume: name: rhel-dv name: datavolumedisk1 status: {} 1 Specify the name of the VM. 2 Specify the name of the data volume. 3 Specify the size of the storage requested for the data volume. 4 Optional: Specify the instance type to use to control resource sizing of the VM. Create the VM by running the following command: USD oc create -f vm-rhel-datavolume.yaml The oc create command creates the data volume and the VM. The CDI controller creates an underlying PVC with the correct annotation and the import process begins. When the import is complete, the data volume status changes to Succeeded . You can start the VM. Data volume provisioning happens in the background, so there is no need to monitor the process. Verification The importer pod downloads the image from the specified URL and stores it on the provisioned persistent volume. View the status of the importer pod by running the following command: USD oc get pods Monitor the data volume until its status is Succeeded by running the following command: USD oc describe dv rhel-dv 1 1 Specify the data volume name that you defined in the VirtualMachine manifest. Verify that provisioning is complete and that the VM has started by accessing its serial console: USD virtctl console vm-rhel-datavolume 7.1.3. Creating VMs by uploading images You can create virtual machines (VMs) by uploading operating system images from your local machine. You can create a Windows VM by uploading a Windows image to a PVC. Then you clone the PVC when you create the VM. Important You must install the QEMU guest agent on VMs created from operating system images that are not provided by Red Hat. You must also install VirtIO drivers on Windows VMs. 7.1.3.1. Creating a VM from an uploaded image by using the web console You can create a virtual machine (VM) from an uploaded operating system image by using the Red Hat OpenShift Service on AWS web console. Prerequisites You must have an IMG , ISO , or QCOW2 image file. Procedure Navigate to Virtualization Catalog in the web console. Click a template tile without an available boot source. Click Customize VirtualMachine . On the Customize template parameters page, expand Storage and select Upload (Upload a new file to a PVC) from the Disk source list. Browse to the image on your local machine and set the disk size. Click Customize VirtualMachine . Click Create VirtualMachine . 7.1.3.1.1. Generalizing a VM image You can generalize a Red Hat Enterprise Linux (RHEL) image to remove all system-specific configuration data before you use the image to create a golden image, a preconfigured snapshot of a virtual machine (VM). You can use a golden image to deploy new VMs. You can generalize a RHEL VM by using the virtctl , guestfs , and virt-sysprep tools. 
Prerequisites You have a RHEL virtual machine (VM) to use as a base VM. You have installed the OpenShift CLI ( oc ). You have installed the virtctl tool. Procedure Stop the RHEL VM if it is running, by entering the following command: USD virtctl stop <my_vm_name> Optional: Clone the virtual machine to avoid losing the data from your original VM. You can then generalize the cloned VM. Retrieve the dataVolume that stores the root filesystem for the VM by running the following command: USD oc get vm <my_vm_name> -o jsonpath="{.spec.template.spec.volumes}{'\n'}" Example output [{"dataVolume":{"name":"<my_vm_volume>"},"name":"rootdisk"},{"cloudInitNoCloud":{...}] Retrieve the persistent volume claim (PVC) that matches the listed dataVolume by running the following command: USD oc get pvc Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE <my_vm_volume> Bound ... Note If your cluster configuration does not enable you to clone a VM, to avoid losing the data from your original VM, you can clone the VM PVC to a data volume instead. You can then use the cloned PVC to create a golden image. If you are creating a golden image by cloning a PVC, continue with the steps, using the cloned PVC. Deploy a new interactive container with libguestfs-tools and attach the PVC to it by running the following command: USD virtctl guestfs <my-vm-volume> --uid 107 This command opens a shell in which you can run the next command. Remove all configurations specific to your system by running the following command: USD virt-sysprep -a disk.img In the Red Hat OpenShift Service on AWS console, click Virtualization Catalog . Click Add volume . In the Add volume window: From the Source type list, select Use existing Volume . From the Volume project list, select your project. From the Volume name list, select the correct PVC. In the Volume name field, enter a name for the new golden image. From the Preference list, select the RHEL version you are using. From the Default Instance Type list, select the instance type with the correct CPU and memory requirements for the version of RHEL you selected previously. Click Save . The new volume appears in the Select volume to boot from list. This is your new golden image. You can use this volume to create new VMs. Additional resources for generalizing VMs Cloning VMs Cloning a PVC to a data volume 7.1.3.2. Creating a Windows VM You can create a Windows virtual machine (VM) by uploading a Windows image to a persistent volume claim (PVC) and then cloning the PVC when you create a VM by using the Red Hat OpenShift Service on AWS web console. Prerequisites You created a Windows installation DVD or USB with the Windows Media Creation Tool. See Create Windows 10 installation media in the Microsoft documentation. You created an autounattend.xml answer file. See Answer files (unattend.xml) in the Microsoft documentation. Procedure Upload the Windows image as a new PVC: Navigate to Storage PersistentVolumeClaims in the web console. Click Create PersistentVolumeClaim With Data upload form . Browse to the Windows image and select it. Enter the PVC name, select the storage class and size, and then click Upload . The Windows image is uploaded to a PVC. Configure a new VM by cloning the uploaded PVC: Navigate to Virtualization Catalog . Select a Windows template tile and click Customize VirtualMachine . Select Clone (clone PVC) from the Disk source list. Select the PVC project, the Windows image PVC, and the disk size.
Apply the answer file to the VM: Click Customize VirtualMachine parameters . On the Sysprep section of the Scripts tab, click Edit . Browse to the autounattend.xml answer file and click Save . Set the run strategy of the VM: Clear Start this VirtualMachine after creation so that the VM does not start immediately. Click Create VirtualMachine . On the YAML tab, replace running:false with runStrategy: RerunOnFailure and click Save . Click the Options menu and select Start . The VM boots from the sysprep disk containing the autounattend.xml answer file. 7.1.3.2.1. Generalizing a Windows VM image You can generalize a Windows operating system image to remove all system-specific configuration data before you use the image to create a new virtual machine (VM). Before generalizing the VM, you must ensure the sysprep tool cannot detect an answer file after the unattended Windows installation. Prerequisites A running Windows VM with the QEMU guest agent installed. Procedure In the Red Hat OpenShift Service on AWS console, click Virtualization VirtualMachines . Select a Windows VM to open the VirtualMachine details page. Click Configuration Disks . Click the Options menu beside the sysprep disk and select Detach . Click Detach . Rename C:\Windows\Panther\unattend.xml to avoid detection by the sysprep tool. Start the sysprep program by running the following command: %WINDIR%\System32\Sysprep\sysprep.exe /generalize /shutdown /oobe /mode:vm After the sysprep tool completes, the Windows VM shuts down. The disk image of the VM is now available to use as an installation image for Windows VMs. You can now specialize the VM. 7.1.3.2.2. Specializing a Windows VM image Specializing a Windows virtual machine (VM) configures the computer-specific information from a generalized Windows image onto the VM. Prerequisites You must have a generalized Windows disk image. You must create an unattend.xml answer file. See the Microsoft documentation for details. Procedure In the Red Hat OpenShift Service on AWS console, click Virtualization Catalog . Select a Windows template and click Customize VirtualMachine . Select PVC (clone PVC) from the Disk source list. Select the PVC project and PVC name of the generalized Windows image. Click Customize VirtualMachine parameters . Click the Scripts tab. In the Sysprep section, click Edit , browse to the unattend.xml answer file, and click Save . Click Create VirtualMachine . During the initial boot, Windows uses the unattend.xml answer file to specialize the VM. The VM is now ready to use. Additional resources for creating Windows VMs Microsoft, Sysprep (Generalize) a Windows installation Microsoft, generalize Microsoft, specialize 7.1.3.3. Creating a VM from an uploaded image by using the command line You can upload an operating system image by using the virtctl command line tool. You can use an existing data volume or create a new data volume for the image. Prerequisites You must have an ISO , IMG , or QCOW2 operating system image file. For best performance, compress the image file by using the virt-sparsify tool or the xz or gzip utilities. You must have virtctl installed. The client machine must be configured to trust the Red Hat OpenShift Service on AWS router's certificate. Procedure Upload the image by running the virtctl image-upload command: USD virtctl image-upload dv <datavolume_name> \ 1 --size=<datavolume_size> \ 2 --image-path=</path/to/image> \ 3 1 The name of the data volume. 2 The size of the data volume. 
For example: --size=500Mi , --size=1G 3 The file path of the image. Note If you do not want to create a new data volume, omit the --size parameter and include the --no-create flag. When uploading a disk image to a PVC, the PVC size must be larger than the size of the uncompressed virtual disk. To allow insecure server connections when using HTTPS, use the --insecure parameter. When you use the --insecure flag, the authenticity of the upload endpoint is not verified. Optional. To verify that a data volume was created, view all data volumes by running the following command: USD oc get dvs 7.1.4. Cloning VMs You can clone virtual machines (VMs) or create new VMs from snapshots. Important Cloning a VM with a vTPM device attached to it or creating a new VM from its snapshot is not supported. 7.1.4.1. Cloning a VM by using the web console You can clone an existing VM by using the web console. Procedure Navigate to Virtualization VirtualMachines in the web console. Select a VM to open the VirtualMachine details page. Click Actions . Select Clone . On the Clone VirtualMachine page, enter the name of the new VM. (Optional) Select the Start cloned VM checkbox to start the cloned VM. Click Clone . 7.1.4.2. Creating a VM from an existing snapshot by using the web console You can create a new VM by copying an existing snapshot. Procedure Navigate to Virtualization VirtualMachines in the web console. Select a VM to open the VirtualMachine details page. Click the Snapshots tab. Click the Options menu for the snapshot you want to copy. Select Create VirtualMachine . Enter the name of the virtual machine. (Optional) Select the Start this VirtualMachine after creation checkbox to start the new virtual machine. Click Create . 7.1.4.3. Additional resources Creating VMs by cloning PVCs 7.2. Creating VMs using the CLI 7.2.1. Creating virtual machines from the command line You can create virtual machines (VMs) from the command line by editing or creating a VirtualMachine manifest. You can simplify VM configuration by using an instance type in your VM manifest. Note You can also create VMs from instance types by using the web console . 7.2.1.1. Creating manifests by using the virtctl tool You can use the virtctl CLI utility to simplify creating manifests for VMs, VM instance types, and VM preferences. For more information, see VM manifest creation commands . 7.2.1.2. Creating a VM from a VirtualMachine manifest You can create a virtual machine (VM) from a VirtualMachine manifest. Procedure Edit the VirtualMachine manifest for your VM. The following example configures a Red Hat Enterprise Linux (RHEL) VM: Note This example manifest does not configure VM authentication. Example manifest for a RHEL VM apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: rhel-9-minimal spec: dataVolumeTemplates: - metadata: name: rhel-9-minimal-volume spec: sourceRef: kind: DataSource name: rhel9 1 namespace: openshift-virtualization-os-images 2 storage: {} instancetype: name: u1.medium 3 preference: name: rhel.9 4 runStrategy: Always template: spec: domain: devices: {} volumes: - dataVolume: name: rhel-9-minimal-volume name: rootdisk 1 The rhel9 golden image is used to install RHEL 9 as the guest operating system. 2 Golden images are stored in the openshift-virtualization-os-images namespace. 3 The u1.medium instance type requests 1 vCPU and 4Gi memory for the VM. These resource values cannot be overridden within the VM. 
4 The rhel.9 preference specifies additional attributes that support the RHEL 9 guest operating system. Create a virtual machine by using the manifest file: USD oc create -f <vm_manifest_file>.yaml Optional: Start the virtual machine: USD virtctl start <vm_name> -n <namespace> Next steps Configuring SSH access to virtual machines 7.2.2. Creating VMs by using container disks You can create virtual machines (VMs) by using container disks built from operating system images. You can enable auto updates for your container disks. See Managing automatic boot source updates for details. Important If the container disks are large, the I/O traffic might increase and cause worker nodes to be unavailable. You can prune DeploymentConfig objects to resolve this issue. You create a VM from a container disk by performing the following steps: Build an operating system image into a container disk and upload it to your container registry . If your container registry does not have TLS, configure your environment to disable TLS for your registry . Create a VM with the container disk as the disk source by using the web console or the command line . Important You must install the QEMU guest agent on VMs created from operating system images that are not provided by Red Hat. 7.2.2.1. Building and uploading a container disk You can build a virtual machine (VM) image into a container disk and upload it to a registry. The size of a container disk is limited by the maximum layer size of the registry where the container disk is hosted. Note For Red Hat Quay , you can change the maximum layer size by editing the YAML configuration file that is created when Red Hat Quay is first deployed. Prerequisites You must have podman installed. You must have a QCOW2 or RAW image file. Procedure Create a Dockerfile to build the VM image into a container image. The VM image must be owned by QEMU, which has a UID of 107 , and placed in the /disk/ directory inside the container. Permissions for the /disk/ directory must then be set to 0440 . The following example uses the Red Hat Universal Base Image (UBI) to handle these configuration changes in the first stage, and uses the minimal scratch image in the second stage to store the result: USD cat > Dockerfile << EOF FROM registry.access.redhat.com/ubi8/ubi:latest AS builder ADD --chown=107:107 <vm_image>.qcow2 /disk/ 1 RUN chmod 0440 /disk/* FROM scratch COPY --from=builder /disk/* /disk/ EOF 1 Where <vm_image> is the image in either QCOW2 or RAW format. If you use a remote image, replace <vm_image>.qcow2 with the complete URL. Build and tag the container: USD podman build -t <registry>/<container_disk_name>:latest . Push the container image to the registry: USD podman push <registry>/<container_disk_name>:latest 7.2.2.2. Disabling TLS for a container registry You can disable TLS (transport layer security) for one or more container registries by editing the insecureRegistries field of the HyperConverged custom resource. Procedure Open the HyperConverged CR in your default editor by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Add a list of insecure registries to the spec.storageImport.insecureRegistries field.
Example HyperConverged custom resource apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: storageImport: insecureRegistries: 1 - "private-registry-example-1:5000" - "private-registry-example-2:5000" 1 Replace the examples in this list with valid registry hostnames. 7.2.2.3. Creating a VM from a container disk by using the web console You can create a virtual machine (VM) by importing a container disk from a container registry by using the Red Hat OpenShift Service on AWS web console. Prerequisites You must have access to the container registry that contains the container disk. Procedure Navigate to Virtualization Catalog in the web console. Click a template tile without an available boot source. Click Customize VirtualMachine . On the Customize template parameters page, expand Storage and select Registry (creates PVC) from the Disk source list. Enter the image URL. Example: https://access.redhat.com/downloads/content/69/ver=/rhel---7/7.9/x86_64/product-software Enter the container image URL. Example: https://mirror.arizona.edu/fedora/linux/releases/38/Cloud/x86_64/images/Fedora-Cloud-Base-38-1.6.x86_64.qcow2 Set the disk size. Click . Click Create VirtualMachine . 7.2.2.4. Creating a VM from a container disk by using the command line You can create a virtual machine (VM) from a container disk by using the command line. When the virtual machine (VM) is created, the data volume with the container disk is imported into persistent storage. Prerequisites You must have access credentials for the container registry that contains the container disk. Procedure Edit the VirtualMachine manifest and save it as a vm-rhel-datavolume.yaml file: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: creationTimestamp: null name: vm-rhel-datavolume 1 labels: kubevirt.io/vm: vm-rhel-datavolume spec: dataVolumeTemplates: - metadata: creationTimestamp: null name: rhel-dv 2 spec: sourceRef: kind: DataSource name: rhel9 namespace: openshift-virtualization-os-images storage: resources: requests: storage: 10Gi 3 instancetype: name: u1.small 4 preference: inferFromVolume: datavolumedisk1 runStrategy: Always template: metadata: creationTimestamp: null labels: kubevirt.io/vm: vm-rhel-datavolume spec: domain: devices: {} resources: {} terminationGracePeriodSeconds: 180 volumes: - dataVolume: name: rhel-dv name: datavolumedisk1 status: {} 1 Specify the name of the VM. 2 Specify the name of the data volume. 3 Specify the size of the storage requested for the data volume. 4 Optional: Specify the instance type to use to control resource sizing of the VM. Create the VM by running the following command: USD oc create -f vm-rhel-datavolume.yaml The oc create command creates the data volume and the VM. The CDI controller creates an underlying PVC with the correct annotation and the import process begins. When the import is complete, the data volume status changes to Succeeded . You can start the VM. Data volume provisioning happens in the background, so there is no need to monitor the process. Verification The importer pod downloads the container disk from the specified URL and stores it on the provisioned persistent volume. View the status of the importer pod by running the following command: USD oc get pods Monitor the data volume until its status is Succeeded by running the following command: USD oc describe dv rhel-dv 1 1 Specify the data volume name that you defined in the VirtualMachine manifest. 
Verify that provisioning is complete and that the VM has started by accessing its serial console: USD virtctl console vm-rhel-datavolume 7.2.3. Creating VMs by cloning PVCs You can create virtual machines (VMs) by cloning existing persistent volume claims (PVCs) with custom images. You must install the QEMU guest agent on VMs created from operating system images that are not provided by Red Hat. You clone a PVC by creating a data volume that references a source PVC. 7.2.3.1. About cloning When cloning a data volume, the Containerized Data Importer (CDI) chooses one of the following Container Storage Interface (CSI) clone methods: CSI volume cloning Smart cloning Both CSI volume cloning and smart cloning methods are efficient, but they have certain requirements for use. If the requirements are not met, the CDI uses host-assisted cloning. Host-assisted cloning is the slowest and least efficient method of cloning, but it has fewer requirements than either of the other two cloning methods. 7.2.3.1.1. CSI volume cloning Container Storage Interface (CSI) cloning uses CSI driver features to more efficiently clone a source data volume. CSI volume cloning has the following requirements: The CSI driver that backs the storage class of the persistent volume claim (PVC) must support volume cloning. For provisioners not recognized by the CDI, the corresponding storage profile must have the cloneStrategy set to CSI Volume Cloning. The source and target PVCs must have the same storage class and volume mode. If you create the data volume, you must have permission to create the datavolumes/source resource in the source namespace. The source volume must not be in use. 7.2.3.1.2. Smart cloning When a Container Storage Interface (CSI) plugin with snapshot capabilities is available, the Containerized Data Importer (CDI) creates a persistent volume claim (PVC) from a snapshot, which then allows efficient cloning of additional PVCs. Smart cloning has the following requirements: A snapshot class associated with the storage class must exist. The source and target PVCs must have the same storage class and volume mode. If you create the data volume, you must have permission to create the datavolumes/source resource in the source namespace. The source volume must not be in use. 7.2.3.1.3. Host-assisted cloning When the requirements for neither Container Storage Interface (CSI) volume cloning nor smart cloning have been met, host-assisted cloning is used as a fallback method. Host-assisted cloning is less efficient than either of the two other cloning methods. Host-assisted cloning uses a source pod and a target pod to copy data from the source volume to the target volume. The target persistent volume claim (PVC) is annotated with the fallback reason that explains why host-assisted cloning has been used, and an event is created. Example PVC target annotation apiVersion: v1 kind: PersistentVolumeClaim metadata: annotations: cdi.kubevirt.io/cloneFallbackReason: The volume modes of source and target are incompatible cdi.kubevirt.io/clonePhase: Succeeded cdi.kubevirt.io/cloneType: copy Example event NAMESPACE LAST SEEN TYPE REASON OBJECT MESSAGE test-ns 0s Warning IncompatibleVolumeModes persistentvolumeclaim/test-target The volume modes of source and target are incompatible 7.2.3.2. Creating a VM from a PVC by using the web console You can create a virtual machine (VM) by importing a container disk from a container registry by using the Red Hat OpenShift Service on AWS web console. 
You can create a virtual machine (VM) by cloning a persistent volume claim (PVC) by using the Red Hat OpenShift Service on AWS web console. Prerequisites You must have access to the namespace that contains the source PVC. Procedure Navigate to Virtualization Catalog in the web console. Click a template tile without an available boot source. Click Customize VirtualMachine . On the Customize template parameters page, expand Storage and select PVC (clone PVC) from the Disk source list. Select the PVC project and the PVC name. Set the disk size. Click . Click Create VirtualMachine . 7.2.3.3. Creating a VM from a PVC by using the command line You can create a virtual machine (VM) by cloning the persistent volume claim (PVC) of an existing VM by using the command line. You can clone a PVC by using one of the following options: Cloning a PVC to a new data volume. This method creates a data volume whose lifecycle is independent of the original VM. Deleting the original VM does not affect the new data volume or its associated PVC. Cloning a PVC by creating a VirtualMachine manifest with a dataVolumeTemplates stanza. This method creates a data volume whose lifecycle is dependent on the original VM. Deleting the original VM deletes the cloned data volume and its associated PVC. 7.2.3.3.1. Cloning a PVC to a data volume You can clone the persistent volume claim (PVC) of an existing virtual machine (VM) disk to a data volume by using the command line. You create a data volume that references the original source PVC. The lifecycle of the new data volume is independent of the original VM. Deleting the original VM does not affect the new data volume or its associated PVC. Cloning between different volume modes is supported for host-assisted cloning, such as cloning from a block persistent volume (PV) to a file system PV, as long as the source and target PVs belong to the kubevirt content type. Prerequisites The VM with the source PVC must be powered down. If you clone a PVC to a different namespace, you must have permissions to create resources in the target namespace. Additional prerequisites for smart-cloning: Your storage provider must support snapshots. The source and target PVCs must have the same storage provider and volume mode. The value of the driver key of the VolumeSnapshotClass object must match the value of the provisioner key of the StorageClass object as shown in the following example: Example VolumeSnapshotClass object kind: VolumeSnapshotClass apiVersion: snapshot.storage.k8s.io/v1 driver: openshift-storage.rbd.csi.ceph.com # ... Example StorageClass object kind: StorageClass apiVersion: storage.k8s.io/v1 # ... provisioner: openshift-storage.rbd.csi.ceph.com Procedure Create a DataVolume manifest as shown in the following example: apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <datavolume> 1 spec: source: pvc: namespace: "<source_namespace>" 2 name: "<my_vm_disk>" 3 storage: {} 1 Specify the name of the new data volume. 2 Specify the namespace of the source PVC. 3 Specify the name of the source PVC.
Create the data volume by running the following command: USD oc create -f <datavolume>.yaml Note Data volumes prevent a VM from starting before the PVC is prepared. You can create a VM that references the new data volume while the PVC is being cloned. 7.2.3.3.2. Creating a VM from a cloned PVC by using a data volume template You can create a virtual machine (VM) that clones the persistent volume claim (PVC) of an existing VM by using a data volume template. This method creates a data volume whose lifecycle is dependent on the original VM. Deleting the original VM deletes the cloned data volume and its associated PVC. Prerequisites The VM with the source PVC must be powered down. Procedure Create a VirtualMachine manifest as shown in the following example: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm-dv-clone name: vm-dv-clone 1 spec: runStrategy: Halted template: metadata: labels: kubevirt.io/vm: vm-dv-clone spec: domain: devices: disks: - disk: bus: virtio name: root-disk resources: requests: memory: 64M volumes: - dataVolume: name: favorite-clone name: root-disk dataVolumeTemplates: - metadata: name: favorite-clone spec: storage: accessModes: - ReadWriteOnce resources: requests: storage: 2Gi source: pvc: namespace: <source_namespace> 2 name: "<source_pvc>" 3 1 Specify the name of the VM. 2 Specify the namespace of the source PVC. 3 Specify the name of the source PVC. Create the virtual machine with the PVC-cloned data volume: USD oc create -f <vm-clone-datavolumetemplate>.yaml
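As a minimal verification sketch, you can reuse commands shown earlier in this chapter to watch the cloned data volume and then start the VM; the names below match the preceding example manifest and are otherwise placeholders:
$ oc get dv favorite-clone -w   # wait until the data volume reports Succeeded
$ virtctl start vm-dv-clone
$ virtctl console vm-dv-clone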
[ "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: commonBootImageNamespace: <custom_namespace> 1", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: creationTimestamp: null name: vm-rhel-datavolume 1 labels: kubevirt.io/vm: vm-rhel-datavolume spec: dataVolumeTemplates: - metadata: creationTimestamp: null name: rhel-dv 2 spec: sourceRef: kind: DataSource name: rhel9 namespace: openshift-virtualization-os-images storage: resources: requests: storage: 10Gi 3 instancetype: name: u1.small 4 preference: inferFromVolume: datavolumedisk1 runStrategy: Always template: metadata: creationTimestamp: null labels: kubevirt.io/vm: vm-rhel-datavolume spec: domain: devices: {} resources: {} terminationGracePeriodSeconds: 180 volumes: - dataVolume: name: rhel-dv name: datavolumedisk1 status: {}", "oc create -f vm-rhel-datavolume.yaml", "oc get pods", "oc describe dv rhel-dv 1", "virtctl console vm-rhel-datavolume", "virtctl stop <my_vm_name>", "oc get vm <my_vm_name> -o jsonpath=\"{.spec.template.spec.volumes}{'\\n'}\"", "[{\"dataVolume\":{\"name\":\"<my_vm_volume>\"},\"name\":\"rootdisk\"},{\"cloudInitNoCloud\":{...}]", "oc get pvc", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE <my_vm_volume> Bound ...", "virtctl guestfs <my-vm-volume> --uid 107", "virt-sysprep -a disk.img", "%WINDIR%\\System32\\Sysprep\\sysprep.exe /generalize /shutdown /oobe /mode:vm", "virtctl image-upload dv <datavolume_name> \\ 1 --size=<datavolume_size> \\ 2 --image-path=</path/to/image> \\ 3", "oc get dvs", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: rhel-9-minimal spec: dataVolumeTemplates: - metadata: name: rhel-9-minimal-volume spec: sourceRef: kind: DataSource name: rhel9 1 namespace: openshift-virtualization-os-images 2 storage: {} instancetype: name: u1.medium 3 preference: name: rhel.9 4 runStrategy: Always template: spec: domain: devices: {} volumes: - dataVolume: name: rhel-9-minimal-volume name: rootdisk", "oc create -f <vm_manifest_file>.yaml", "virtctl start <vm_name> -n <namespace>", "cat > Dockerfile << EOF FROM registry.access.redhat.com/ubi8/ubi:latest AS builder ADD --chown=107:107 <vm_image>.qcow2 /disk/ 1 RUN chmod 0440 /disk/* FROM scratch COPY --from=builder /disk/* /disk/ EOF", "podman build -t <registry>/<container_disk_name>:latest .", "podman push <registry>/<container_disk_name>:latest", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: storageImport: insecureRegistries: 1 - \"private-registry-example-1:5000\" - \"private-registry-example-2:5000\"", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: creationTimestamp: null name: vm-rhel-datavolume 1 labels: kubevirt.io/vm: vm-rhel-datavolume spec: dataVolumeTemplates: - metadata: creationTimestamp: null name: rhel-dv 2 spec: sourceRef: kind: DataSource name: rhel9 namespace: openshift-virtualization-os-images storage: resources: requests: storage: 10Gi 3 instancetype: name: u1.small 4 preference: inferFromVolume: datavolumedisk1 runStrategy: Always template: metadata: creationTimestamp: null labels: kubevirt.io/vm: vm-rhel-datavolume spec: domain: devices: {} resources: {} terminationGracePeriodSeconds: 180 volumes: - dataVolume: name: rhel-dv name: datavolumedisk1 status: {}", "oc create -f vm-rhel-datavolume.yaml", "oc 
get pods", "oc describe dv rhel-dv 1", "virtctl console vm-rhel-datavolume", "apiVersion: v1 kind: PersistentVolumeClaim metadata: annotations: cdi.kubevirt.io/cloneFallbackReason: The volume modes of source and target are incompatible cdi.kubevirt.io/clonePhase: Succeeded cdi.kubevirt.io/cloneType: copy", "NAMESPACE LAST SEEN TYPE REASON OBJECT MESSAGE test-ns 0s Warning IncompatibleVolumeModes persistentvolumeclaim/test-target The volume modes of source and target are incompatible", "kind: VolumeSnapshotClass apiVersion: snapshot.storage.k8s.io/v1 driver: openshift-storage.rbd.csi.ceph.com", "kind: StorageClass apiVersion: storage.k8s.io/v1 provisioner: openshift-storage.rbd.csi.ceph.com", "apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <datavolume> 1 spec: source: pvc: namespace: \"<source_namespace>\" 2 name: \"<my_vm_disk>\" 3 storage: {}", "oc create -f <datavolume>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm-dv-clone name: vm-dv-clone 1 spec: runStrategy: Halted template: metadata: labels: kubevirt.io/vm: vm-dv-clone spec: domain: devices: disks: - disk: bus: virtio name: root-disk resources: requests: memory: 64M volumes: - dataVolume: name: favorite-clone name: root-disk dataVolumeTemplates: - metadata: name: favorite-clone spec: storage: accessModes: - ReadWriteOnce resources: requests: storage: 2Gi source: pvc: namespace: <source_namespace> 2 name: \"<source_pvc>\" 3", "oc create -f <vm-clone-datavolumetemplate>.yaml" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/virtualization/advanced-vm-creation
24.2. Managing Certificates Issued by External CAs
24.2. Managing Certificates Issued by External CAs 24.2.1. Command Line: Adding and Removing Certificates Issued by External CAs To add a certificate to a user, host, or service: ipa user-add-cert ipa host-add-cert ipa service-add-cert To remove a certificate from a user, host, or service: ipa user-remove-cert ipa host-remove-cert ipa service-remove-cert A certificate issued by an external CA is not revoked after you remove it from IdM. This is because the certificate does not exist in the IdM CA database. You can only revoke these certificates manually from the external CA side. The commands require you to specify the following information: the name of the user, host, or service the Base64-encoded DER certificate To run the commands interactively, execute them without adding any options. To provide the required information directly with the command, use command-line arguments and options: Note Instead of copying and pasting the certificate contents into the command line, you can convert the certificate to the DER format and then re-encode it to base64. For example, to add the user_cert.pem certificate to user : 24.2.2. Web UI: Adding and Removing Certificates Issued by External CAs To add a certificate to a user, host, or service: Open the Identity tab, and select the Users , Hosts , or Services subtab. Click on the name of the user, host, or service to open its configuration page. Click Add next to the Certificates entry. Figure 24.4. Adding a Certificate to a User Account Paste the certificate in Base64 or PEM encoded format into the text field, and click Add . Click Save to store the changes. To remove a certificate from a user, host, or service: Open the Identity tab, and select the Users , Hosts , or Services subtab. Click on the name of the user, host, or service to open its configuration page. Click the Actions menu next to the certificate to delete, and select Delete . Click Save to store the changes.
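The same pattern applies to host and service entries. For example, the following command adds a certificate stored in host_cert.pem to a host entry; the host name and file name are placeholders:
$ ipa host-add-cert client.example.com --certificate="$(openssl x509 -outform der -in host_cert.pem | base64 -w 0)"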
[ "ipa user-add-cert user --certificate= MIQTPrajQAwg", "ipa user-add-cert user --certificate=\"USD(openssl x509 -outform der -in user_cert.pem | base64 -w 0)\"" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/certificates-external-cas