Chapter 11. Enhancing Virtualization with the QEMU Guest Agent and SPICE Agent
Agents in Red Hat Enterprise Linux, such as the QEMU guest agent and the SPICE agent, can be deployed to help the virtualization tools run more optimally on your system. These agents are described in this chapter.

Note: To further optimize and tune host and guest performance, see the Red Hat Enterprise Linux 7 Virtualization Tuning and Optimization Guide.

11.1. QEMU Guest Agent

The QEMU guest agent runs inside the guest and allows the host machine to issue commands to the guest operating system using libvirt, helping with functions such as freezing and thawing filesystems. The guest operating system then responds to those commands asynchronously. The QEMU guest agent package, qemu-guest-agent, is installed by default in Red Hat Enterprise Linux 7. This section covers the libvirt commands and options available to the guest agent.

Important: It is only safe to rely on the QEMU guest agent when it is run by trusted guests. An untrusted guest may maliciously ignore or abuse the guest agent protocol, and although built-in safeguards exist to prevent a denial of service attack on the host, the host requires guest co-operation for operations to run as expected.

Note: The QEMU guest agent can also be used to enable and disable virtual CPUs (vCPUs) while the guest is running, thus adjusting the number of vCPUs without using the hot plug and hot unplug features. For more information, see Section 20.36.6, "Configuring Virtual CPU Count".

11.1.1. Setting up Communication between the QEMU Guest Agent and Host

The host machine communicates with the QEMU guest agent through a VirtIO serial connection between the host and guest machines. A VirtIO serial channel is connected to the host via a character device driver (typically a Unix socket), and the guest listens on this serial channel.

Note: The qemu-guest-agent does not detect whether the host is listening to the VirtIO serial channel. However, because the current use for this channel is to listen for host-to-guest events, the probability of a guest virtual machine running into problems by writing to the channel with no listener is very low. Additionally, the qemu-guest-agent protocol includes synchronization markers that allow the host physical machine to force a guest virtual machine back into sync when issuing a command, and libvirt already uses these markers, so guest virtual machines are able to safely discard any earlier pending undelivered responses.

11.1.1.1. Configuring the QEMU Guest Agent on a Linux Guest

The QEMU guest agent can be configured on a running or shut down virtual machine. If configured on a running guest, the guest starts using the guest agent immediately. If the guest is shut down, the QEMU guest agent is enabled at boot. Either virsh or virt-manager can be used to configure communication between the guest and the QEMU guest agent. The following instructions describe how to configure the QEMU guest agent on a Linux guest.
Procedure 11.1. Setting up communication between the guest agent and host with virsh on a shut down Linux guest

Shut down the virtual machine. Ensure the virtual machine (named rhel7 in this example) is shut down before configuring the QEMU guest agent:

# virsh shutdown rhel7

Add the QEMU guest agent channel to the guest XML configuration. Edit the guest's XML file to add the QEMU guest agent details:

# virsh edit rhel7

Add the following to the guest's XML file and save the changes:

<channel type='unix'>
   <target type='virtio' name='org.qemu.guest_agent.0'/>
</channel>

Start the virtual machine:

# virsh start rhel7

Install the QEMU guest agent on the guest. Install the QEMU guest agent if it is not yet installed in the guest virtual machine:

# yum install qemu-guest-agent

Start the QEMU guest agent in the guest. Start the QEMU guest agent service in the guest:

# systemctl start qemu-guest-agent

Alternatively, the QEMU guest agent can be configured on a running guest with the following steps:

Procedure 11.2. Setting up communication between the guest agent and host on a running Linux guest

Create an XML file for the QEMU guest agent:

# cat agent.xml
<channel type='unix'>
   <target type='virtio' name='org.qemu.guest_agent.0'/>
</channel>

Attach the QEMU guest agent to the virtual machine. Attach the QEMU guest agent to the running virtual machine (named rhel7 in this example) with this command:

# virsh attach-device rhel7 agent.xml

Install the QEMU guest agent on the guest. Install the QEMU guest agent if it is not yet installed in the guest virtual machine:

# yum install qemu-guest-agent

Start the QEMU guest agent in the guest. Start the QEMU guest agent service in the guest:

# systemctl start qemu-guest-agent

Procedure 11.3. Setting up communication between the QEMU guest agent and host with virt-manager

Shut down the virtual machine. Ensure the virtual machine is shut down before configuring the QEMU guest agent. To shut down the virtual machine, select it from the list of virtual machines in Virtual Machine Manager, then click the light switch icon from the menu bar.

Add the QEMU guest agent channel to the guest. Open the virtual machine's hardware details by clicking the lightbulb icon at the top of the guest window. Click the Add Hardware button to open the Add New Virtual Hardware window, and select Channel. Select the QEMU guest agent from the Name drop-down list and click Finish.

Figure 11.1. Selecting the QEMU guest agent channel device

Start the virtual machine. To start the virtual machine, select it from the list of virtual machines in Virtual Machine Manager, then click the start button on the menu bar.

Install the QEMU guest agent on the guest. Open the guest with virt-manager and install the QEMU guest agent if it is not yet installed in the guest virtual machine:

# yum install qemu-guest-agent

Start the QEMU guest agent in the guest. Start the QEMU guest agent service in the guest:

# systemctl start qemu-guest-agent

The QEMU guest agent is now configured on the rhel7 virtual machine.
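Once the channel is attached and the service is running, you can verify end-to-end connectivity from the host. The following check is a minimal sketch: virsh qemu-agent-command sends a raw guest agent command over the channel, and guest-ping simply confirms that the agent answers (the domain name rhel7 matches the example above).

# virsh qemu-agent-command rhel7 '{"execute":"guest-ping"}'
{"return":{}}

An empty return object means the guest agent received and answered the command.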
[ "virsh shutdown rhel7", "virsh edit rhel7", "<channel type='unix'> <target type='virtio' name='org.qemu.guest_agent.0'/> </channel>", "virsh start rhel7", "yum install qemu-guest-agent", "systemctl start qemu-guest-agent", "cat agent.xml <channel type='unix'> <target type='virtio' name='org.qemu.guest_agent.0'/> </channel>", "virsh attach-device rhel7 agent.xml", "yum install qemu-guest-agent", "systemctl start qemu-guest-agent", "yum install qemu-guest-agent", "systemctl start qemu-guest-agent" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/chap-qemu_guest_agent
GitOps
OpenShift Container Platform 4.17. A declarative way to implement continuous deployment for cloud native applications. Red Hat OpenShift Documentation Team.
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/gitops/index
Chapter 4. Installing a cluster with RHEL KVM on IBM Z and IBM(R) LinuxONE
In OpenShift Container Platform version 4.12, you can install a cluster on IBM Z or IBM® LinuxONE infrastructure that you provision.

Note: While this document refers only to IBM Z, all information in it also applies to IBM® LinuxONE.

Important: Additional considerations exist for non-bare metal platforms. Review the information in the guidelines for deploying OpenShift Container Platform on non-tested platforms before you install an OpenShift Container Platform cluster.

4.1. Prerequisites

You reviewed details about the OpenShift Container Platform installation and update processes.
You read the documentation on selecting a cluster installation method and preparing it for users.
Before you begin the installation process, you must clean the installation directory. This ensures that the required installation files are created and updated during the installation process.
You provisioned persistent storage using OpenShift Data Foundation or other supported storage protocols for your cluster. To deploy a private image registry, you must set up persistent storage with ReadWriteMany access.
If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note: Be sure to also review this site list if you are configuring a proxy.
You provisioned a RHEL Kernel Virtual Machine (KVM) system that is hosted on the logical partition (LPAR) and based on RHEL 8.4 or later. See Red Hat Enterprise Linux 8 and 9 Life Cycle.

4.2. Internet access for OpenShift Container Platform

In OpenShift Container Platform 4.12, you require access to the internet to install your cluster. You must have internet access to:

Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
Access Quay.io to obtain the packages that are required to install your cluster.
Obtain the packages that are required to perform cluster updates.

Important: If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.

4.3. Machine requirements for a cluster with user-provisioned infrastructure

For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines:

One or more KVM host machines based on RHEL 8.4 or later.
Each RHEL KVM host machine must have libvirt installed and running. The virtual machines are provisioned under each RHEL KVM host machine.
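Before provisioning guests, it can help to confirm that libvirt is active on each KVM host. The following check is a minimal sketch using standard libvirt tooling; no cluster-specific names are assumed.

# systemctl is-active libvirtd
active
# virsh list --all
 Id   Name   State
--------------------

An empty domain list is expected on a freshly prepared host.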
4.3.1. Required machines

The smallest OpenShift Container Platform clusters require the following hosts:

Table 4.1. Minimum required hosts

One temporary bootstrap machine: The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster.
Three control plane machines: The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane.
At least two compute machines, which are also known as worker machines: The workloads requested by OpenShift Container Platform users run on the compute machines.

Important: To improve high availability of your cluster, distribute the control plane machines over different RHEL instances on at least two physical machines.

The bootstrap, control plane, and compute machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. See Red Hat Enterprise Linux technology capabilities and limits.

4.3.2. Network connectivity requirements

The OpenShift Container Platform installer creates the Ignition files, which are necessary for all the Red Hat Enterprise Linux CoreOS (RHCOS) virtual machines. The automated installation of OpenShift Container Platform is performed by the bootstrap machine. It starts the installation of OpenShift Container Platform on each node, starts the Kubernetes cluster, and then finishes. During this bootstrap, the virtual machine must have an established network connection either through a Dynamic Host Configuration Protocol (DHCP) server or static IP address.

4.3.3. IBM Z network connectivity requirements

To install on IBM Z under RHEL KVM, you need:

A RHEL KVM host configured with an OSA or RoCE network adapter.
Either a RHEL KVM host that is configured to use bridged networking in libvirt, or MacVTap to connect the network to the guests. See Types of virtual network connections.

4.3.4. Host machine resource requirements

The RHEL KVM host in your environment must meet the following requirements to host the virtual machines that you plan for the OpenShift Container Platform environment. See Getting started with virtualization.

You can install OpenShift Container Platform version 4.12 on the following IBM hardware:

IBM z16 (all models), IBM z15 (all models), IBM z14 (all models), IBM z13, and IBM z13s
IBM® LinuxONE Emperor 4, IBM® LinuxONE III (all models), IBM® LinuxONE Emperor II, IBM® LinuxONE Rockhopper II, IBM® LinuxONE Emperor, and IBM® LinuxONE Rockhopper

Note: Support for RHCOS functionality for IBM z13 (all models), IBM® LinuxONE Emperor, and IBM® LinuxONE Rockhopper is deprecated. These hardware models remain fully supported in OpenShift Container Platform 4.12. However, Red Hat recommends that you use later hardware models.

4.3.5. Minimum IBM Z system environment

Hardware requirements

The equivalent of six Integrated Facilities for Linux (IFL), which are SMT2 enabled, for each cluster.
At least one network connection to both connect to the LoadBalancer service and to serve data for traffic outside the cluster.

Note: You can use dedicated or shared IFLs to assign sufficient compute resources. Resource sharing is one of the key strengths of IBM Z. However, you must adjust capacity correctly on each hypervisor layer and ensure sufficient resources for every OpenShift Container Platform cluster.

Important: Since the overall performance of the cluster can be impacted, the LPARs that are used to set up the OpenShift Container Platform clusters must provide sufficient compute capacity. In this context, LPAR weight management, entitlements, and CPU shares on the hypervisor level play an important role.
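Because the sizing above assumes SMT2, it can be worth confirming what the host actually exposes before partitioning capacity. This is a generic sketch using lscpu; the exact field names vary by architecture and distribution.

# lscpu | grep -E 'Thread\(s\) per core|Core\(s\)|CPU\(s\):'

Two threads per core indicates that SMT-2 is in effect, which is what the vCPU figures in the following tables assume.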
Operating system requirements

One LPAR running on RHEL 8.4 or later with KVM, which is managed by libvirt.

On your RHEL KVM host, set up:

Three guest virtual machines for OpenShift Container Platform control plane machines
Two guest virtual machines for OpenShift Container Platform compute machines
One guest virtual machine for the temporary OpenShift Container Platform bootstrap machine

4.3.6. Minimum resource requirements

Each cluster virtual machine must meet the following minimum requirements:

Virtual Machine | Operating System | vCPU [1] | Virtual RAM | Storage | IOPS
Bootstrap       | RHCOS            | 4        | 16 GB       | 100 GB  | N/A
Control plane   | RHCOS            | 4        | 16 GB       | 100 GB  | N/A
Compute         | RHCOS            | 2        | 8 GB        | 100 GB  | N/A

[1] One physical core (IFL) provides two logical cores (threads) when SMT-2 is enabled. The hypervisor can provide two or more vCPUs.

4.3.7. Preferred IBM Z system environment

Hardware requirements

Three LPARs that each have the equivalent of six IFLs, which are SMT2 enabled, for each cluster.
Two network connections to both connect to the LoadBalancer service and to serve data for traffic outside the cluster.

Operating system requirements

For high availability, two or three LPARs running on RHEL 8.4 or later with KVM, which are managed by libvirt.

On your RHEL KVM host, set up:

Three guest virtual machines for OpenShift Container Platform control plane machines, distributed across the RHEL KVM host machines.
At least six guest virtual machines for OpenShift Container Platform compute machines, distributed across the RHEL KVM host machines.
One guest virtual machine for the temporary OpenShift Container Platform bootstrap machine.

To ensure the availability of integral components in an overcommitted environment, increase the priority of the control plane by using cpu_shares. Do the same for infrastructure nodes, if they exist. See schedinfo in IBM Documentation.

4.3.8. Preferred resource requirements

The preferred requirements for each cluster virtual machine are:

Virtual Machine | Operating System | vCPU | Virtual RAM | Storage
Bootstrap       | RHCOS            | 4    | 16 GB       | 120 GB
Control plane   | RHCOS            | 8    | 16 GB       | 120 GB
Compute         | RHCOS            | 6    | 8 GB        | 120 GB

4.3.9. Certificate signing requests management

Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them.
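In practice, approving these CSRs is done with the OpenShift CLI once the cluster is reachable. A minimal sketch, assuming oc is installed and a kubeconfig with cluster-admin rights is in use:

$ oc get csr
$ oc adm certificate approve <csr_name>

To batch-approve all currently pending requests (a judgment call in production, where each request should be verified first):

$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve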
Additional resources

Recommended host practices for IBM Z & IBM® LinuxONE environments

4.3.10. Networking requirements for user-provisioned infrastructure

All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation.

It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines.

Note: If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options.

The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests.

4.3.10.1. Setting the cluster node hostnames through DHCP

On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. A DHCP host reservation that covers both concerns is sketched below.
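The following is a minimal ISC DHCP sketch of such a reservation. The MAC address is a placeholder, and the IP address and names follow the sample zone files used later in this chapter; adapt all of them to your environment.

# /etc/dhcp/dhcpd.conf (excerpt)
subnet 192.168.1.0 netmask 255.255.255.0 {
  option domain-name-servers 192.168.1.5;   # cluster DNS server
  host control-plane0 {
    hardware ethernet 52:54:00:00:00:97;    # placeholder MAC of the guest NIC
    fixed-address 192.168.1.97;             # persistent IP for this node
    option host-name "control-plane0.ocp4.example.com";
  }
}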
4.3.10.2. Network connectivity requirements

You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required.

Important: In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat.

Note: The RHEL KVM host must be configured to use bridged networking in libvirt or MacVTap to connect the network to the virtual machines. The virtual machines must have access to the network, which is attached to the RHEL KVM host. Virtual networks, for example network address translation (NAT), within KVM are not a supported configuration.

Table 4.2. Ports used for all-machine to all-machine communications

Protocol | Port          | Description
ICMP     | N/A           | Network reachability tests
TCP      | 1936          | Metrics
TCP      | 9000 - 9999   | Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099
TCP      | 10250 - 10259 | The default ports that Kubernetes reserves
TCP      | 10256         | openshift-sdn
UDP      | 4789          | VXLAN
UDP      | 6081          | Geneve
UDP      | 9000 - 9999   | Host level services, including the node exporter on ports 9100 - 9101
UDP      | 500           | IPsec IKE packets
UDP      | 4500          | IPsec NAT-T packets
UDP      | 123           | Network Time Protocol (NTP). If an external NTP time server is configured, you must open UDP port 123.
TCP/UDP  | 30000 - 32767 | Kubernetes node port
ESP      | N/A           | IPsec Encapsulating Security Payload (ESP)

Table 4.3. Ports used for all-machine to control plane communications

Protocol | Port | Description
TCP      | 6443 | Kubernetes API

Table 4.4. Ports used for control plane machine to control plane machine communications

Protocol | Port        | Description
TCP      | 2379 - 2380 | etcd server and peer ports

NTP configuration for user-provisioned infrastructure

OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service. If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines reads the information and can sync the clock with the NTP servers.

Additional resources

Configuring chrony time service

4.3.11. User-provisioned DNS requirements

In OpenShift Container Platform deployments, DNS name resolution is required for the following components:

The Kubernetes API
The OpenShift Container Platform application wildcard
The bootstrap, control plane, and compute machines

Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate.

The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>.

Table 4.5. Required DNS records

Kubernetes API: api.<cluster_name>.<base_domain>. — A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster.

api-int.<cluster_name>.<base_domain>. — A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster.

Important: The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods.

Routes: *.apps.<cluster_name>.<base_domain>. — A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster.
For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console.

Bootstrap machine: bootstrap.<cluster_name>.<base_domain>. — A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster.

Control plane machines: <control_plane><n>.<cluster_name>.<base_domain>. — DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster.

Compute machines: <compute><n>.<cluster_name>.<base_domain>. — DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster.

Note: In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration.

Tip: You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps.

4.3.11.1. Example DNS configuration for user-provisioned clusters

This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com.

Example DNS A record configuration for a user-provisioned cluster

The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster.

Example 4.1. Sample DNS zone database

$TTL 1W
@   IN  SOA  ns1.example.com.  root (
      2019070700  ; serial
      3H          ; refresh (3 hours)
      30M         ; retry (30 minutes)
      2W          ; expiry (2 weeks)
      1W )        ; minimum (1 week)
    IN  NS   ns1.example.com.
    IN  MX   10 smtp.example.com.
;
;
ns1.example.com.                  IN  A  192.168.1.5
smtp.example.com.                 IN  A  192.168.1.5
;
helper.example.com.               IN  A  192.168.1.5
helper.ocp4.example.com.          IN  A  192.168.1.5
;
api.ocp4.example.com.             IN  A  192.168.1.5 1
api-int.ocp4.example.com.         IN  A  192.168.1.5 2
;
*.apps.ocp4.example.com.          IN  A  192.168.1.5 3
;
bootstrap.ocp4.example.com.       IN  A  192.168.1.96 4
;
control-plane0.ocp4.example.com.  IN  A  192.168.1.97 5
control-plane1.ocp4.example.com.  IN  A  192.168.1.98 6
control-plane2.ocp4.example.com.  IN  A  192.168.1.99 7
;
compute0.ocp4.example.com.        IN  A  192.168.1.11 8
compute1.ocp4.example.com.        IN  A  192.168.1.7 9
;
;EOF

1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer.
2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications.
3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note: In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation.
4 Provides name resolution for the bootstrap machine.
5 6 7 Provides name resolution for the control plane machines.
8 9 Provides name resolution for the compute machines.
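If you serve these records with BIND, the zone file can be syntax-checked before it is loaded. This is a sketch, assuming the zone data above is saved to the hypothetical path /var/named/example.com.zone; named-checkzone ships with BIND, and the same check works for the reverse zone shown in the next example.

$ named-checkzone example.com /var/named/example.com.zone
zone example.com/IN: loaded serial 2019070700
OK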
Example DNS PTR record configuration for a user-provisioned cluster

The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster.

Example 4.2. Sample DNS zone database for reverse records

$TTL 1W
@   IN  SOA  ns1.example.com.  root (
      2019070700  ; serial
      3H          ; refresh (3 hours)
      30M         ; retry (30 minutes)
      2W          ; expiry (2 weeks)
      1W )        ; minimum (1 week)
    IN  NS   ns1.example.com.
;
5.1.168.192.in-addr.arpa.   IN  PTR  api.ocp4.example.com. 1
5.1.168.192.in-addr.arpa.   IN  PTR  api-int.ocp4.example.com. 2
;
96.1.168.192.in-addr.arpa.  IN  PTR  bootstrap.ocp4.example.com. 3
;
97.1.168.192.in-addr.arpa.  IN  PTR  control-plane0.ocp4.example.com. 4
98.1.168.192.in-addr.arpa.  IN  PTR  control-plane1.ocp4.example.com. 5
99.1.168.192.in-addr.arpa.  IN  PTR  control-plane2.ocp4.example.com. 6
;
11.1.168.192.in-addr.arpa.  IN  PTR  compute0.ocp4.example.com. 7
7.1.168.192.in-addr.arpa.   IN  PTR  compute1.ocp4.example.com. 8
;
;EOF

1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer.
2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications.
3 Provides reverse DNS resolution for the bootstrap machine.
4 5 6 Provides reverse DNS resolution for the control plane machines.
7 8 Provides reverse DNS resolution for the compute machines.

Note: A PTR record is not required for the OpenShift Container Platform application wildcard.

4.3.12. Load balancing requirements for user-provisioned infrastructure

Before you install OpenShift Container Platform, you must provision the API and application ingress load balancing infrastructure. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation.

Note: If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately.

The load balancing infrastructure must meet the following requirements:

API load balancer: Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions:

Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode.
A stateless load balancing algorithm. The options vary based on the load balancer implementation.

Important: Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster.

Configure the following ports on both the front and back of the load balancers:

Table 4.6. API load balancer

Port | Back-end machines (pool members) | Internal | External | Description
6443 | Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. | X | X | Kubernetes API server
22623 | Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. | X | | Machine config server

Note: The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values.

Application Ingress load balancer: Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions:

Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode.
A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform.

Tip: If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption.

Configure the following ports on both the front and back of the load balancers:

Table 4.7. Application Ingress load balancer

Port | Back-end machines (pool members) | Internal | External | Description
443  | The machines that run the Ingress Controller pods, compute, or worker, by default. | X | X | HTTPS traffic
80   | The machines that run the Ingress Controller pods, compute, or worker, by default. | X | X | HTTP traffic

Note: If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes.

4.3.12.1. Example load balancer configuration for user-provisioned clusters

This section provides an example API and application ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation.

Note: If you are using HAProxy as a load balancer and SELinux is set to enforcing, you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1.
Example 4.3. Sample API and application Ingress load balancer configuration

global
  log         127.0.0.1 local2
  pidfile     /var/run/haproxy.pid
  maxconn     4000
  daemon
defaults
  mode                    http
  log                     global
  option                  dontlognull
  option                  http-server-close
  option                  redispatch
  retries                 3
  timeout http-request    10s
  timeout queue           1m
  timeout connect         10s
  timeout client          1m
  timeout server          1m
  timeout http-keep-alive 10s
  timeout check           10s
  maxconn                 3000
listen api-server-6443 1
  bind *:6443
  mode tcp
  option httpchk GET /readyz HTTP/1.0
  option log-health-checks
  balance roundrobin
  server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2
  server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3
  server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3
  server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3
listen machine-config-server-22623 3
  bind *:22623
  mode tcp
  server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4
  server master0 master0.ocp4.example.com:22623 check inter 1s
  server master1 master1.ocp4.example.com:22623 check inter 1s
  server master2 master2.ocp4.example.com:22623 check inter 1s
listen ingress-router-443 5
  bind *:443
  mode tcp
  balance source
  server worker0 worker0.ocp4.example.com:443 check inter 1s
  server worker1 worker1.ocp4.example.com:443 check inter 1s
listen ingress-router-80 6
  bind *:80
  mode tcp
  balance source
  server worker0 worker0.ocp4.example.com:80 check inter 1s
  server worker1 worker1.ocp4.example.com:80 check inter 1s

1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines.
2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete.
3 Port 22623 handles the machine config server traffic and points to the control plane machines.
5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default.
6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default.

Note: If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes.

Tip: If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443, 22623, 443, and 80 by running netstat -nltupe on the HAProxy node.
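Before starting or reloading the service, you can also have HAProxy validate the file itself. A minimal sketch, assuming the configuration above is saved at the standard /etc/haproxy/haproxy.cfg path:

$ haproxy -c -f /etc/haproxy/haproxy.cfg
Configuration file is valid
$ sudo systemctl enable --now haproxy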
4.4. Preparing the user-provisioned infrastructure

Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. This includes configuring IP networking and network connectivity for your cluster nodes, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section.

Prerequisites

You have reviewed the OpenShift Container Platform 4.x Tested Integrations page.
You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section.

Procedure

If you are using DHCP to provide the IP networking configuration to your cluster nodes, configure your DHCP service. Add persistent IP addresses for the nodes to your DHCP server configuration. In your configuration, match the MAC address of the relevant network interface to the intended IP address for each node. When you use DHCP to configure IP addressing for the cluster machines, the machines also obtain the DNS server information through DHCP. Define the persistent DNS server address that is used by the cluster nodes through your DHCP server configuration.

Note: If you are not using a DHCP service, you must provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options.

Define the hostnames of your cluster nodes in your DHCP server configuration. See the Setting the cluster node hostnames through DHCP section for details about hostname considerations.

Note: If you are not using a DHCP service, the cluster nodes obtain their hostname through a reverse DNS lookup.

Choose to perform either a fast track installation of Red Hat Enterprise Linux CoreOS (RHCOS) or a full installation of Red Hat Enterprise Linux CoreOS (RHCOS). For the full installation, you must set up an HTTP or HTTPS server to provide Ignition files and install images to the cluster nodes. For the fast track installation, an HTTP or HTTPS server is not required, but a DHCP server is required. See the sections "Fast-track installation: Creating Red Hat Enterprise Linux CoreOS (RHCOS) machines" and "Full installation: Creating Red Hat Enterprise Linux CoreOS (RHCOS) machines".

Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements.

Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See the Networking requirements for user-provisioned infrastructure section for details about the ports that are required.

Important: By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers.

Set up the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements.

Validate your DNS configuration.
From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components.
From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components.
See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps.

Provision the required API and application ingress load balancing infrastructure. See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements.

Note: Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized.

4.5. Validating DNS resolution for user-provisioned infrastructure

You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure.

Important: The validation steps detailed in this section must succeed before you install your cluster.

Prerequisites

You have configured the required DNS records for your user-provisioned infrastructure.

Procedure

From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components.

Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer:

$ dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1

1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name.

Example output
api.ocp4.example.com.  604800  IN  A  192.168.1.5

Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer:

$ dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>

Example output
api-int.ocp4.example.com.  604800  IN  A  192.168.1.5

Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer:

$ dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>

Example output
random.apps.ocp4.example.com.  604800  IN  A  192.168.1.5

Note: In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation.

You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console:

$ dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>

Example output
console-openshift-console.apps.ocp4.example.com.  604800  IN  A  192.168.1.5

Run a lookup against the bootstrap DNS record name. Check that the result points to the IP address of the bootstrap node:

$ dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>

Example output
bootstrap.ocp4.example.com.  604800  IN  A  192.168.1.96

Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node.
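As a convenience, the per-node lookups can be wrapped in a small shell loop. This is only a sketch: the nameserver address and hostnames follow the sample records in this chapter and must be replaced with your own.

$ for host in api api-int bootstrap control-plane0 control-plane1 control-plane2 compute0 compute1; do \
    dig +noall +answer @192.168.1.5 "${host}.ocp4.example.com"; \
  done

Note that the *.apps wildcard record is not covered by the loop and is best tested separately, as shown above.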
From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components.

Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API:

$ dig +noall +answer @<nameserver_ip> -x 192.168.1.5

Example output
5.1.168.192.in-addr.arpa.  604800  IN  PTR  api-int.ocp4.example.com. 1
5.1.168.192.in-addr.arpa.  604800  IN  PTR  api.ocp4.example.com. 2

1 Provides the record name for the Kubernetes internal API.
2 Provides the record name for the Kubernetes API.

Note: A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer.

Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node:

$ dig +noall +answer @<nameserver_ip> -x 192.168.1.96

Example output
96.1.168.192.in-addr.arpa.  604800  IN  PTR  bootstrap.ocp4.example.com.

Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node.

4.6. Generating a key pair for cluster node SSH access

During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.

After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.

If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.

Important: Do not skip this procedure in production environments, where disaster recovery and debugging is required.

Procedure

If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:

$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1

1 Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.

Note: If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64, ppc64le, and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.

View the public SSH key:

$ cat <path>/<file_name>.pub

For example, run the following to view the ~/.ssh/id_ed25519.pub public key:

$ cat ~/.ssh/id_ed25519.pub

Add the SSH private key identity to the SSH agent for your local user, if it has not already been added.
SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.

Note: On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.

If the ssh-agent process is not already running for your local user, start it as a background task:

$ eval "$(ssh-agent -s)"

Example output
Agent pid 31874

Note: If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.

Add your SSH private key to the ssh-agent:

$ ssh-add <path>/<file_name> 1

1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519.

Example output
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

Next steps

When you install OpenShift Container Platform, provide the SSH public key to the installation program.

4.7. Obtaining the installation program

Before you install OpenShift Container Platform, download the installation file on your provisioning machine.

Prerequisites

You have a machine that runs Linux, for example Red Hat Enterprise Linux 8, with 500 MB of local disk space.

Procedure

Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.

Select your infrastructure provider.

Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files.

Important: The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

Important: Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.

Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

$ tar -xvf openshift-install-linux.tar.gz

Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
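After extracting the installation program, a quick sanity check confirms that the binary runs and matches the release you intend to install. A minimal sketch; the version string shown below is illustrative, not an expected exact output:

$ ./openshift-install version
openshift-install 4.12.0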
4.8. Installing the OpenShift CLI by downloading the binary

You can install the OpenShift CLI (oc) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.

Important: If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.12. Download and install the new version of oc.

Installing the OpenShift CLI on Linux

You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.

Procedure

Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
Select the architecture from the Product Variant drop-down list.
Select the appropriate version from the Version drop-down list.
Click Download Now next to the OpenShift v4.12 Linux Client entry and save the file.
Unpack the archive:

$ tar xvf <file>

Place the oc binary in a directory that is on your PATH. To check your PATH, execute the following command:

$ echo $PATH

Verification

After you install the OpenShift CLI, it is available using the oc command:

$ oc <command>

Installing the OpenShift CLI on Windows

You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.

Procedure

Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
Select the appropriate version from the Version drop-down list.
Click Download Now next to the OpenShift v4.12 Windows Client entry and save the file.
Unzip the archive with a ZIP program.
Move the oc binary to a directory that is on your PATH. To check your PATH, open the command prompt and execute the following command:

C:\> path

Verification

After you install the OpenShift CLI, it is available using the oc command:

C:\> oc <command>

Installing the OpenShift CLI on macOS

You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.

Procedure

Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
Select the appropriate version from the Version drop-down list.
Click Download Now next to the OpenShift v4.12 macOS Client entry and save the file.

Note: For macOS arm64, choose the OpenShift v4.12 macOS arm64 Client entry.

Unpack and unzip the archive.
Move the oc binary to a directory on your PATH. To check your PATH, open a terminal and execute the following command:

$ echo $PATH

Verification

After you install the OpenShift CLI, it is available using the oc command:

$ oc <command>

4.9. Manually creating the installation configuration file

Installing the cluster requires that you manually create the installation configuration file.

Prerequisites

You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery.
You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

Create an installation directory to store your required installation assets in:

$ mkdir <installation_directory>

Important: You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory>.

Note: You must name this configuration file install-config.yaml.

Back up the install-config.yaml file so that you can use it to install multiple clusters.

Important: The install-config.yaml file is consumed during the next step of the installation process. You must back it up now.

Additional resources

Installation configuration parameters for IBM Z
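Because the file is consumed and removed from the installation directory, as noted in the procedure above, a common precaution is to keep a copy outside of it. A minimal sketch with hypothetical paths:

$ cp <installation_directory>/install-config.yaml ~/install-config.yaml.bak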
4.9.1. Sample install-config.yaml file for IBM Z

You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters.

apiVersion: v1
baseDomain: example.com 1
compute: 2
- hyperthreading: Enabled 3
  name: worker
  replicas: 0 4
  architecture: s390x
controlPlane: 5
  hyperthreading: Enabled 6
  name: master
  replicas: 3 7
  architecture: s390x
metadata:
  name: test 8
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14 9
    hostPrefix: 23 10
  networkType: OVNKubernetes 11
  serviceNetwork: 12
  - 172.30.0.0/16
platform:
  none: {} 13
fips: false 14
pullSecret: '{"auths": ...}' 15
sshKey: 'ssh-ed25519 AAAA...' 16

1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name.

2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Only one control plane pool is used.

3 6 Specifies whether to enable or disable simultaneous multithreading (SMT), or hyperthreading. By default, SMT is enabled to increase the performance of the cores in your machines. You can disable it by setting the parameter value to Disabled. If you disable SMT, you must disable it in all cluster machines; this includes both control plane and compute machines.

Note: Simultaneous multithreading (SMT) is enabled by default. If SMT is not available on your OpenShift Container Platform nodes, the hyperthreading parameter has no effect.

Important: If you disable hyperthreading, whether on your OpenShift Container Platform nodes or in the install-config.yaml file, ensure that your capacity planning accounts for the dramatically decreased machine performance.

4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned infrastructure. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. In user-provisioned installations, you must manually deploy the compute machines before you finish installing the cluster.

Note: If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines.

7 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy.

8 The cluster name that you specified in your DNS records.

9 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic.

Note: The Class E CIDR range is reserved for a future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range.

10 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23, then each node is assigned a /23 subnet out of the given cidr, which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic.

11 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN. The default value is OVNKubernetes.

12 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic.
You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 13 You must set the platform to none . You cannot provide additional platform configuration variables for IBM Z infrastructure. Important Clusters that are installed with the platform type none are unable to use some features, such as managing compute machines with the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that would normally support the feature. This parameter cannot be changed after installation. 14 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 , ppc64le , and s390x architectures. 15 The pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 16 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
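As a minimal sketch of the SMT change described in callouts 3 and 6, assuming you want SMT disabled cluster-wide, set hyperthreading to Disabled in both stanzas and leave the other fields as shown in the full sample above:

compute:
- hyperthreading: Disabled
  name: worker
  replicas: 0
  architecture: s390x
controlPlane:
  hyperthreading: Disabled
  name: master
  replicas: 3
  architecture: s390x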
4.9.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: $ ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created.
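After the cluster is up, you can confirm what was applied; as a quick hedged check with a standard command, the spec of the cluster Proxy object should show the httpProxy , httpsProxy , and noProxy values from your install-config.yaml file: $ oc get proxy/cluster -o yaml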
4.9.3. Configuring a three-node cluster Optionally, you can deploy zero compute machines in a minimal three-node cluster that consists of three control plane machines only. This provides smaller, more resource-efficient clusters for cluster administrators and developers to use for testing, development, and production. In three-node OpenShift Container Platform environments, the three control plane machines are schedulable, which means that your application workloads are scheduled to run on them. Prerequisites You have an existing install-config.yaml file. Procedure Ensure that the number of compute replicas is set to 0 in your install-config.yaml file, as shown in the following compute stanza: compute: - name: worker platform: {} replicas: 0 Note You must set the value of the replicas parameter for the compute machines to 0 when you install OpenShift Container Platform on user-provisioned infrastructure, regardless of the number of compute machines you are deploying. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. This does not apply to user-provisioned installations, where the compute machines are deployed manually. Note The preferred resource for control plane nodes is six vCPUs and 21 GB. For three control plane nodes this is the memory + vCPU equivalent of a minimum five-node cluster. You should back the three nodes, each installed on a 120 GB disk, with three IFLs that are SMT2 enabled. The minimum tested setup is three vCPUs and 10 GB on a 120 GB disk for each control plane node. For three-node cluster installations, follow these steps: If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. See the Load balancing requirements for user-provisioned infrastructure section for more information. When you create the Kubernetes manifest files in the following procedure, ensure that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml file is set to true ; a sketch of this manifest follows this list. This enables your application workloads to run on the control plane nodes. Do not deploy any compute nodes when you create the Red Hat Enterprise Linux CoreOS (RHCOS) machines.
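A minimal sketch of the scheduler manifest with control plane scheduling enabled — the field layout follows the cluster-scheduler-02-config.yml file that the installation program generates, and any fields not shown here are left as generated:

apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  name: cluster
spec:
  mastersSchedulable: true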
4.10. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group and these fields cannot be changed: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin, such as OpenShift SDN or OVN-Kubernetes. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 4.10.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 4.8. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 4.9. defaultNetwork object Field Type Description type string Either OpenShiftSDN or OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. openshiftSDNConfig object This object is only valid for the OpenShift SDN network plugin. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OpenShift SDN network plugin The following table describes the configuration fields for the OpenShift SDN network plugin: Table 4.10. openshiftSDNConfig object Field Type Description mode string Configures the network isolation mode for OpenShift SDN. The default value is NetworkPolicy . The values Multitenant and Subnet are available for backwards compatibility with OpenShift Container Platform 3.x but are not recommended. This value cannot be changed after cluster installation. mtu integer The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1450 . This value cannot be changed after cluster installation. vxlanPort integer The port to use for all VXLAN packets. The default value is 4789 . This value cannot be changed after cluster installation. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number. On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999 . Example OpenShift SDN configuration defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789 Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 4.11.
ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify an empty object to enable IPsec encryption. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the default audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. v4InternalSubnet If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . This field cannot be changed after installation. The default value is 100.64.0.0/16 . v6InternalSubnet If your existing network infrastructure overlaps with the fd98::/48 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. This field cannot be changed after installation. The default value is fd98::/48 . Table 4.12. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 .
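As a minimal sketch, assuming you only want to tune network policy audit logging, the documented defaults above translate into the following stanza; the string "null" is quoted so that YAML does not read it as an empty value:

defaultNetwork:
  type: OVNKubernetes
  ovnKubernetesConfig:
    policyAuditConfig:
      rateLimit: 20
      maxFileSize: 50000000
      destination: "null"
      syslogFacility: local0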
Table 4.13. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. Note In OpenShift Container Platform 4.12, egress IP is only assigned to the primary interface. Consequently, setting routingViaHost to true will not work for egress IP in OpenShift Container Platform 4.12. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. Example OVN-Kubernetes configuration with IPsec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {} kubeProxyConfig object configuration The values for the kubeProxyConfig object are defined in the following table: Table 4.14. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 4.11. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
Note The installation program that generates the manifest and Ignition files is architecture specific and can be obtained from the client image mirror . The Linux version of the installation program runs on s390x only. This installer program is also available as a macOS version. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: $ ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: $ ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory:
.
├── auth
│   ├── kubeadmin-password
│   └── kubeconfig
├── bootstrap.ign
├── master.ign
├── metadata.json
└── worker.ign
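The Ignition files are plain JSON, so a quick structural sanity check is possible with standard tooling; as a hedged example, assuming jq is available on your workstation, the following prints the Ignition specification version of the bootstrap file: $ jq .ignition.version <installation_directory>/bootstrap.ign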
4.12. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on IBM Z infrastructure that you provision, you must install Red Hat Enterprise Linux CoreOS (RHCOS) as Red Hat Enterprise Linux (RHEL) guest virtual machines. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS machines have rebooted. You can perform a fast-track installation of RHCOS that uses a prepackaged QEMU copy-on-write (QCOW2) disk image. Alternatively, you can perform a full installation on a new QCOW2 disk image. To add further security to your system, you can optionally install RHCOS using IBM Secure Execution before proceeding to the fast-track installation. 4.12.1. Installing RHCOS using IBM Secure Execution Before you install RHCOS using IBM Secure Execution, you must prepare the underlying infrastructure. Important Installing RHCOS using IBM Secure Execution is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites IBM z15 or later, or IBM(R) LinuxONE III or later. Red Hat Enterprise Linux (RHEL) 8 or later. You have a bootstrap Ignition file. The file is not protected, enabling others to view and edit it. You have verified that the boot image has not been altered after installation. You must run all your nodes as IBM Secure Execution guests. Procedure Prepare your RHEL KVM host to support IBM Secure Execution. By default, KVM hosts do not support guests in IBM Secure Execution mode. To support guests in IBM Secure Execution mode, KVM hosts must boot in LPAR mode with the kernel parameter specification prot_virt=1 . To enable prot_virt=1 on RHEL 8, follow these steps: Navigate to /boot/loader/entries/ to modify your bootloader configuration file *.conf . Add the kernel command line parameter prot_virt=1 . Run the zipl command and reboot your system. KVM hosts that successfully start with support for IBM Secure Execution for Linux issue the following kernel message: prot_virt: Reserving <amount>MB as ultravisor base storage. To verify that the KVM host now supports IBM Secure Execution, run the following command: # cat /sys/firmware/uv/prot_virt_host Example output 1 The value of this attribute is 1 for Linux instances that detect their environment as consistent with that of a secure host. For other instances, the value is 0. Add your host keys to the KVM guest via Ignition. During the first boot, RHCOS looks for your host keys to re-encrypt itself with them. RHCOS searches for files starting with ibm-z-hostkey- in the /etc/se-hostkeys directory. All host keys, for each machine the cluster is running on, must be loaded into the directory by the administrator. After first boot, you cannot run the VM on any other machines. Note You need to prepare your Ignition file on a safe system. For example, another IBM Secure Execution guest. For example: { "ignition": { "version": "3.0.0" }, "storage": { "files": [ { "path": "/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt", "contents": { "source": "data:;base64,<base64 encoded hostkey document>" }, "mode": 420 }, { "path": "/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt", "contents": { "source": "data:;base64,<base64 encoded hostkey document>" }, "mode": 420 } ] } } Note You can add as many host keys as required if you want your node to be able to run on multiple IBM Z machines. To generate the Base64 encoded string, run the following command: base64 <your-hostkey>.crt Compared to guests not running IBM Secure Execution, the first boot of the machine is longer because the entire image is encrypted with a randomly generated LUKS passphrase before the Ignition phase. Follow the fast-track installation procedure to install nodes using the IBM Secure Execution QCOW2 image. Additional resources Introducing IBM Secure Execution for Linux Linux as an IBM Secure Execution host or guest 4.12.2. Fast-track installation by using a prepackaged QCOW2 disk image Complete the following steps to create the machines in a fast-track installation of Red Hat Enterprise Linux CoreOS (RHCOS), importing a prepackaged Red Hat Enterprise Linux CoreOS (RHCOS) QEMU copy-on-write (QCOW2) disk image.
Prerequisites At least one LPAR running on RHEL 8.4 or later with KVM, referred to as RHEL KVM host in this procedure. The KVM/QEMU hypervisor is installed on the RHEL KVM host. A domain name server (DNS) that can perform hostname and reverse lookup for the nodes. A DHCP server that provides IP addresses. Procedure Obtain the RHEL QEMU copy-on-write (QCOW2) disk image file from the Product Downloads page on the Red Hat Customer Portal or from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate RHCOS QCOW2 image described in the following procedure. Download the QCOW2 disk image and Ignition files to a common directory on the RHEL KVM host. For example: /var/lib/libvirt/images Note The Ignition files are generated by the OpenShift Container Platform installer. Create a new disk image with the QCOW2 disk image backing file for each KVM guest node. $ qemu-img create -f qcow2 -F qcow2 -b /var/lib/libvirt/images/{source_rhcos_qemu} /var/lib/libvirt/images/{vmname}.qcow2 {size} Create the new KVM guest nodes using the Ignition file and the new disk image. $ virt-install --noautoconsole \ --connect qemu:///system \ --name {vn_name} \ --memory {memory} \ --vcpus {vcpus} \ --disk {disk} \ --import \ --network network={network},mac={mac} \ --disk path={ign_file},format=raw,readonly=on,serial=ignition,startup_policy=optional
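Purely as an illustration of the placeholders above — the backing image name, guest name, and size are hypothetical values, not defaults — the overlay image for a bootstrap guest might be created like this; the -b backing file is shared read-only, so each guest stores only its own changes: $ qemu-img create -f qcow2 -F qcow2 -b /var/lib/libvirt/images/rhcos-4.12-s390x-qemu.qcow2 /var/lib/libvirt/images/bootstrap-0.qcow2 120G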
4.12.3. Full installation on a new QCOW2 disk image Complete the following steps to create the machines in a full installation on a new QEMU copy-on-write (QCOW2) disk image. Prerequisites At least one LPAR running on RHEL 8.4 or later with KVM, referred to as RHEL KVM host in this procedure. The KVM/QEMU hypervisor is installed on the RHEL KVM host. A domain name server (DNS) that can perform hostname and reverse lookup for the nodes. An HTTP or HTTPS server is set up. Procedure Obtain the RHEL kernel, initramfs, and rootfs files from the Product Downloads page on the Red Hat Customer Portal or from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate RHCOS QCOW2 image described in the following procedure. The file names contain the OpenShift Container Platform version number. They resemble the following examples: kernel: rhcos-<version>-live-kernel-<architecture> initramfs: rhcos-<version>-live-initramfs.<architecture>.img rootfs: rhcos-<version>-live-rootfs.<architecture>.img Move the downloaded RHEL live kernel, initramfs, and rootfs as well as the Ignition files to an HTTP or HTTPS server before you launch virt-install . Note The Ignition files are generated by the OpenShift Container Platform installer. Create the new KVM guest nodes using the RHEL kernel, initramfs, and Ignition files, the new disk image, and adjusted parm line arguments. For --location , specify the location of the kernel/initrd on the HTTP or HTTPS server. For coreos.inst.ignition_url= , specify the Ignition file for the machine role. Use bootstrap.ign , master.ign , or worker.ign . Only HTTP and HTTPS protocols are supported. For coreos.live.rootfs_url= , specify the matching rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. $ virt-install \ --connect qemu:///system \ --name {vn_name} \ --vcpus {vcpus} \ --memory {memory_mb} \ --disk {vn_name}.qcow2,size={image_size| default(10,true)} \ --network network={virt_network_parm} \ --boot hd \ --location {media_location},kernel={rhcos_kernel},initrd={rhcos_initrd} \ --extra-args "rd.neednet=1 coreos.inst=yes coreos.inst.install_dev=vda coreos.live.rootfs_url={rhcos_liveos} ip={ip}::{default_gateway}:{subnet_mask_length}:{vn_name}:enc1:none:{MTU} nameserver={dns} coreos.inst.ignition_url={rhcos_ign}" \ --noautoconsole \ --wait 4.12.4. Advanced RHCOS installation reference This section illustrates the networking configuration and other advanced options that allow you to modify the Red Hat Enterprise Linux CoreOS (RHCOS) manual installation process. The following tables describe the kernel arguments and command-line options you can use with the RHCOS live installer and the coreos-installer command. 4.12.4.1. Networking options for ISO installations If you install RHCOS from an ISO image, you can add kernel arguments manually when you boot the image to configure networking for a node. If no networking arguments are specified, DHCP is activated in the initramfs when RHCOS detects that networking is required to fetch the Ignition config file. Important When adding networking arguments manually, you must also add the rd.neednet=1 kernel argument to bring the network up in the initramfs. The following information provides examples for configuring networking on your RHCOS nodes for ISO installations. The examples describe how to use the ip= and nameserver= kernel arguments. Note Ordering is important when adding the kernel arguments: ip= and nameserver= . The networking options are passed to the dracut tool during system boot. For more information about the networking options supported by dracut , see the dracut.cmdline manual page. The following examples are the networking options for ISO installation. Configuring DHCP or static IP addresses To configure an IP address, either use DHCP ( ip=dhcp ) or set an individual static IP address ( ip=<host_ip> ). If setting a static IP, you must then identify the DNS server IP address ( nameserver=<dns_ip> ) on each node. The following example sets: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The hostname to core0.example.com The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41 Note When you use DHCP to configure IP addressing for the RHCOS machines, the machines also obtain the DNS server information through DHCP. For DHCP-based deployments, you can define the DNS server address that is used by the RHCOS nodes through your DHCP server configuration. Configuring an IP address without a static hostname You can configure an IP address without assigning a static hostname. If a static hostname is not set by the user, it will be picked up and automatically set by a reverse DNS lookup.
To configure an IP address without a static hostname, refer to the following example: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41 Specifying multiple network interfaces You can specify multiple network interfaces by setting multiple ip= entries. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring default gateway and route Optional: You can configure routes to additional networks by setting an rd.route= value. Note When you configure one or multiple networks, one default gateway is required. If the additional network gateway is different from the primary network gateway, the default gateway must be the primary network gateway. Run the following command to configure the default gateway: ip=::10.10.10.254:::: Enter the following command to configure the route for the additional network: rd.route=20.20.20.0/24:20.20.20.254:enp2s0 Disabling DHCP on a single interface You can disable DHCP on a single interface, such as when there are two or more network interfaces and only one interface is being used. In the example, the enp1s0 interface has a static networking configuration and DHCP is disabled for enp2s0 , which is not used: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none Combining DHCP and static IP configurations You can combine DHCP and static IP configurations on systems with multiple network interfaces, for example: ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring VLANs on individual interfaces Optional: You can configure VLANs on individual interfaces by using the vlan= parameter. To configure a VLAN on a network interface and use a static IP address, run the following command: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0 To configure a VLAN on a network interface and to use DHCP, run the following command: ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0 Providing multiple DNS servers You can provide multiple DNS servers by adding a nameserver= entry for each server, for example: nameserver=1.1.1.1 nameserver=8.8.8.8
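Putting these pieces together, a complete static-IP kernel argument set as it would appear on a single parameter line — the values are taken directly from the examples above, including the mandatory rd.neednet=1 argument: rd.neednet=1 ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41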
4.13. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS, and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Your machines have direct internet access or have an HTTP or HTTPS proxy available. Procedure Monitor the bootstrap process: $ ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.25.0 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. 4.14. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: $ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: $ oc whoami Example output system:admin 4.15. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: $ oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.25.0 master-1 Ready master 63m v1.25.0 master-2 Ready master 64m v1.25.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: $ oc get csr Example output NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval.
Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: $ oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: $ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: $ oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: $ oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: $ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: $ oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.25.0 master-1 Ready master 73m v1.25.0 master-2 Ready master 74m v1.25.0 worker-0 Ready worker 11m v1.25.0 worker-1 Ready worker 11m v1.25.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 4.16. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized.
Procedure Watch the cluster components come online: $ watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m Configure the Operators that are not available. 4.16.1. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 4.16.1.1. Configuring registry storage for IBM Z As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a cluster on IBM Z. You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. The storage must have 100Gi capacity. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: $ oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure.
Check the registry configuration: $ oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: $ oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.12 True False False 6h50m Ensure that your registry is set to managed to enable building and pushing of images. Run: $ oc edit configs.imageregistry/cluster Then, change the line managementState: Removed to managementState: Managed 4.16.1.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: $ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 4.17. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration. Procedure Confirm that all the cluster components are online with the following command: $ watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m Alternatively, the following command notifies you when the cluster is available. It also retrieves and displays credentials: $ ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in.
Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from the Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. To view a list of all pods, use the following command: $ oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the command by using the following command: $ oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Post-installation machine configuration tasks documentation for more information. 4.18. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.12, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service How to generate SOSREPORT within OpenShift4 nodes without SSH . 4.19. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting .
[ "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 
604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "compute: - name: worker platform: {} replicas: 0", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create manifests --dir <installation_directory> 1", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "prot_virt: Reserving <amount>MB as ultravisor base storage.", "cat /sys/firmware/uv/prot_virt_host", "1", "{ \"ignition\": { \"version\": \"3.0.0\" }, \"storage\": { \"files\": [ { \"path\": \"/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt\", \"contents\": { \"source\": \"data:;base64,<base64 encoded hostkey document>\" }, \"mode\": 420 }, { \"path\": \"/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt\", \"contents\": { \"source\": \"data:;base64,<base64 encoded hostkey document>\" }, \"mode\": 420 } ] } } ```", "base64 <your-hostkey>.crt", "qemu-img create -f qcow2 -F qcow2 -b /var/lib/libvirt/images/{source_rhcos_qemu} /var/lib/libvirt/images/{vmname}.qcow2 {size}", "virt-install --noautoconsole --connect qemu:///system --name {vn_name} --memory {memory} --vcpus {vcpus} --disk {disk} --import --network network={network},mac={mac} --disk path={ign_file},format=raw,readonly=on,serial=ignition,startup_policy=optional", "virt-install --connect qemu:///system --name {vn_name} --vcpus {vcpus} --memory {memory_mb} --disk {vn_name}.qcow2,size={image_size| default(10,true)} --network network={virt_network_parm} --boot hd --location {media_location},kernel={rhcos_kernel},initrd={rhcos_initrd} --extra-args \"rd.neednet=1 coreos.inst=yes coreos.inst.install_dev=vda coreos.live.rootfs_url={rhcos_liveos} ip={ip}::{default_gateway}:{subnet_mask_length}:{vn_name}:enc1:none:{MTU} nameserver={dns} coreos.inst.ignition_url={rhcos_ign}\" --noautoconsole --wait", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=::10.10.10.254::::", "rd.route=20.20.20.0/24:20.20.20.254:enp2s0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none", "ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0", "ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0", "nameserver=1.1.1.1 nameserver=8.8.8.8", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.25.0 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.25.0 master-1 Ready master 63m v1.25.0 master-2 Ready master 64m v1.25.0", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve 
<csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.25.0 master-1 Ready master 73m v1.25.0 master-2 Ready master 74m v1.25.0 worker-0 Ready worker 11m v1.25.0 worker-1 Ready worker 11m v1.25.0", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resources found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim:", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.12 True False False 6h50m", "oc edit configs.imageregistry/cluster", "managementState: Removed", "managementState: Managed", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 
4.12.0 True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_ibm_z_and_ibm_linuxone/installing-ibm-z-kvm
Chapter 3. Refining vulnerability-service results
Chapter 3. Refining vulnerability-service results Whether reporting results to stakeholders or prioritizing systems for remediation, the vulnerability service enables many ways to refine the views of your data, helping you and others focus on your most critical systems, workloads, or issues. The following sections describe the organization of your data and the sorting, filtering, and contextual features you can use to refine and enrich your results. 3.1. CVE-list and systems-list filters Filtering narrows the visible list of CVEs and associated systems, helping you focus on specific issues. Apply filters to the CVEs list to focus on CVEs by criticality or business risk, for example. After selecting an individual CVE, apply filters to the resulting list of affected systems to focus on those of a specific RHEL major or minor version, for example. Filters are activated by selecting a primary filter from the drop-down list of filters on the left, and then selecting a secondary subfilter from the filter options drop-down list on the right. Selected filters are visible below the Filters menu and can be deactivated by clicking the X next to each one. CVEs list filters The following primary filters are accessible from the CVEs page. Select the primary filter, then define a parameter in the subfilter: CVE. Search ID or description. Security rules. Show only CVEs with the "Security rule" label. Known exploit. Show only CVEs with the "Known exploit" label. Severity. Select one or more values: Critical, Important, Moderate, Low, or Unknown. CVSS base score. Select one or more ranges: All, 0.0-3.9, 4.0-7.9, 8.0-10.0, N/A (not applicable). Business risk. Select one or more values: High, Medium, Low, Not defined. Systems exposed. Select to show only CVEs with systems currently affected, or with no systems affected. Publish date. Select from All, Last 7 days, Last 30 days, Last 90 days, Last year, or More than 1 year ago. Status. Select one or more values: Not reviewed, In review, On-hold, Scheduled for patch, Resolved, No action - risk accepted, Resolved via mitigation. Systems list filters The following primary filters are accessible from the top of the list of systems on the CVE details page: Name. Find a specific CVE by entering the CVE ID. Security rules. If the CVE has a security rule associated with it, filter by other systems vulnerable to the same security rule, or show systems not affected by the security rule. Status. Show systems in specific status or workflow categories. Advisory. Show systems to which a Red Hat advisory applies for this CVE. Operating system. Show systems running specific RHEL (minor) versions. Remediation. Show systems included in an Ansible Playbook, a manual remediation, or systems that are not included in a current remediation plan. 3.1.1. Filtering security-rule CVEs Security rules, especially high-severity security rules, pose the greatest potential threat to your infrastructure and should be considered the highest priority for identification and remediation. Use the following procedure to view only high-severity security-rule CVEs in the CVEs list and identify affected systems. Note Not all systems exposed to a CVE are also exposed to a security rule associated with that CVE. Even though you may be running a vulnerable version of software, other environmental conditions may mitigate the threat; for example, a specific port is closed or SELinux is enabled. Procedure Navigate to Security > Vulnerability > CVEs in Red Hat Insights for Red Hat Enterprise Linux.
Click the filters drop-down list in the toolbar. Apply the Security rules filter. Apply the Has security rule subfilter. Scroll down to view security-rule CVEs. CVEs with security rules display the security-rule label located immediately below the CVE ID. 3.1.2. Remediating vulnerabilities with security rules on RHEL Systems CVEs with security rules are CVEs prioritized by Red Hat because they focus on issues that have elevated risk to your systems. Remediating these issues helps support a security posture that prioritizes the most important issues for your organization. Using the vulnerability service and the remediations service, you can prioritize and remediate some of the most important threats to your systems by: Focusing on CVEs that have security rules. For more information about security rules, see Security rules, and Filtering lists of systems exposed to security rules. Remediating CVEs. For more information about remediating CVEs, see the Red Hat Insights Remediations Guide. 3.1.3. Filtering known-exploit CVEs CVEs with the "Known exploit" label are determined by Red Hat to have exploits that exist in the wild; either the code exists publicly to exploit the CVE, or an exploit is known publicly to have already happened. For these reasons, known-exploit CVEs should be prioritized for identification and remediation. Important Red Hat does not determine whether any of your registered systems have been exploited. We are simply identifying CVEs that may pose an extraordinary risk. Use the following steps to filter known-exploit CVEs in the CVEs list: Procedure Navigate to Security > Vulnerability > CVEs in Red Hat Insights for Red Hat Enterprise Linux. Click the filters drop-down list in the toolbar. Apply the Known exploit filter. Apply the Has a known exploit subfilter. Scroll down to view the list of known-exploit CVEs. 3.1.4. Filtering CVEs without associated advisories Some CVEs do not have associated advisories, also called errata. This might happen for any of the following reasons: No fix is available for the CVE Product Security analysis determines that the CVE affects your environment, but has no errata available for your environment (although the same CVE can have errata in other environments) Your system is no longer under support Important CVE information is currently available for RHEL 6, 7, 8, and 9. No information is available for RHEL 5 systems. Identifying the CVEs without advisories enables you to take the necessary steps to protect your organization from the exposures associated with those vulnerabilities. If your version of RHEL does not have a fix available and is listed as "Will not fix," consider the following criteria: The impact of the vulnerability (severity) The life cycle phase of your version of RHEL If you decide that a fix is necessary for a CVE without an associated advisory, the following options are available: Accept the risk Upgrade to a supported product version that includes a fix for this vulnerability, if available (recommended) Apply a mitigation (if one exists) Additional resources For more information about CVEs, see Common Vulnerabilities and Exposures For more information about the severity ratings for vulnerabilities, see Understanding severity ratings. For more information about product life cycles, see Life cycle and update policies. To open a support case in the Customer Portal, see Customer support.
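The console filters described above can also be applied programmatically. The following is a sketch against the Insights vulnerability REST API; the endpoint path and the known_exploit query parameter are assumptions based on the v1 API, so verify both against the API documentation for your environment before relying on them, and supply your own authentication token.

```bash
# Sketch: list CVEs carrying the "Known exploit" label via the REST API.
# Endpoint path and parameter name are assumptions - verify before use.
curl -s -H "Authorization: Bearer ${TOKEN}" \
  "https://console.redhat.com/api/vulnerability/v1/vulnerabilities/cves?known_exploit=true&limit=20" \
  | python3 -m json.tool
```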
3.1.4.1. Enabling CVEs without advisories Enabling CVEs without advisories allows you to access systems affected by CVEs without advisories in Insights. This feature is enabled by default, but CVEs without advisories are hidden by default in the main view. This means that you must use filters to display and view CVEs without advisories. Note Red Hat's policy requires Insights for Red Hat Enterprise Linux to display all high priority, critical, and important CVEs, regardless of whether those CVEs have associated advisories. Prerequisites Vulnerability administrator access to your environment in Red Hat Insights Procedure From the Red Hat Insights for RHEL dashboard, navigate to Security > Vulnerability > CVEs. Click the More options icon (...) and select Show CVEs without Advisories. The list of advisories includes CVEs without advisories. 3.1.4.2. Disabling CVEs without advisories To disable the CVEs without advisories feature, deselect the Show CVEs without Advisories option. The CVEs without advisories option is enabled by default, but the default view hides CVEs without advisories. Note Red Hat's policy requires Insights for Red Hat Enterprise Linux to display all high priority, critical, and important CVEs, regardless of whether those CVEs have associated advisories. Prerequisites Vulnerability administrator access to your environment in Red Hat Insights The list of advisories includes CVEs without advisories Procedure From the Red Hat Insights for RHEL dashboard, navigate to Security > Vulnerability > CVEs. Click the More options icon (...) and select Hide CVEs without Advisories. 3.1.4.3. Viewing CVEs without advisories The Show CVEs without Advisories option enables or disables CVEs without advisories. To view CVEs without advisories, the Show CVEs without Advisories option must be enabled. Prerequisites The Organization Administrator has enabled the CVEs without advisories option. Procedure From the Red Hat Insights for RHEL dashboard, navigate to Security > Vulnerability > CVEs. From the filter drop-down, select Advisory. From the Filter by Advisory drop-down, select Not Available. The list of advisories shows all CVEs without advisories. 3.1.4.4. Identifying systems affected by a CVE without advisories The CVE details page displays the list of all systems affected by the selected CVE. You can filter the list of systems to display the systems affected by the CVE where an advisory is not present. Prerequisites The Organization Administrator has enabled the CVEs without advisories option. Procedure Identify a CVE without advisories for which you want to see the systems that it affects. For more information about identifying CVEs without advisories, refer to Identifying systems with CVEs without Advisories. Select the identified CVE to navigate to the CVE details page. The CVE details page for that CVE displays. The page lists all the systems affected by that CVE. If you apply the Filter by Advisory filter and Not Available option when you select a CVE, these filters persist to the CVE details page. Otherwise, when you navigate to the CVE details page, select Advisory from the filters at the top of the page, and then select Filter by Advisory, and click the Not Available checkbox. The list of systems updates to show only systems affected by that CVE without advisories. The Advisory column shows Not Available for each system in the list. Optional: To view details for the system, select the name of the system you want to view. The details page for the system displays.
3.1.4.5. Viewing CVEs without advisories in system details The system details page displays the list of all CVEs that affect the selected system. You can filter the list of CVEs to display the CVEs without advisories. Prerequisites The Organization Administrator has enabled the CVEs without advisories option. Procedure From the Red Hat Insights for RHEL dashboard, navigate to Security > Vulnerability > Systems. The Vulnerability systems page displays. Select a system ID from the list. The system details page for that system displays. The page lists all the CVEs that affect the selected system. Select Advisory from the filters at the top of the page. Select Filter by Advisory, and select the Not Available checkbox. The list of CVEs updates to show only CVEs without advisories. The Advisory column shows Not Available for each CVE in the list. Optional: To view details for the CVE, select the CVE ID for the CVE you want to view. The details page for the CVE displays. 3.2. Filtering lists of systems exposed to security rules After filtering the list of CVEs to view only your most critical potential threats, select an individual CVE to view the list of exposed systems and apply a filter to the list. Procedure After selecting a security-rule CVE, scroll down to the Exposed systems list. Not every system in the list has the security-rule conditions present for that CVE. Apply the following filter to see only the systems with security rule conditions present. Select the Security rules filter from the primary filter drop-down list. Check the Has security rule box in the secondary subfilter drop-down list. View the systems with exposure to that CVE that also have the conditions present for the security rules. 3.3. Insights for RHEL group filters The ability to filter vulnerability service results by groups of systems or workloads enables users to view only those systems tagged as belonging to a specific group. These can be systems running SAP workloads (filtered by workload or by SAP ID), systems in Satellite host groups, or systems with custom tags added to the Insights client configuration file. Group filtering can be set globally in Insights for Red Hat Enterprise Linux using the Filter results box located at the top of the page throughout the Insights for Red Hat Enterprise Linux application. Group selection persists when changing from service to service and page to page. However, the functionality varies within the different Insights for Red Hat Enterprise Linux services. Group filtering works in the vulnerability Dashboard and vulnerability service CVEs and Systems lists. Learn more about group tags and configuring custom tags in the Tags and system groups section of this document. 3.3.1. Filtering Dashboard, CVEs, and Systems lists by group Use the following procedure to filter vulnerability service CVE and Systems lists by group. Procedure Navigate to Red Hat Hybrid Cloud Console and log in. Open the Red Hat Insights for Red Hat Enterprise Linux application. Click the down arrow on the Filter results box located at the top of any page in the Insights application. Select a group by which to filter your systems. Search or scroll to view available tags. To browse the full list of available tags, scroll to the bottom of the list and click View more. Optionally: Select SAP workloads. Select systems by specific SAP IDs. Select Satellite host collections. Select systems identified by custom group tags. To learn more about creating custom tags, see the Custom system tagging section of this document.
Navigate to the service and view only systems or CVEs that belong to your selected group or groups. 3.4. Defining a business risk for a CVE The vulnerability service allows you to define the business risk of a CVE with the following options: High, Medium, Low, or Not Defined (default). While the list of CVEs shows the severity of each CVE, assigning a business risk lets you rank CVEs based on the impact they could have on your organization. This can give you more control in managing your risk efficiently in a large environment, and enable you to make better operational decisions. By default, the business risk field for a specific CVE is set to Not Defined. After you set the business risk, it is visible in the Security > Vulnerability > CVEs list, in the CVE row. Business risk is also visible on the details card for each CVE, which shows more information and lists affected systems. 3.4.1. Setting a business risk for a single CVE Complete the following steps to set the business risk for a single CVE: Note The business risk for that CVE will be the same on all systems impacted by it. Navigate to the Security > Vulnerability > CVEs page and log in if necessary. Identify a CVE for which to set a business risk. Click the more-actions icon (three vertical dots) on the right end of the CVE row and click Edit business risk. Set the business risk value to the appropriate level and, optionally, add a justification for your risk assessment. Click Save. 3.4.2. Setting a business risk for multiple CVEs Complete the following steps to set the same business risk on multiple CVEs that you select: Navigate to Security > Vulnerability > CVEs and log in if necessary. Check the boxes for the CVEs for which you want to set a business risk. Perform the following steps to set a business risk: Click the more-actions icon (three vertical dots) to the right of the Filters drop-down menu in the toolbar and click Edit business risk. Set an appropriate business risk value and, optionally, add a justification for your risk assessment. Click Save. 3.5. Excluding systems from vulnerability service analysis The vulnerability service allows you to exclude specific systems from vulnerability analysis. This can save you the time and attention required to review and re-review issues on systems that are not relevant to your organization's goals. As an example, if you have the following categories of servers: QA, Dev, and Production, you may not care to review the vulnerabilities for your QA servers and therefore want to exclude these systems from the analysis performed by the vulnerability service. When you exclude systems from vulnerability analysis, the Insights client still runs per schedule on the system, but the results for the system are not visible in the vulnerability service. The continued operations of the client ensure that other Red Hat Insights for Red Hat Enterprise Linux services can still upload the data they need. It also means that you can still view results for those systems using filtering. Complete the following steps to exclude selected RHEL systems from vulnerability service analysis: Procedure Navigate to the Security > Vulnerability > Systems tab and log in if necessary. Check the box for each system you want to exclude from vulnerability analysis. Click the more-actions icon in the toolbar, at the top of the list of systems, and select Exclude systems from vulnerability analysis.
Optionally, you can exclude a single system by clicking the more-actions icon in the system row and selecting Exclude system from vulnerability analysis. 3.6. Showing previously excluded systems Complete the following steps to show a previously excluded system: Procedure Navigate to the Security > Vulnerability > Systems tab and log in if necessary. Click the more-actions icon in the toolbar, at the top of the list of systems, and select Show systems excluded from analysis. Systems excluded from vulnerability analysis are now shown; this can be verified by the value of Excluded in the Applicable CVEs column. 3.7. Resuming vulnerability analysis for a system Complete the following steps to resume vulnerability analysis for a system: Procedure Navigate to the Security > Vulnerability > Systems tab and log in if necessary. Click the more-actions icon in the toolbar, at the top of the list of systems, and select Show systems excluded from analysis. In the list of results, check the box for each system for which you want to resume vulnerability analysis. Click the more-actions icon again and select Resume analysis for system. 3.8. CVE status Another method of managing CVEs impacting your systems is by setting a status for CVEs. The vulnerability service enables the following ways of setting a status for a CVE: Set a status for a CVE for all systems. Set a status for a specific CVE + system pair. Status values are preset and include the following options: Not reviewed (default) In review On-hold Scheduled for patch Resolved No action - risk accepted Resolved via mitigation Setting a status for a CVE can facilitate better triaging through its life-cycle, from becoming aware of it to remediating it. Defining a status allows your organization to keep better tabs on where the most critical CVEs are in their life-cycle and where you should focus your efforts to address the most critical issues per your business need. The status for a CVE is visible in all CVE tables in the vulnerability service and in individual CVE views. 3.8.1. Setting a status for a CVE on all affected systems Complete the following steps to set a status for a CVE and have that status apply to that CVE on all of the systems it impacts: Procedure Navigate to the Security > Vulnerability > CVEs tab and log in if necessary. Click the more-actions icon located on the right end of the CVE row and select Edit status. Select the appropriate status and, optionally, enter a rationale for your decision in the Justification text box. Check Do not overwrite individual system status if there are statuses set for this CVE on individual systems that you want to preserve. Otherwise, leave the box unchecked to apply this status to all of the systems it is impacting. Click Save. 3.8.2. Setting a status for a CVE and system pair Complete the following steps to set a status on a CVE and system pair: Procedure Navigate to the Security > Vulnerability > Systems tab and log in if necessary. Identify the system and click the system name to open it. Select a CVE from the list and check the box next to the CVE ID. Click the more-actions icon in the toolbar and select Edit status. In the popup card, take the following actions: Set a status for the CVE and system pair. Note If the box to Use overall CVE status is checked, you cannot set a status for the pair. Optionally, enter a justification for your status determination. Click Save. Locate the CVE in the list and verify the status is set. 3.9.
Using the Search box The search function in the vulnerability service works in the context of the page you are viewing. CVEs page. The search box is located in the toolbar at the top of the CVEs list. With the CVE filter set, search CVE IDs and descriptions. Systems page. The search box is located in the toolbar at the top of the list. Search for a system name or UUID. 3.10. Sorting CVE list data The sorting functions in the vulnerability service differ based on the context of the page you are viewing. Procedure In the CVEs tab, you can apply sorting to the following columns: CVE ID Publish date Severity CVSS base score Systems exposed Business risk Status In the Systems tab, the following columns can be sorted: Name Applicable CVEs Last seen After selecting a system in the Systems tab, the system-specific list of CVEs allows the following sorting options: CVE ID Publish date Impact CVSS base score Business risk Status
null
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/assessing_and_monitoring_security_vulnerabilities_on_rhel_systems/vuln-refining-data_vuln-overview
Appendix A. Preventing kernel modules from loading automatically
Appendix A. Preventing kernel modules from loading automatically You can prevent a kernel module from being loaded automatically, whether the module is loaded directly, loaded as a dependency from another module, or during the boot process. Procedure The module name must be added to a configuration file for the modprobe utility. This file must reside in the configuration directory /etc/modprobe.d. For more information on this configuration directory, see the modprobe.d man page. Ensure the module is not configured to get loaded in any of the following: /etc/modprobe.conf /etc/modprobe.d/* /etc/rc.modules /etc/sysconfig/modules/* # modprobe --showconfig <_configuration_file_name_> If the module appears in the output, ensure it is ignored and not loaded: # modprobe --ignore-install <_module_name_> Unload the module from the running system, if it is loaded: # modprobe -r <_module_name_> Prevent the module from being loaded directly by adding the blacklist line to a configuration file specific to the system - for example /etc/modprobe.d/local-dontload.conf : # echo "blacklist <_module_name_>" >> /etc/modprobe.d/local-dontload.conf Note This step does not prevent a module from loading if it is a required or an optional dependency of another module. Prevent optional modules from being loaded on demand: # echo "install <_module_name_> /bin/false" >> /etc/modprobe.d/local-dontload.conf Important If the excluded module is required for other hardware, excluding it might cause unexpected side effects. Make a backup copy of your initramfs : # cp /boot/initramfs-USD(uname -r).img /boot/initramfs-USD(uname -r).img.USD(date +%m-%d-%H%M%S).bak If the kernel module is part of the initramfs , rebuild your initial ramdisk image, omitting the module: # dracut --omit-drivers <_module_name_> -f Get the current kernel command line parameters: # grub2-editenv - list | grep kernelopts Append <_module_name_>.blacklist=1 rd.driver.blacklist=<_module_name_> to the generated output: # grub2-editenv - set kernelopts="<> <_module_name_>.blacklist=1 rd.driver.blacklist=<_module_name_>" For example: # grub2-editenv - set kernelopts="root=/dev/mapper/rhel_example-root ro crashkernel=auto resume=/dev/mapper/rhel_example-swap rd.lvm.lv=rhel_example/root rd.lvm.lv=rhel_example/swap <_module_name_>.blacklist=1 rd.driver.blacklist=<_module_name_>" Make a backup copy of the kdump initramfs : # cp /boot/initramfs-USD(uname -r)kdump.img /boot/initramfs-USD(uname -r)kdump.img.USD(date +%m-%d-%H%M%S).bak Append rd.driver.blacklist=<_module_name_> to the KDUMP_COMMANDLINE_APPEND setting in /etc/sysconfig/kdump to omit it from the kdump initramfs : # sed -i '/^KDUMP_COMMANDLINE_APPEND=/s/"USD/ rd.driver.blacklist=module_name"/' /etc/sysconfig/kdump Restart the kdump service to pick up the changes to the kdump initrd : # kdumpctl restart Rebuild the kdump initial ramdisk image: # mkdumprd -f /boot/initramfs-USD(uname -r)kdump.img Reboot the system. A.1. Removing a module temporarily You can remove a module temporarily. Procedure Run modprobe to remove any currently-loaded module: # modprobe -r <module name> If the module cannot be unloaded, a process or another module might still be using the module. If so, terminate the process and run the modprobe command above again to unload the module.
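Putting the modprobe.d steps together for a concrete case, the following sketch blacklists the usb_storage module and rebuilds the initramfs without it. The module name is purely illustrative; substitute the module you actually need to block.

```bash
# Illustrative example: prevent the usb_storage module from loading.
echo "blacklist usb_storage" >> /etc/modprobe.d/local-dontload.conf
echo "install usb_storage /bin/false" >> /etc/modprobe.d/local-dontload.conf

# Unload it if currently loaded, then rebuild the initramfs without it.
modprobe -r usb_storage
dracut --omit-drivers usb_storage -f

# Confirm the directives appear in modprobe's effective configuration.
modprobe --showconfig | grep usb_storage
```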
[ "modprobe --showconfig <_configuration_file_name_>", "modprobe --ignore-install <_module_name_>", "modprobe -r <_module_name_>", "echo \"blacklist <_module_name_> >> /etc/modprobe.d/local-dontload.conf", "echo \"install <_module_name_>/bin/false\" >> /etc/modprobe.d/local-dontload.conf", "cp /boot/initramfs-USD(uname -r).img /boot/initramfs-USD(uname -r).img.USD(date +%m-%d-%H%M%S).bak", "dracut --omit-drivers <_module_name_> -f", "grub2-editenv - list | grep kernelopts", "grub2-editenv - set kernelopts=\"<> <_module_name_>.blacklist=1 rd.driver.blacklist=<_module_name_>\"", "grub2-editenv - set kernelopts=\"root=/dev/mapper/rhel_example-root ro crashkernel=auto resume=/dev/mapper/rhel_example-swap rd.lvm.lv=rhel_example/root rd.lvm.lv=rhel_example/swap <_module_name_>.blacklist=1 rd.driver.blacklist=<_module_name_>\"", "cp /boot/initramfs-USD(uname -r)kdump.img /boot/initramfs-USD(uname -r)kdump.img.USD(date +%m-%d-%H%M%S).bak", "sed -i '/^KDUMP_COMMANDLINE_APPEND=/s/\"USD/ rd.driver.blacklist=module_name\"/' /etc/sysconfig/kdump", "kdumpctl restart", "mkdumprd -f /boot/initramfs-USD(uname -r)kdump.img", "modprobe -r <module name>" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/migrating_from_a_standalone_manager_to_a_self-hosted_engine/proc-preventing_kernel_modules_from_loading_automatically_install_nodes_rhvh
Service Telemetry Framework Release Notes 1.5
Service Telemetry Framework Release Notes 1.5 Red Hat OpenStack Platform 17.1 Release details for Service Telemetry Framework 1.5 OpenStack Documentation Team Red Hat Customer Content Services [email protected] Abstract This document outlines the major features, enhancements, and known issues in this release of Service Telemetry Framework.
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/service_telemetry_framework_release_notes_1.5/index
8.12. Verifying Network Configuration Teaming for Redundancy
8.12. Verifying Network Configuration Teaming for Redundancy Network redundancy is the practice of using additional devices or links as backups to prevent, or recover from, the failure of a specific system. The following procedure describes how to verify that a teamed network configuration provides redundancy: Procedure Ping the destination IP from the team interface. For example: View which interface is in active mode: enp1s0 is the active interface. Temporarily remove the network cable from the host. Note There is no method to properly test link failure events using software utilities. Tools that deactivate connections, such as ip or nmcli , show only the driver's ability to handle port configuration changes and not actual link failure events. Check if the backup interface is up: enp2s0 is now the active interface. Check if you can still ping the destination IP from the team interface:
[ "~]# ping -I team0 DSTADDR", "~]# teamdctl team0 state setup: runner: activebackup ports: enp1s0 link watches: link summary: up instance[link_watch_0]: name: ethtool link: up down count: 0 enp2s0 link watches: link summary: up instance[link_watch_0]: name: ethtool link: up down count: 0 runner: active port: enp1s0", "~]# teamdctl team0 state setup: runner: activebackup ports: enp1s0 link watches: link summary: down instance[link_watch_0]: name: ethtool link: down down count: 1 enp2s0 link watches: link summary: up instance[link_watch_0]: name: ethtool link: up down count: 0 runner: active port: enp2s0", "~]# ping -I team0 DSTADDR" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-verifying_network_configuration_teaming_for_redundancy
B.96.4. RHSA-2011:0374 - Important: thunderbird security and bug fix update
B.96.4. RHSA-2011:0374 - Important: thunderbird security and bug fix update An updated thunderbird package that fixes one security issue and one bug is now available for Red Hat Enterprise Linux 4, 5, and 6. The Red Hat Security Response Team has rated this update as having important security impact. Mozilla Thunderbird is a standalone mail and newsgroup client. This erratum blacklists a small number of HTTPS certificates. (BZ# 689430 ) Bug Fix BZ# 683076 The RHSA-2011:0312 and RHSA-2011:0311 updates introduced a regression, preventing some Java content and plug-ins written in Java from loading. With this update, the Java content and plug-ins work as expected. All Thunderbird users should upgrade to this updated package, which resolves these issues. All running instances of Thunderbird must be restarted for the update to take effect.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/rhsa-2011-0374
1.3. Configure Authentication in a Security Domain
1.3. Configure Authentication in a Security Domain To configure authentication settings for a security domain, log into the management console and follow this procedure. Procedure 1.1. Setup Authentication Settings for a Security Domain Open the security domain's detailed view. Click the Configuration label at the top of the management console. Select the profile to modify from the Profile selection box at the top left of the Profile view. Note You should only select a profile if you are running Red Hat JBoss Data Virtualization in domain mode. Expand the Security menu, and select Security Domains. Click the View link for the security domain you want to edit. Navigate to the Authentication subsystem configuration. Select the Authentication label at the top of the view if it is not already selected. The configuration area is divided into two areas: Login Modules and Details. The login module is the basic unit of configuration. A security domain can include several login modules, each of which can include several attributes and options. Add an authentication module. Click Add to add a JAAS authentication module. Fill in the details for your module. The Code is the class name of the module. The Flag setting controls how the module relates to other authentication modules within the same security domain. Explanation of the Flags The Java Enterprise Edition 6 specification provides the following explanation of the flags for security modules. The following list is taken from https://docs.oracle.com/javase/6/docs/api/javax/security/auth/login/Configuration.html . Refer to that document for more detailed information. Flag Details required The LoginModule is required to succeed. If it succeeds or fails, authentication still continues to proceed down the LoginModule list. requisite The LoginModule is required to succeed. If it succeeds, authentication continues down the LoginModule list. If it fails, control immediately returns to the application (authentication does not proceed down the LoginModule list). sufficient The LoginModule is not required to succeed. If it does succeed, control immediately returns to the application (authentication does not proceed down the LoginModule list). If it fails, authentication continues down the LoginModule list. optional The LoginModule is not required to succeed. If it succeeds or fails, authentication still continues to proceed down the LoginModule list. Edit authentication settings After you have added your module, you can modify its Code or Flags by clicking Edit in the Details section of the screen. Be sure the Attributes tab is selected. Optional: Add or remove module options. If you need to add options to your module, click its entry in the Login Modules list, and select the Module Options tab in the Details section of the page. Click the Add button, and provide the key and value for the option. Use the Remove button to remove an option. Reload the Red Hat JBoss Data Virtualization server to make the authentication module changes available to applications. Result Your authentication module is added to the security domain, and is immediately available to applications which use the security domain. The jboss.security.security_domain Module Option By default, each login module defined in a security domain has the jboss.security.security_domain module option added to it automatically. This option causes problems with login modules which check to make sure that only known options are defined.
The IBM Kerberos login module, com.ibm.security.auth.module.Krb5LoginModule, is one of these. You can disable the behavior of adding this module option by setting the jboss.security.disable.secdomain.option system property to true when starting JBoss EAP 6. Add the following to your start-up parameters. You can also set this property using the web-based Management Console. On a standalone server, you can set system properties in the System Properties section of the configuration. In a managed domain, you can set system properties for each server group.
[ "-Djboss.security.disable.secdomain.option=true" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/security_guide/Configure_Authentication_in_a_Security_Domain
Chapter 1. Red Hat JBoss Web Server 6.0 Service Pack 4
Chapter 1. Red Hat JBoss Web Server 6.0 Service Pack 4 Welcome to the Red Hat JBoss Web Server version 6.0 Service Pack 4 release. Red Hat JBoss Web Server is a fully integrated and certified set of components for hosting Java web applications. It consists of an application server (Apache Tomcat servlet container) and the Apache Tomcat Native Library. JBoss Web Server includes the following key components: Apache Tomcat is a servlet container in accordance with the Java Servlet Specification. JBoss Web Server contains Apache Tomcat 10.1. The Apache Tomcat Native Library improves Tomcat scalability, performance, and integration with native server technologies. Tomcat-vault is an extension for the JBoss Web Server that is used for securely storing passwords and other sensitive information used by a JBoss Web Server. The mod_cluster library enables communication between JBoss Web Server and the Apache HTTP Server mod_proxy_cluster module. The mod_cluster library enables you to use the Apache HTTP Server as a load balancer for JBoss Web Server. For more information about configuring mod_cluster , or for information about installing and configuring alternative load balancers such as mod_jk and mod_proxy , see the Apache HTTP Server Connectors and Load Balancing Guide . Apache portable runtime (APR) is a runtime that provides an OpenSSL-based TLS implementation for the HTTP connectors. JBoss Web Server provides a distribution of APR for supported Windows platforms only. For Red Hat Enterprise Linux, you can use the APR package that the operating system provides. OpenSSL is a software library that implements the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols and includes a basic cryptographic library. JBoss Web Server provides a distribution of OpenSSL for supported Windows platforms only. For Red Hat Enterprise Linux, you can use the OpenSSL package that the operating system provides. This release of JBoss Web Server introduces Java 21 support. This release of JBoss Web Server provides OpenShift images based on Red Hat Enterprise Linux 8. Note Red Hat distributes the JBoss Web Server 6.0 Service Pack 4 release in RPM packages only. This service pack release is not available in archive file format.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_web_server/6.0/html/red_hat_jboss_web_server_6.0_service_pack_4_release_notes/red_hat_jboss_web_server_6_0_service_pack_4
2.32. RHEA-2011:0622 - new package: python-netaddr
2.32. RHEA-2011:0622 - new package: python-netaddr A new python-netaddr package is now available for Red Hat Enterprise Linux 6. The python-netaddr package provides a network address representation and manipulation library for Python. The netaddr library allows Python applications to work with IPv4 and IPv6 addresses, subnetworks, non-aligned IP address ranges and sets, MAC addresses, Organizationally Unique Identifiers (OUI), Individual Address Blocks (IAB), and IEEE EUI-64 identifiers. This enhancement update adds the python-netaddr package to Red Hat Enterprise Linux 6. (BZ# 658557 ) All users who require python-netaddr should install this new package.
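As a brief illustration of the capabilities listed above, the following is a minimal sketch using the netaddr API; the addresses are example values.

```python
from netaddr import EUI, IPAddress, IPNetwork, iprange_to_cidrs

# IPv4 subnet handling: network/broadcast addresses, size, and membership.
net = IPNetwork("192.0.2.0/24")
print(net.network, net.broadcast, net.size)          # 192.0.2.0 192.0.2.255 256
print(IPAddress("192.0.2.10") in net)                # True

# Non-aligned address ranges can be summarized into CIDR subnets.
print(iprange_to_cidrs("192.0.2.16", "192.0.2.31"))  # [IPNetwork('192.0.2.16/28')]

# MAC (EUI-48) handling, including derivation of an IPv6 link-local address.
mac = EUI("00-1B-77-49-54-FD")
print(mac.ipv6_link_local())                         # fe80::21b:77ff:fe49:54fd
```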
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.1_technical_notes/python-netaddr_new
Appendix A. Troubleshooting permission issues
Appendix A. Troubleshooting permission issues Satellite upgrades perform pre-upgrade checks. If the pre-upgrade check discovers permission issues, it fails with an error similar to the following one: If you see an error like this on your Satellite Server, identify and remedy the permission issues. Procedure On your Satellite Server, identify permission issues: Fix permission issues: Verification Rerun the check to ensure no permission issues remain:
[ "2024-01-29T20:50:09 [W|app|] Could not create role 'Ansible Roles Manager': ERF73-0602 [Foreman::PermissionMissingException]: some permissions were not found:", "satellite-maintain health check --label duplicate_permissions", "foreman-rake db:seed", "satellite-maintain health check --label duplicate_permissions" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/upgrading_disconnected_red_hat_satellite_to_6.16/troubleshooting-permission-issues
Chapter 4. Verifying OpenShift Data Foundation deployment
Chapter 4. Verifying OpenShift Data Foundation deployment Use this section to verify that OpenShift Data Foundation is deployed correctly. 4.1. Verifying the state of the pods Procedure Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see the following table: Set filter for Running and Completed pods to verify that the following pods are in Running and Completed state: Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) MON rook-ceph-mon-* (3 pods distributed across storage nodes) MGR rook-ceph-mgr-* (1 pod on any storage node) MDS rook-ceph-mds-ocs-storagecluster-cephfilesystem-* (2 pods distributed across storage nodes) RGW rook-ceph-rgw-ocs-storagecluster-cephobjectstore-* (1 pod on any storage node) CSI cephfs csi-cephfsplugin-* (1 pod on each storage node) csi-cephfsplugin-provisioner-* (2 pods distributed across storage nodes) rbd csi-rbdplugin-* (1 pod on each storage node) csi-rbdplugin-provisioner-* (2 pods distributed across storage nodes) rook-ceph-crashcollector rook-ceph-crashcollector-* (1 pod on each storage node) OSD rook-ceph-osd-* (1 pod for each device) rook-ceph-osd-prepare-ocs-deviceset-* (1 pod for each device) 4.2. Verifying the OpenShift Data Foundation cluster is healthy Procedure In the OpenShift Web Console, click Storage Data Foundation. In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Block and File tab, verify that the Storage Cluster has a green tick. In the Details card, verify that the cluster information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation. 4.3. Verifying the Multicloud Object Gateway is healthy Procedure In the OpenShift Web Console, click Storage Data Foundation. In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the object service dashboard, see Monitoring OpenShift Data Foundation. Important The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means that if the NooBaa DB PVC becomes corrupted and cannot be recovered, it can result in total loss of the application data residing on the Multicloud Object Gateway. Because of this, Red Hat recommends taking a backup of the NooBaa DB PVC regularly.
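One way to take such a backup is a CSI volume snapshot of the PVC. The following is a sketch only: the PVC name (db-noobaa-db-pg-0) and the volume snapshot class are assumptions, so verify both against your cluster (oc get pvc,volumesnapshotclass -n openshift-storage) before using it.

```yaml
# Sketch: snapshot the NooBaa DB PVC. The PVC and snapshot class names are
# assumptions - confirm them on your cluster first.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: noobaa-db-backup
  namespace: openshift-storage
spec:
  volumeSnapshotClassName: ocs-storagecluster-rbdplugin-snapclass
  source:
    persistentVolumeClaimName: db-noobaa-db-pg-0
```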
If NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgebase article. 4.4. Verifying that the specific storage classes exist Procedure Click Storage Storage Classes from the left pane of the OpenShift Web Console. Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation: ocs-storagecluster-ceph-rbd ocs-storagecluster-cephfs openshift-storage.noobaa.io ocs-storagecluster-ceph-rgw
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_openshift_data_foundation_on_vmware_vsphere/verifying_openshift_data_foundation_deployment
Installing Satellite Server in a connected network environment
Installing Satellite Server in a connected network environment Red Hat Satellite 6.16 Install and configure Satellite Server in a network with Internet access Red Hat Satellite Documentation Team [email protected]
[ "nfs.example.com:/nfsshare /var/lib/pulp nfs context=\"system_u:object_r:var_lib_t:s0\" 1 2", "restorecon -R /var/lib/pulp", "firewall-cmd --add-port=\"8000/tcp\" --add-port=\"9090/tcp\"", "firewall-cmd --add-service=dns --add-service=dhcp --add-service=tftp --add-service=http --add-service=https --add-service=puppetmaster", "firewall-cmd --runtime-to-permanent", "firewall-cmd --list-all", "ping -c1 localhost ping -c1 `hostname -f` # my_system.domain.com", "ping -c1 localhost PING localhost (127.0.0.1) 56(84) bytes of data. 64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.043 ms --- localhost ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms ping -c1 `hostname -f` PING hostname.gateway (XX.XX.XX.XX) 56(84) bytes of data. 64 bytes from hostname.gateway (XX.XX.XX.XX): icmp_seq=1 ttl=64 time=0.019 ms --- localhost.gateway ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms", "hostnamectl set-hostname name", "cp /etc/foreman-installer/custom-hiera.yaml /etc/foreman-installer/custom-hiera.original", "satellite-installer --tuning medium", "/Stage[main]/Dhcp/File[/etc/dhcp/dhcpd.conf]: Filebucketed /etc/dhcp/dhcpd.conf to puppet with sum 622d9820b8e764ab124367c68f5fa3a1", "puppet filebucket -l restore /etc/dhcp/dhcpd.conf 622d9820b8e764ab124367c68f5fa3a1", "an http proxy server to use (enter server FQDN) proxy_hostname = http-proxy.example.com port for http proxy server proxy_port = 8080 user name for authenticating to an http proxy, if needed proxy_user = password for basic http proxy auth, if needed proxy_password =", "subscription-manager register", "subscription-manager register Username: user_name Password: The system has been registered with ID: 541084ff2-44cab-4eb1-9fa1-7683431bcf9a", "subscription-manager repos --disable \"*\"", "subscription-manager repos --enable=rhel-9-for-x86_64-baseos-rpms --enable=rhel-9-for-x86_64-appstream-rpms --enable=satellite-6.16-for-rhel-9-x86_64-rpms --enable=satellite-maintenance-6.16-for-rhel-9-x86_64-rpms", "dnf repolist enabled", "subscription-manager repos --disable \"*\"", "subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=rhel-8-for-x86_64-appstream-rpms --enable=satellite-6.16-for-rhel-8-x86_64-rpms --enable=satellite-maintenance-6.16-for-rhel-8-x86_64-rpms", "dnf module enable satellite:el8", "dnf repolist enabled", "dnf install fapolicyd", "satellite-maintain packages install fapolicyd", "systemctl enable --now fapolicyd", "systemctl status fapolicyd", "dnf upgrade", "dnf install satellite", "satellite-installer --scenario satellite --foreman-initial-organization \" My_Organization \" --foreman-initial-location \" My_Location \" --foreman-initial-admin-username admin_user_name --foreman-initial-admin-password admin_password", "scp ~/ manifest_file .zip root@ satellite.example.com :~/.", "hammer subscription upload --file ~/ manifest_file .zip --organization \" My_Organization \"", "satellite-maintain packages install insights-client", "satellite-installer --register-with-insights", "insights-client --unregister", "satellite-installer --foreman-proxy-plugin-remote-execution-script-mode=pull-mqtt", "firewall-cmd --add-service=mqtt", "firewall-cmd --runtime-to-permanent", "satellite-maintain packages update grub2-efi", "unset http_proxy https_proxy no_proxy", "hammer http-proxy create --name= My_HTTP_Proxy --username= My_HTTP_Proxy_User_Name --password= 
My_HTTP_Proxy_Password --url http:// http-proxy.example.com :8080", "hammer settings set --name=content_default_http_proxy --value= My_HTTP_Proxy", "semanage port -l | grep http_cache http_cache_port_t tcp 8080, 8118, 8123, 10001-10010 [output truncated]", "semanage port -a -t http_cache_port_t -p tcp 8088", "hammer settings set --name=http_proxy --value= Proxy_URL", "hammer settings set --name=http_proxy_except_list --value=[ hostname1.hostname2... ]", "hammer settings set --name=content_default_http_proxy --value=\"\"", "satellite-installer --foreman-proxy-bmc \"true\" --foreman-proxy-bmc-default-provider \"freeipmi\"", "satellite-installer --foreman-proxy-dns true --foreman-proxy-dns-managed true --foreman-proxy-dns-zone example.com --foreman-proxy-dns-reverse 2.0.192.in-addr.arpa --foreman-proxy-dhcp true --foreman-proxy-dhcp-managed true --foreman-proxy-dhcp-range \" 192.0.2.100 192.0.2.150 \" --foreman-proxy-dhcp-gateway 192.0.2.1 --foreman-proxy-dhcp-nameservers 192.0.2.2 --foreman-proxy-tftp true --foreman-proxy-tftp-managed true --foreman-proxy-tftp-servername 192.0.2.3", "satellite-installer --foreman-proxy-dhcp false --foreman-proxy-dns false --foreman-proxy-tftp false", "Option 66: IP address of Satellite or Capsule Option 67: /pxelinux.0", "cp mailca.crt /etc/pki/ca-trust/source/anchors/ update-ca-trust enable update-ca-trust", "satellite-installer --certs-cname alternate_fqdn --certs-update-server", "./bootstrap.py --server alternate_fqdn.example.com", "Server hostname: hostname = alternate_fqdn.example.com content omitted Content base URL: baseurl=https:// alternate_fqdn.example.com /pulp/content/", "mkdir /root/satellite_cert", "openssl genrsa -out /root/satellite_cert/satellite_cert_key.pem 4096", "[ req ] req_extensions = v3_req distinguished_name = req_distinguished_name prompt = no [ req_distinguished_name ] commonName = satellite.example.com [ v3_req ] basicConstraints = CA:FALSE keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment extendedKeyUsage = serverAuth, clientAuth, codeSigning, emailProtection subjectAltName = @alt_names [ alt_names ] DNS.1 = satellite.example.com", "[req_distinguished_name] CN = satellite.example.com countryName = My_Country_Name 1 stateOrProvinceName = My_State_Or_Province_Name 2 localityName = My_Locality_Name 3 organizationName = My_Organization_Or_Company_Name organizationalUnitName = My_Organizational_Unit_Name 4", "openssl req -new -key /root/satellite_cert/satellite_cert_key.pem \\ 1 -config /root/satellite_cert/openssl.cnf \\ 2 -out /root/satellite_cert/satellite_cert_csr.pem 3", "satellite-installer --certs-server-cert \" /root/satellite_cert/satellite_cert.pem \" \\ 1 --certs-server-key \" /root/satellite_cert/satellite_cert_key.pem \" \\ 2 --certs-server-ca-cert \" /root/satellite_cert/ca_cert_bundle.pem \" \\ 3 --certs-update-server --certs-update-server-ca", "dnf install http:// satellite.example.com /pub/katello-ca-consumer-latest.noarch.rpm", "satellite-installer --certs-reset", "subscription-manager repos --disable \"*\"", "subscription-manager repos --enable=satellite-6.16-for-rhel-9-x86_64-rpms --enable=satellite-maintenance-6.16-for-rhel-9-x86_64-rpms --enable=rhel-9-for-x86_64-baseos-rpms --enable=rhel-9-for-x86_64-appstream-rpms", "dnf repolist enabled", "subscription-manager repos --disable \"*\"", "subscription-manager repos --enable=satellite-6.16-for-rhel-8-x86_64-rpms --enable=satellite-maintenance-6.16-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-baseos-rpms 
--enable=rhel-8-for-x86_64-appstream-rpms", "dnf module enable satellite:el8", "dnf repolist enabled", "dnf install postgresql-server postgresql-evr postgresql-contrib", "postgresql-setup initdb", "vi /var/lib/pgsql/data/postgresql.conf", "listen_addresses = '*'", "password_encryption=scram-sha-256", "vi /var/lib/pgsql/data/pg_hba.conf", "host all all Satellite_ip /32 scram-sha-256", "systemctl enable --now postgresql", "firewall-cmd --add-service=postgresql", "firewall-cmd --runtime-to-permanent", "su - postgres -c psql", "CREATE USER \"foreman\" WITH PASSWORD ' Foreman_Password '; CREATE USER \"candlepin\" WITH PASSWORD ' Candlepin_Password '; CREATE USER \"pulp\" WITH PASSWORD ' Pulpcore_Password '; CREATE DATABASE foreman OWNER foreman; CREATE DATABASE candlepin OWNER candlepin; CREATE DATABASE pulpcore OWNER pulp;", "postgres=# \\c pulpcore You are now connected to database \"pulpcore\" as user \"postgres\".", "pulpcore=# CREATE EXTENSION IF NOT EXISTS \"hstore\"; CREATE EXTENSION", "\\q", "PGPASSWORD=' Foreman_Password ' psql -h postgres.example.com -p 5432 -U foreman -d foreman -c \"SELECT 1 as ping\" PGPASSWORD=' Candlepin_Password ' psql -h postgres.example.com -p 5432 -U candlepin -d candlepin -c \"SELECT 1 as ping\" PGPASSWORD=' Pulpcore_Password ' psql -h postgres.example.com -p 5432 -U pulp -d pulpcore -c \"SELECT 1 as ping\"", "satellite-installer --katello-candlepin-manage-db false --katello-candlepin-db-host postgres.example.com --katello-candlepin-db-name candlepin --katello-candlepin-db-user candlepin --katello-candlepin-db-password Candlepin_Password --foreman-proxy-content-pulpcore-manage-postgresql false --foreman-proxy-content-pulpcore-postgresql-host postgres.example.com --foreman-proxy-content-pulpcore-postgresql-db-name pulpcore --foreman-proxy-content-pulpcore-postgresql-user pulp --foreman-proxy-content-pulpcore-postgresql-password Pulpcore_Password --foreman-db-manage false --foreman-db-host postgres.example.com --foreman-db-database foreman --foreman-db-username foreman --foreman-db-password Foreman_Password", "--foreman-db-root-cert <path_to_CA> --foreman-db-sslmode verify-full --foreman-proxy-content-pulpcore-postgresql-ssl true --foreman-proxy-content-pulpcore-postgresql-ssl-root-ca <path_to_CA> --katello-candlepin-db-ssl true --katello-candlepin-db-ssl-ca <path_to_CA> --katello-candlepin-db-ssl-verify true", "scp root@ dns.example.com :/etc/rndc.key /etc/foreman-proxy/rndc.key", "restorecon -v /etc/foreman-proxy/rndc.key chown -v root:foreman-proxy /etc/foreman-proxy/rndc.key chmod -v 640 /etc/foreman-proxy/rndc.key", "echo -e \"server DNS_IP_Address \\n update add aaa.example.com 3600 IN A Host_IP_Address \\n send\\n\" | nsupdate -k /etc/foreman-proxy/rndc.key nslookup aaa.example.com DNS_IP_Address echo -e \"server DNS_IP_Address \\n update delete aaa.example.com 3600 IN A Host_IP_Address \\n send\\n\" | nsupdate -k /etc/foreman-proxy/rndc.key", "satellite-installer --foreman-proxy-dns=true --foreman-proxy-dns-managed=false --foreman-proxy-dns-provider=nsupdate --foreman-proxy-dns-server=\" DNS_IP_Address \" --foreman-proxy-keyfile=/etc/foreman-proxy/rndc.key", "dnf install dhcp-server bind-utils", "tsig-keygen -a hmac-md5 omapi_key", "cat /etc/dhcp/dhcpd.conf default-lease-time 604800; max-lease-time 2592000; log-facility local7; subnet 192.168.38.0 netmask 255.255.255.0 { range 192.168.38.10 192.168.38.100 ; option routers 192.168.38.1 ; option subnet-mask 255.255.255.0 ; option domain-search \" virtual.lan \"; option domain-name \" virtual.lan \"; 
option domain-name-servers 8.8.8.8 ; } omapi-port 7911; key omapi_key { algorithm hmac-md5; secret \" My_Secret \"; }; omapi-key omapi_key;", "firewall-cmd --add-service dhcp", "firewall-cmd --runtime-to-permanent", "id -u foreman 993 id -g foreman 990", "groupadd -g 990 foreman useradd -u 993 -g 990 -s /sbin/nologin foreman", "chmod o+rx /etc/dhcp/ chmod o+r /etc/dhcp/dhcpd.conf chattr +i /etc/dhcp/ /etc/dhcp/dhcpd.conf", "systemctl enable --now dhcpd", "dnf install nfs-utils systemctl enable --now nfs-server", "mkdir -p /exports/var/lib/dhcpd /exports/etc/dhcp", "/var/lib/dhcpd /exports/var/lib/dhcpd none bind,auto 0 0 /etc/dhcp /exports/etc/dhcp none bind,auto 0 0", "mount -a", "/exports 192.168.38.1 (rw,async,no_root_squash,fsid=0,no_subtree_check) /exports/etc/dhcp 192.168.38.1 (ro,async,no_root_squash,no_subtree_check,nohide) /exports/var/lib/dhcpd 192.168.38.1 (ro,async,no_root_squash,no_subtree_check,nohide)", "exportfs -rva", "firewall-cmd --add-port=7911/tcp", "firewall-cmd --add-service mountd --add-service nfs --add-service rpc-bind --zone public", "firewall-cmd --runtime-to-permanent", "satellite-maintain packages install nfs-utils", "mkdir -p /mnt/nfs/etc/dhcp /mnt/nfs/var/lib/dhcpd", "chown -R foreman-proxy /mnt/nfs", "showmount -e DHCP_Server_FQDN rpcinfo -p DHCP_Server_FQDN", "DHCP_Server_FQDN :/exports/etc/dhcp /mnt/nfs/etc/dhcp nfs ro,vers=3,auto,nosharecache,context=\"system_u:object_r:dhcp_etc_t:s0\" 0 0 DHCP_Server_FQDN :/exports/var/lib/dhcpd /mnt/nfs/var/lib/dhcpd nfs ro,vers=3,auto,nosharecache,context=\"system_u:object_r:dhcpd_state_t:s0\" 0 0", "mount -a", "su foreman-proxy -s /bin/bash cat /mnt/nfs/etc/dhcp/dhcpd.conf cat /mnt/nfs/var/lib/dhcpd/dhcpd.leases exit", "satellite-installer --enable-foreman-proxy-plugin-dhcp-remote-isc --foreman-proxy-dhcp-provider=remote_isc --foreman-proxy-dhcp-server= My_DHCP_Server_FQDN --foreman-proxy-dhcp=true --foreman-proxy-plugin-dhcp-remote-isc-dhcp-config /mnt/nfs/etc/dhcp/dhcpd.conf --foreman-proxy-plugin-dhcp-remote-isc-dhcp-leases /mnt/nfs/var/lib/dhcpd/dhcpd.leases --foreman-proxy-plugin-dhcp-remote-isc-key-name=omapi_key --foreman-proxy-plugin-dhcp-remote-isc-key-secret= My_Secret --foreman-proxy-plugin-dhcp-remote-isc-omapi-port=7911", "update-ca-trust enable openssl s_client -showcerts -connect infoblox.example.com :443 </dev/null | openssl x509 -text >/etc/pki/ca-trust/source/anchors/infoblox.crt update-ca-trust extract", "curl -u admin:password https:// infoblox.example.com /wapi/v2.0/network", "[ { \"_ref\": \"network/ZG5zLm5ldHdvcmskMTkyLjE2OC4yMDIuMC8yNC8w: infoblox.example.com /24/default\", \"network\": \"192.168.202.0/24\", \"network_view\": \"default\" } ]", "satellite-installer --enable-foreman-proxy-plugin-dhcp-infoblox --foreman-proxy-dhcp true --foreman-proxy-dhcp-provider infoblox --foreman-proxy-dhcp-server infoblox.example.com --foreman-proxy-plugin-dhcp-infoblox-username admin --foreman-proxy-plugin-dhcp-infoblox-password infoblox --foreman-proxy-plugin-dhcp-infoblox-record-type fixedaddress --foreman-proxy-plugin-dhcp-infoblox-dns-view default --foreman-proxy-plugin-dhcp-infoblox-network-view default", "satellite-installer --enable-foreman-proxy-plugin-dns-infoblox --foreman-proxy-dns true --foreman-proxy-dns-provider infoblox --foreman-proxy-plugin-dns-infoblox-dns-server infoblox.example.com --foreman-proxy-plugin-dns-infoblox-username admin --foreman-proxy-plugin-dns-infoblox-password infoblox --foreman-proxy-plugin-dns-infoblox-dns-view default", "mkdir -p /mnt/nfs/var/lib/tftpboot", 
"TFTP_Server_IP_Address :/exports/var/lib/tftpboot /mnt/nfs/var/lib/tftpboot nfs rw,vers=3,auto,nosharecache,context=\"system_u:object_r:tftpdir_rw_t:s0\" 0 0", "mount -a", "satellite-installer --foreman-proxy-tftp-root /mnt/nfs/var/lib/tftpboot --foreman-proxy-tftp=true", "satellite-installer --foreman-proxy-tftp-servername= TFTP_Server_FQDN", "kinit idm_user", "ipa service-add capsule/satellite.example.com", "satellite-maintain packages install ipa-client", "ipa-client-install", "kinit admin", "rm /etc/foreman-proxy/dns.keytab", "ipa-getkeytab -p capsule/ [email protected] -s idm1.example.com -k /etc/foreman-proxy/dns.keytab", "chown foreman-proxy:foreman-proxy /etc/foreman-proxy/dns.keytab", "kinit -kt /etc/foreman-proxy/dns.keytab capsule/ [email protected]", "grant capsule\\047 [email protected] wildcard * ANY;", "grant capsule\\047 [email protected] wildcard * ANY;", "satellite-installer --foreman-proxy-dns-managed=false --foreman-proxy-dns-provider=nsupdate_gss --foreman-proxy-dns-server=\" idm1.example.com \" --foreman-proxy-dns-tsig-keytab=/etc/foreman-proxy/dns.keytab --foreman-proxy-dns-tsig-principal=\"capsule/ [email protected] \" --foreman-proxy-dns=true", "######################################################################## include \"/etc/rndc.key\"; controls { inet _IdM_Server_IP_Address_ port 953 allow { _Satellite_IP_Address_; } keys { \"rndc-key\"; }; }; ########################################################################", "systemctl reload named", "grant \"rndc-key\" zonesub ANY;", "scp /etc/rndc.key root@ satellite.example.com :/etc/rndc.key", "restorecon -v /etc/rndc.key chown -v root:named /etc/rndc.key chmod -v 640 /etc/rndc.key", "usermod -a -G named foreman-proxy", "satellite-installer --foreman-proxy-dns-managed=false --foreman-proxy-dns-provider=nsupdate --foreman-proxy-dns-server=\" IdM_Server_IP_Address \" --foreman-proxy-dns-ttl=86400 --foreman-proxy-dns=true --foreman-proxy-keyfile=/etc/rndc.key", "key \"rndc-key\" { algorithm hmac-md5; secret \" secret-key ==\"; };", "echo -e \"server 192.168.25.1\\n update add test.example.com 3600 IN A 192.168.25.20\\n send\\n\" | nsupdate -k /etc/rndc.key", "nslookup test.example.com 192.168.25.1", "Server: 192.168.25.1 Address: 192.168.25.1#53 Name: test.example.com Address: 192.168.25.20", "echo -e \"server 192.168.25.1\\n update delete test.example.com 3600 IN A 192.168.25.20\\n send\\n\" | nsupdate -k /etc/rndc.key", "nslookup test.example.com 192.168.25.1", "satellite-installer", "satellite-installer --foreman-proxy-dns-managed=true --foreman-proxy-dns-provider=nsupdate --foreman-proxy-dns-server=\"127.0.0.1\" --foreman-proxy-dns=true", "satellite-maintain packages install ipa-client", "ipa-client-install", "foreman-prepare-realm admin realm-capsule", "scp /root/freeipa.keytab root@ capsule.example.com :/etc/foreman-proxy/freeipa.keytab", "mv /root/freeipa.keytab /etc/foreman-proxy chown foreman-proxy:foreman-proxy /etc/foreman-proxy/freeipa.keytab", "satellite-installer --foreman-proxy-realm true --foreman-proxy-realm-keytab /etc/foreman-proxy/freeipa.keytab --foreman-proxy-realm-principal [email protected] --foreman-proxy-realm-provider freeipa", "cp /etc/ipa/ca.crt /etc/pki/ca-trust/source/anchors/ipa.crt update-ca-trust enable update-ca-trust", "systemctl restart foreman-proxy", "ipa hostgroup-add hostgroup_name --desc= hostgroup_description", "ipa automember-add --type=hostgroup hostgroup_name automember_rule", "ipa automember-add-condition --key=userclass --type=hostgroup --inclusive-regex= ^webserver 
hostgroup_name ---------------------------------- Added condition(s) to \" hostgroup_name \" ---------------------------------- Automember Rule: automember_rule Inclusive Regex: userclass= ^webserver ---------------------------- Number of conditions added 1 ----------------------------", "dnf module list --enabled", "dnf module list --enabled", "dnf module reset ruby", "dnf module list --enabled", "dnf module reset postgresql", "dnf module enable satellite:el8", "dnf install postgresql-upgrade", "postgresql-setup --upgrade", "apache::server_tokens: Prod", "apache::server_signature: Off", "cp /etc/dhcp/dhcpd.conf /etc/dhcp/dhcpd.backup", "journalctl -xe /Stage[main]/Dhcp/File[/etc/dhcp/dhcpd.conf]: Filebucketed /etc/dhcp/dhcpd.conf to puppet with sum 622d9820b8e764ab124367c68f5fa3a1", "puppet filebucket restore --local --bucket /var/lib/puppet/clientbucket /etc/dhcp/dhcpd.conf \\ 622d9820b8e764ab124367c68f5fa3a1" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html-single/installing_satellite_server_in_a_connected_network_environment/index
6.5. Removing Lost Physical Volumes from a Volume Group
6.5. Removing Lost Physical Volumes from a Volume Group If you lose a physical volume, you can activate the remaining physical volumes in the volume group with the --partial argument of the vgchange command. You can remove all the logical volumes that used that physical volume from the volume group with the --removemissing argument of the vgreduce command. You should run the vgreduce command with the --test argument first to verify what you will be destroying. Like most LVM operations, the vgreduce command is reversible if you immediately use the vgcfgrestore command to restore the volume group metadata to its previous state. For example, if you used the --removemissing argument of the vgreduce command without the --test argument and find that you have removed logical volumes you wanted to keep, you can still replace the physical volume and use another vgcfgrestore command to return the volume group to its previous state.
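A minimal shell sketch of the recovery sequence just described; the volume group name my_vg is a placeholder for illustration, not taken from the text:
# Activate the remaining physical volumes despite the lost one
vgchange --activate y --partial my_vg
# Dry run first: report which logical volumes would be removed
vgreduce --removemissing --test my_vg
# Remove the logical volumes that used the lost physical volume
vgreduce --removemissing my_vg
# If you removed a logical volume you wanted to keep, immediately
# restore the previous volume group metadata
vgcfgrestore my_vg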
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/logical_volume_manager_administration/lost_PV_remove_from_VG
Chapter 3. Distribution of content in RHEL 8
Chapter 3. Distribution of content in RHEL 8 3.1. Installation Red Hat Enterprise Linux 8 is installed using ISO images. Two types of ISO image are available for the AMD64, Intel 64-bit, 64-bit ARM, IBM Power Systems, and IBM Z architectures: Binary DVD ISO: A full installation image that contains the BaseOS and AppStream repositories and allows you to complete the installation without additional repositories. Note The Binary DVD ISO image is larger than 4.7 GB, and as a result, it might not fit on a single-layer DVD. A dual-layer DVD or USB key is recommended when using the Binary DVD ISO image to create bootable installation media. You can also use the Image Builder tool to create customized RHEL images. For more information about Image Builder, see the Composing a customized RHEL system image document. Boot ISO: A minimal boot ISO image that is used to boot into the installation program. This option requires access to the BaseOS and AppStream repositories to install software packages. The repositories are part of the Binary DVD ISO image. See the Interactively installing RHEL from installation media document for instructions on downloading ISO images, creating installation media, and completing a RHEL installation. For automated Kickstart installations and other advanced topics, see the Automatically installing RHEL document. 3.2. Repositories Red Hat Enterprise Linux 8 is distributed through two main repositories: BaseOS AppStream Both repositories are required for a basic RHEL installation, and are available with all RHEL subscriptions. Content in the BaseOS repository is intended to provide the core set of the underlying OS functionality that provides the foundation for all installations. This content is available in the RPM format and is subject to support terms similar to those in releases of RHEL. For a list of packages distributed through BaseOS, see the Package manifest . Content in the Application Stream repository includes additional user space applications, runtime languages, and databases in support of the varied workloads and use cases. Application Streams are available in the familiar RPM format, as an extension to the RPM format called modules , or as Software Collections. For a list of packages available in AppStream, see the Package manifest . In addition, the CodeReady Linux Builder repository is available with all RHEL subscriptions. It provides additional packages for use by developers. Packages included in the CodeReady Linux Builder repository are unsupported. For more information about RHEL 8 repositories, see the Package manifest . 3.3. Application Streams Red Hat Enterprise Linux 8 introduces the concept of Application Streams. Multiple versions of user space components are now delivered and updated more frequently than the core operating system packages. This provides greater flexibility to customize Red Hat Enterprise Linux without impacting the underlying stability of the platform or specific deployments. Components made available as Application Streams can be packaged as modules or RPM packages and are delivered through the AppStream repository in RHEL 8. Each Application Stream component has a given life cycle, either the same as RHEL 8 or shorter. For details, see Red Hat Enterprise Linux Life Cycle . Modules are collections of packages representing a logical unit: an application, a language stack, a database, or a set of tools. These packages are built, tested, and released together. Module streams represent versions of the Application Stream components. 
For example, several streams (versions) of the PostgreSQL database server are available in the postgresql module with the default postgresql:10 stream. Only one module stream can be installed on the system. Different versions can be used in separate containers. Detailed module commands are described in the Installing, managing, and removing user-space components document. For a list of modules available in AppStream, see the Package manifest . 3.4. Package management with YUM/DNF On Red Hat Enterprise Linux 8, software installation is handled by the YUM tool, which is based on the DNF technology. We deliberately adhere to usage of the yum term for consistency with previous major versions of RHEL. However, if you type dnf instead of yum , the command works as expected because yum is an alias to dnf for compatibility. For more details, see the following documentation: Installing, managing, and removing user-space components Considerations in adopting RHEL 8
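A short sketch of the module workflow described above, using the postgresql module from the example; the stream shown is the default named in the text:
# List the streams (versions) available for the postgresql module
yum module list postgresql
# Install the default stream
yum module install postgresql:10
# dnf behaves identically, because yum is an alias to dnf
dnf module list postgresql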
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.2_release_notes/Distribution-of-content-in-RHEL-8
Providing feedback on Workload Availability for Red Hat OpenShift documentation
Providing feedback on Workload Availability for Red Hat OpenShift documentation We appreciate your feedback on our documentation. Let us know how we can improve it. To do so: Go to the JIRA website. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Enter your username in the Reporter field. Enter the affected versions in the Affects Version/s field. Click Create at the bottom of the dialog.
null
https://docs.redhat.com/en/documentation/workload_availability_for_red_hat_openshift/24.4/html/remediation_fencing_and_maintenance/proc_providing-feedback-on-workload-availability-for-red-hat-openshift-documentation_preface
9.5. Creating Rules
9.5. Creating Rules Rules determine what certificate object is published in what location. Rules work independently, not in tandem. A certificate or CRL that is being published is matched against every rule. Any rule which it matches is activated. In this way, the same certificate or CRL can be published to a file, to an Online Certificate Status Manager, and to an LDAP directory by matching a file-based rule, an OCSP rule, and a directory-based rule. Rules can be set for each object type: CA certificates, CRLs, user certificates, and cross-pair certificates. The rules can be more detailed for different kinds of certificates or different kinds of CRLs. The rule first determines whether the object matches by comparing the type and predicate set up in the rule with the object. Where matching objects are published is determined by the publisher and mapper associated with the rule. Rules are created for each type of certificate the Certificate Manager issues. Modify publishing rules by doing the following: Log into the Certificate Manager Console. In the Configuration tab, select Certificate Manager from the navigation tree on the left. Select Publishing , and then Rules . The Rules Management tab, which lists configured rules, opens on the right. To edit an existing rule, select that rule from the list, and click Edit . This opens the Rule Editor window. To create a rule, click Add . This opens the Select Rule Plug-in Implementation window. Select the Rule module. This is the only default module. If any custom modules have been registered, they are also available. Edit the rule. type . This is the type of certificate for which the rule applies. For a CA signing certificate, the value is cacert . For a cross-signed certificate, the value is xcert . For all other types of certificates, the value is certs . For CRLs, specify crl . predicate . This sets the predicate value for the type of certificate or CRL issuing point to which this rule applies. The predicate values for CRL issuing points, delta CRLs, and certificates are listed in Table 9.3, "Predicate Expressions" . enable . This sets whether the rule is enabled. mapper . Mappers are not necessary when publishing to a file; they are only needed for LDAP publishing. If this rule is associated with a publisher that publishes to an LDAP directory, select an appropriate mapper here. Leave blank for all other forms of publishing. publisher . Sets the publisher to associate with the rule. Table 9.3, "Predicate Expressions" lists the predicates that can be used to identify CRL issuing points and delta CRLs and certificate profiles. Table 9.3. Predicate Expressions Predicate Type Predicate CRL Issuing Point issuingPointId== Issuing_Point_Instance_ID && isDeltaCRL==[true|false] To publish only the master CRL, set isDeltaCRL==false . To publish only the delta CRL, set isDeltaCRL==true . To publish both, set a rule for the master CRL and another rule for the delta CRL. Certificate Profile profileId== profile_name To publish certificates based on the profile used to issue them, set profileId== to a profile name, such as caServerCert . Note pkiconsole is being deprecated.
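For instance, combining the expressions from Table 9.3 (the issuing point name MasterCRL is illustrative, not taken from the original text), a rule that publishes only the master CRL from one issuing point would use the predicate:
issuingPointId==MasterCRL && isDeltaCRL==false
while a rule restricted to certificates issued through the caServerCert profile would use:
profileId==caServerCert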
[ "pkiconsole https://server.example.com:8443/ca" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/Creating_Rules
13.5. Installing in the Graphical User Interface
13.5. Installing in the Graphical User Interface The graphical installation interface is the preferred method of manually installing Red Hat Enterprise Linux. It allows you full control over all available settings, including custom partitioning and advanced storage configuration, and it is also localized to many languages other than English, allowing you to perform the entire installation in a different language. The graphical mode is used by default when you boot the system from local media (a CD, DVD, or USB flash drive). Figure 13.2. The Installation Summary Screen The sections below discuss each screen available in the installation process. Note that due to the installer's parallel nature, most of the screens do not have to be completed in the order in which they are described here. Each screen in the graphical interface contains a Help button. This button opens the Yelp help browser displaying the section of the Red Hat Enterprise Linux Installation Guide relevant to the current screen. You can also control the graphical installer with your keyboard. The following table shows the shortcuts you can use. Table 13.2. Graphical installer keyboard shortcuts Shortcut keys Usage Tab and Shift + Tab Cycle through active control elements (buttons, check boxes, and so on) on the current screen Up and Down Scroll through lists Left and Right Scroll through horizontal toolbars and table entries Space and Enter Select or remove a highlighted item from selection and expand and collapse drop-down menus Additionally, elements in each screen can be toggled using their respective shortcuts. These shortcuts are highlighted (underlined) when you hold down the Alt key; to toggle that element, press Alt + X , where X is the highlighted letter. Your current keyboard layout is displayed in the top right-hand corner. Only one layout is configured by default; if you configure more than one layout in the Keyboard Layout screen ( Section 13.10, "Keyboard Configuration" ), you can switch between them by clicking the layout indicator.
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/sect-installation-graphical-mode-ppc
7.3. Configuring Identity and Authentication Providers for SSSD
7.3. Configuring Identity and Authentication Providers for SSSD 7.3.1. Introduction to Identity and Authentication Providers for SSSD SSSD Domains. Identity and Authentication Providers Identity and authentication providers are configured as domains in the SSSD configuration file. A single domain can be used as: An identity provider (for user information) An authentication provider (for authentication requests) An access control provider (for authorization requests) A combination of these providers (if all the corresponding operations are performed within a single server) You can configure multiple domains for SSSD. At least one domain must be configured, otherwise SSSD will not start. The access_provider option in the /etc/sssd/sssd.conf file sets the access control provider used for the domain. By default, the option is set to permit , which always allows all access. See the sssd.conf (5) man page for details. Proxy Providers A proxy provider works as an intermediary relay between SSSD and resources that SSSD would otherwise not be able to use. When using a proxy provider, SSSD connects to the proxy service, and the proxy loads the specified libraries. Using a proxy provider, you can configure SSSD to use: Alternative authentication methods, such as a fingerprint scanner Legacy systems, such as NIS A local system account defined in /etc/passwd and remote authentication Available Combinations of Identity and Authentication Providers Table 7.1. Available Combinations of Identity and Authentication Providers Identity Provider Authentication Provider Identity Management [a] Identity Management [a] Active Directory [a] Active Directory [a] LDAP LDAP LDAP Kerberos proxy proxy proxy LDAP proxy Kerberos [a] An extension of the LDAP provider type. Note that this guide does not describe all provider types. See the following additional resources for more information: To configure an SSSD client for Identity Management, Red Hat recommends using the ipa-client-install utility. See Installing and Uninstalling Identity Management Clients in the Linux Domain Identity, Authentication, and Policy Guide . To configure an SSSD client for Identity Management manually without ipa-client-install , see Installing and Uninstalling an Identity Management Client Manually in Red Hat Knowledgebase. To configure Active Directory to be used with SSSD, see Using Active Directory as an Identity Provider for SSSD in the Windows Integration Guide . 7.3.2. Configuring an LDAP Domain for SSSD Prerequisites Install SSSD. Configure SSSD to Discover the LDAP Domain Open the /etc/sssd/sssd.conf file. Create a [domain] section for the LDAP domain: Specify if you want to use the LDAP server as an identity provider, an authentication provider, or both. To use the LDAP server as an identity provider, set the id_provider option to ldap . To use the LDAP server as an authentication provider, set the auth_provider option to ldap . For example, to use the LDAP server as both: Specify the LDAP server. Choose one of the following: To explicitly define the server, specify the server's URI with the ldap_uri option: The ldap_uri option also accepts the IP address of the server. However, using an IP address instead of the server name might cause TLS/SSL connections to fail. See Configuring an SSSD Provider to Use an IP Address in the Certificate Subject Name in Red Hat Knowledgebase. To configure SSSD to discover the server dynamically using DNS service discovery, see Section 7.4.3, "Configuring DNS Service Discovery" . 
Optionally, specify backup servers in the ldap_backup_uri option as well. Specify the LDAP server's search base in the ldap_search_base option: Specify a way to establish a secure connection to the LDAP server. The recommended method is to use a TLS connection. To do this, enable the ldap_id_use_start_tls option, and use these CA certificate-related options: ldap_tls_reqcert specifies if the client requests a server certificate and what checks are performed on the certificate ldap_tls_cacert specifies the file containing the certificate Note SSSD always uses an encrypted channel for authentication, which ensures that passwords are never sent over the network unencrypted. With ldap_id_use_start_tls = true , identity lookups (such as commands based on the id or getent utilities) are also encrypted. Add the new domain to the domains option in the [sssd] section. The option lists the domains that SSSD queries. For example: Additional Resources The above procedure shows the basic options for an LDAP provider. For more details, see: the sssd.conf (5) man page, which describes global options available for all types of domains the sssd-ldap (5) man page, which describes options specific to LDAP 7.3.3. Configuring the Files Provider for SSSD The files provider mirrors the content of the /etc/passwd and /etc/group files to make users and groups from these files available through SSSD. This enables you to set the sss database as the first source for users and groups in the /etc/nsswitch.conf file: With this setting, and if the files provider is configured in /etc/sssd/sssd.conf , Red Hat Enterprise Linux sends all queries for users and groups first to SSSD. If SSSD is not running or SSSD cannot find the requested entry, the system falls back to looking up users and groups in the local files. If you store most users and groups in a central database, such as an LDAP directory, this setting increases the speed of user and group lookups. Prerequisites Install SSSD. Configure SSSD to Discover the Files Domain Add the following section to the /etc/sssd/sssd.conf file: Optionally, set the sss database as the first source for user and group lookups in the /etc/nsswitch.conf file: Configure the system so that the sssd service starts when the system boots: Restart the sssd service: Additional Resources The above procedure shows the basic options for the files provider. For more details, see: the sssd.conf (5) man page, which describes global options available for all types of domains the sssd-files (5) man page, which describes options specific to the files provider 7.3.4. Configuring a Proxy Provider for SSSD Prerequisites Install SSSD. Configure SSSD to Discover the Proxy Domain Open the /etc/sssd/sssd.conf file. Create a [domain] section for the proxy provider: To specify an authentication provider: Set the auth_provider option to proxy . Use the proxy_pam_target option to specify a PAM service as the authentication proxy. For example: Important Ensure that the proxy PAM stack does not recursively include pam_sss.so . To specify an identity provider: Set the id_provider option to proxy . Use the proxy_lib_name option to specify an NSS library as the identity proxy. For example: Add the new domain to the domains option in the [sssd] section. The option lists the domains that SSSD queries. For example: Additional Resources The above procedure shows the basic options for a proxy provider.
For more details, see the sssd.conf (5) man page, which describes global options available for all types of domains and other proxy-related options. 7.3.5. Configuring a Kerberos Authentication Provider Prerequisites Install SSSD. Configure SSSD to Discover the Kerberos Domain Open the /etc/sssd/sssd.conf file. Create a [domain] section for the SSSD domain. Specify an identity provider. For example, for details on configuring an LDAP identity provider, see Section 7.3.2, "Configuring an LDAP Domain for SSSD" . If the Kerberos principal names are not available in the specified identity provider, SSSD constructs the principals using the format username@REALM . Specify the Kerberos authentication provider details: Set the auth_provider option to krb5 . Specify the Kerberos server: To explicitly define the server, use the krb5_server option. The option accepts the host name or IP address of the server: To configure SSSD to discover the server dynamically using DNS service discovery, see Section 7.4.3, "Configuring DNS Service Discovery" . Optionally, specify backup servers in the krb5_backup_server option as well. If the Change Password service is not running on the KDC specified in krb5_server or krb5_backup_server , use the krb5_passwd option to specify the server where the service is running. If krb5_passwd is not used, SSSD uses the KDC specified in krb5_server or krb5_backup_server . Use the krb5_realm option to specify the name of the Kerberos realm. Add the new domain to the domains option in the [sssd] section. The option lists the domains that SSSD queries. For example: Additional Resources The above procedure shows the basic options for a Kerberos provider. For more details, see: the sssd.conf (5) man page, which describes global options available for all types of domains the sssd-krb5 (5) man page, which describes options specific to Kerberos
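After editing /etc/sssd/sssd.conf for any of these providers, the configuration can be applied and sanity-checked as sketched below; the sssctl utility is provided by the sssd-tools package, and the user name is a placeholder:
# Restart SSSD to pick up the new domain configuration
systemctl restart sssd
# Validate the syntax and file permissions of sssd.conf
sssctl config-check
# Confirm that a user from the new domain resolves through SSSD
getent passwd user_name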
[ "yum install sssd", "[domain/ LDAP_domain_name ]", "[domain/ LDAP_domain_name ] id_provider = ldap auth_provider = ldap", "[domain/ LDAP_domain_name ] id_provider = ldap auth_provider = ldap ldap_uri = ldap://ldap.example.com", "[domain/ LDAP_domain_name ] id_provider = ldap auth_provider = ldap ldap_uri = ldap://ldap.example.com ldap_search_base = dc= example ,dc= com", "[domain/ LDAP_domain_name ] id_provider = ldap auth_provider = ldap ldap_uri = ldap://ldap.example.com ldap_search_base = dc=example,dc=com ldap_id_use_start_tls = true ldap_tls_reqcert = demand ldap_tls_cacert = /etc/pki/tls/certs/ca-bundle.crt", "domains = LDAP_domain_name , domain2", "passwd: sss files group: sss files", "yum install sssd", "[domain/files] id_provider = files", "passwd: sss files group: sss files", "systemctl enable sssd", "systemctl restart sssd", "yum install sssd", "[domain/ proxy_name ]", "[domain/ proxy_name ] auth_provider = proxy proxy_pam_target = sssdpamproxy", "[domain/ proxy_name ] id_provider = proxy proxy_lib_name = nis", "domains = proxy_name , domain2", "yum install sssd", "[domain/ Kerberos_domain_name ]", "[domain/ Kerberos_domain_name ] id_provider = ldap auth_provider = krb5", "[domain/ Kerberos_domain_name ] id_provider = ldap auth_provider = krb5 krb5_server = kdc.example.com", "[domain/ Kerberos_domain_name ] id_provider = ldap auth_provider = krb5 krb5_server = kdc.example.com krb5_backup_server = kerberos.example.com krb5_passwd = kerberos.admin.example.com", "[domain/ Kerberos_domain_name ] id_provider = ldap auth_provider = krb5 krb5_server = kerberos.example.com krb5_backup_server = kerberos2.example.com krb5_passwd = kerberos.admin.example.com krb5_realm = EXAMPLE.COM", "domains = Kerberos_domain_name , domain2" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system-level_authentication_guide/configuring_domains
Chapter 91. JarArtifact schema reference
Chapter 91. JarArtifact schema reference Used in: Plugin Property Property type Description url string URL of the artifact which will be downloaded. Streams for Apache Kafka does not do any security scanning of the downloaded artifacts. For security reasons, you should first verify the artifacts manually and configure the checksum verification to make sure the same artifact is used in the automated build. Required for jar , zip , tgz and other artifacts. Not applicable to the maven artifact type. sha512sum string SHA512 checksum of the artifact. Optional. If specified, the checksum will be verified while building the new container. If not specified, the downloaded artifact will not be verified. Not applicable to the maven artifact type. insecure boolean By default, connections using TLS are verified to check they are secure. The server certificate used must be valid, trusted, and contain the server name. By setting this option to true , all TLS verification is disabled and the artifact will be downloaded, even when the server is considered insecure. type string Must be jar .
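A minimal sketch of how a JarArtifact is typically declared inside a connector Plugin in a KafkaConnect build specification; the plugin name, URL, and checksum below are placeholders, not real artifacts:
plugins:
  - name: my-connector                 # hypothetical plugin name
    artifacts:
      - type: jar
        url: https://example.com/artifacts/my-connector.jar
        sha512sum: 1a2b3c...           # checksum of the manually verified artifact, checked during the build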
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-jarartifact-reference
Part VI. Manage
Part VI. Manage
null
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_amq_interconnect/management
Getting Started with Red Hat build of Apache Camel for Spring Boot
Getting Started with Red Hat build of Apache Camel for Spring Boot Red Hat build of Apache Camel 4.4
[ "<dependencyManagement> <dependencies> <!-- Camel BOM --> <dependency> <groupId>com.redhat.camel.springboot.platform</groupId> <artifactId>camel-spring-boot-bom</artifactId> <version>4.4.0.redhat-00039</version> <type>pom</type> <scope>import</scope> </dependency> <!-- ... other BOMs or dependencies ... --> </dependencies> </dependencyManagement>", "<dependencies> <!-- Camel Starter --> <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-spring-boot-starter</artifactId> </dependency> <!-- ... other dependencies ... --> </dependencies>", "<dependencies> <!-- ... other dependencies ... --> <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-paho-mqtt5</artifactId> </dependency> </dependencies>", "camel.component.paho-mqtt5.broker-url=tcp://localhost:61616", "import org.apache.camel.builder.RouteBuilder; import org.springframework.stereotype.Component; @Component public class MyRoute extends RouteBuilder { @Override public void configure() throws Exception { from(\"...\") .to(\"...\"); } }", "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-spring-boot</artifactId> <version>4.4.0.redhat-00039</version> <!-- use the same version as your Camel core version --> </dependency>", "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-spring-boot-bom</artifactId> <version>4.4.0.redhat-00039</version> <!-- use the same version as your Camel core version --> </dependency>", "package com.example; import org.apache.camel.builder.RouteBuilder; import org.springframework.stereotype.Component; @Component public class MyRoute extends RouteBuilder { @Override public void configure() throws Exception { from(\"timer:foo\").to(\"log:bar\"); } }", "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-spring-boot-starter</artifactId> <version>4.4.0.redhat-00039</version> <!-- use the same version as your Camel core version --> </dependency>", "@Configuration public class MyAppConfig { @Autowired CamelContext camelContext; @Bean MyService myService() { return new DefaultMyService(camelContext); } }", "@Component public class MyRouter extends RouteBuilder { @Override public void configure() throws Exception { from(\"jms:invoices\").to(\"file:/invoices\"); } }", "@Configuration public class MyRouterConfiguration { @Bean RoutesBuilder myRouter() { return new RouteBuilder() { @Override public void configure() throws Exception { from(\"jms:invoices\").to(\"file:/invoices\"); } }; } }", "route.from = jms:invoices", "java -Droute.to=jms:processed.invoices -jar mySpringApp.jar", "@Component public class MyRouter extends RouteBuilder { @Override public void configure() throws Exception { from(\"{{route.from}}\").to(\"{{route.to}}\"); } }", "@Configuration public class MyAppConfig { @Bean CamelContextConfiguration contextConfiguration() { return new CamelContextConfiguration() { @Override void beforeApplicationStart(CamelContext context) { // your custom configuration goes here } }; } }", "@Component public class InvoiceProcessor { @Autowired private ProducerTemplate producerTemplate; @Autowired private ConsumerTemplate consumerTemplate; public void processNextInvoice() { Invoice invoice = consumerTemplate.receiveBody(\"jms:invoices\", Invoice.class); producerTemplate.sendBody(\"netty-http:http://invoicing.com/received/\" + invoice.id()); } }", "camel.springboot.consumer-template-cache-size = 100 camel.springboot.producer-template-cache-size = 200", "@Component public class InvoiceProcessor { 
@Autowired private TypeConverter typeConverter; public long parseInvoiceValue(Invoice invoice) { String invoiceValue = invoice.grossValue(); return typeConverter.convertTo(Long.class, invoiceValue); } }", "@Component public class InvoiceProcessor { @Autowired private TypeConverter typeConverter; public UUID parseInvoiceId(Invoice invoice) { // Using Spring's StringToUUIDConverter return typeConverter.convertTo(UUID.class, invoice.getId()); } }", "camel.springboot.main-run-controller = true", "turn off camel.springboot.routes-include-pattern = false", "scan only in the com/foo/routes classpath camel.springboot.routes-include-pattern = classpath:com/foo/routes/*.xml", "<routes xmlns=\"http://camel.apache.org/schema/spring\"> <route id=\"test\"> <from uri=\"timer://trigger\"/> <transform> <simple>ref:myBean</simple> </transform> <to uri=\"log:out\"/> </route> </routes>", "<dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-test</artifactId> <version>3.2.11</version> <!-- Use the same version as your Spring Boot version --> <scope>test</scope> </dependency> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-test-spring-junit5</artifactId> <version>4.4.0.redhat-00046</version> <!-- use the same version as your Camel core version --> <scope>test</scope> </dependency>", "@CamelSpringBootTest @SpringBootApplication @MockEndpoints(\"direct:end\") public class MyApplicationTest { @Autowired private ProducerTemplate template; @EndpointInject(\"mock:direct:end\") private MockEndpoint mock; @Test public void testReceive() throws Exception { mock.expectedBodiesReceived(\"Hello\"); template.sendBody(\"direct:start\", \"Hello\"); mock.assertIsSatisfied(); } }", "camel.component.paho-mqtt5.broker-url=tcp://localhost:61616", "camel.dataformat.csv.delimiter=;", "camel.component.jms.transactionManager=#bean:myjtaTransactionManager", "@Bean(\"myjtaTransactionManager\") public JmsTransactionManager myjtaTransactionManager(PooledConnectionFactory pool) { JmsTransactionManager manager = new JmsTransactionManager(pool); manager.setDefaultTimeout(45); return manager; }", "@Bean(\"kafka\") public KafkaComponent kafka(KafkaConfiguration kafkaconfiguration){ return ComponentsBuilderFactory.kafka() .brokers(\"{{kafka.host}}:{{kafka.port}}\") .build(); }", "mvn archetype:generate -DarchetypeGroupId=org.apache.camel.archetypes -DarchetypeArtifactId=camel-archetype-spring-boot -DarchetypeVersion=4.4.0.redhat-00039 -DgroupId=com.redhat -DartifactId=csb-app -Dversion=1.0-SNAPSHOT -DinteractiveMode=false", "mvn package -f csb-app/pom.xml", "java -jar csb-app/target/csb-app-1.0-SNAPSHOT.jar", "com.redhat.MySpringBootApplication : Started MySpringBootApplication in 3.514 seconds (JVM running for 4.006) Hello World Hello World", "mvn clean -DskipTests oc:deploy -Popenshift", "logs -f dc/csb-app", "<build> <plugins> <plugin> <groupId>com.redhat.camel.springboot.platform</groupId> <artifactId>patch-maven-plugin</artifactId> <version>USD{camel-spring-boot-version}</version> <extensions>true</extensions> </plugin> </plugins> </build>", "mvn clean install [INFO] Scanning for projects [INFO] ========== Red Hat Maven patching ========== [INFO] [PATCH] No project in the reactor uses Camel on Spring Boot product BOM. Skipping patch processing.
[INFO] [PATCH] Done in 7ms =================================================", "mvn clean install [INFO] Scanning for projects [INFO] ========== Red Hat Maven patching ========== [INFO] [PATCH] Reading patch metadata and artifacts from 2 project repositories [INFO] [PATCH] - redhat-ga-repository: http://maven.repository.redhat.com/ga/ [INFO] [PATCH] - central: https://repo.maven.apache.org/maven2 Downloading from redhat-ga-repository: http://maven.repository.redhat.com/ga/com/redhat/camel/springboot/platform/redhat-camel-spring-boot-patch-metadata/maven-metadata.xml Downloading from central: https://repo.maven.apache.org/maven2/com/redhat/camel/springboot/platform/redhat-camel-spring-boot-patch-metadata/maven-metadata.xml [INFO] [PATCH] Resolved patch descriptor: /path/to/.m2/repository/com/redhat/camel/springboot/platform/redhat-camel-spring-boot-patch-metadata/3.20.1.redhat-00043/redhat-camel-spring-boot-patch-metadata-3.20.1.redhat-00043.xml [INFO] [PATCH] Patch metadata found for com.redhat.camel.springboot.platform/camel-spring-boot-bom/[3.20,3.21) [INFO] [PATCH] Done in 938ms =================================================", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <metadata> <groupId>com.redhat.camel.springboot.platform</groupId> <artifactId>redhat-camel-spring-boot-patch-metadata</artifactId> <versioning> <release>3.20.1.redhat-00041</release> <versions> <version>3.20.1.redhat-00041</version> </versions> <lastUpdated>20230322103858</lastUpdated> </versioning> </metadata>", "mvn clean install [INFO] Scanning for projects [INFO] ========== Red Hat Maven patching ========== [INFO] [PATCH] Reading patch metadata and artifacts from 2 project repositories [INFO] [PATCH] - MRRC-GA: https://maven.repository.redhat.com/ga [INFO] [PATCH] - central: https://repo.maven.apache.org/maven2", "<?xml version=\"1.0\" encoding=\"UTF-8\" ?> <metadata xmlns=\"urn:redhat:patch-metadata:1\"> <product-bom groupId=\"com.redhat.camel.springboot.platform\" artifactId=\"camel-spring-boot-bom\" versions=\"[3.20,3.21)\" /> <cves> </cves> <fixes> <fix id=\"HF0-1\" description=\"logback-classic (Example) - Version Bump\"> <affects groupId=\"ch.qos.logback\" artifactId=\"logback-classic\" versions=\"[1.0,1.3.0)\" fix=\"1.3.0\" /> </fix> </fixes> </metadata>", "mvn dependency:tree [INFO] Scanning for projects [INFO] ========== Red Hat Maven patching ========== [INFO] [PATCH] Reading patch metadata and artifacts from 3 project repositories [INFO] [PATCH] - redhat-ga-repository: http://maven.repository.redhat.com/ga/ [INFO] [PATCH] - local: file:///path/to/.m2/repository [INFO] [PATCH] - central: https://repo.maven.apache.org/maven2 [INFO] [PATCH] Resolved patch descriptor:/path/to/.m2/repository/com/redhat/camel/springboot/platform/redhat-camel-spring-boot-patch-metadata/3.20.1.redhat-00043/redhat-camel-spring-boot-patch-metadata-3.20.1.redhat-00043.xml [INFO] [PATCH] Patch metadata found for com.redhat.camel.springboot.platform/camel-spring-boot-bom/[3.20,3.21) [INFO] [PATCH] - patch contains 1 patch fix [INFO] [PATCH] Processing managed dependencies to apply patch fixes [INFO] [PATCH] - HF0-1: logback-classic (Example) - Version Bump [INFO] [PATCH] Applying change ch.qos.logback/logback-classic/[1.0,1.3.0) -> 1.3.0 [INFO] [PATCH] Project com.test:yaml-routes [INFO] [PATCH] - managed dependency: ch.qos.logback/logback-classic/1.2.11 -> 1.3.0 [INFO] [PATCH] Done in 39ms =================================================", "<build> <plugins> <plugin> <groupId>com.redhat.camel.springboot.platform</groupId>
<artifactId>patch-maven-plugin</artifactId> <version>USD{camel-spring-boot-version}</version> <extensions>true</extensions> <configuration> <skip>true</skip> </configuration> </plugin> </plugins> </build>", "mvn clean install -DskipPatch [INFO] Scanning for projects [INFO] [INFO] -------------------------< com.example:test-csb >------------------------- [INFO] Building A Camel Spring Boot Route 1.0-SNAPSHOT", "<build> <plugins> <plugin> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-maven-plugin</artifactId> </plugin> <plugin> <groupId>org.apache.camel</groupId> <artifactId>camel-restdsl-openapi-plugin</artifactId> <version>{CamelCommunityVersion}</version> </plugin> </plugins> </build>", "USDmvn camel-restdsl-openapi:generate", "<dependency> <groupId>com.google.code.gson</groupId> <artifactId>gson</artifactId> <version>2.10.1</version> </dependency> <dependency> <groupId>io.swagger.core.v3</groupId> <artifactId>swagger-core</artifactId> <version>2.2.8</version> </dependency> <dependency> <groupId>org.threeten</groupId> <artifactId>threetenbp</artifactId> <version>1.6.8</version> </dependency>", "<dependency> <groupId>com.google.code.gson</groupId> <artifactId>gson</artifactId> <version>2.10.1</version> </dependency> <dependency> <groupId>io.swagger.core.v3</groupId> <artifactId>swagger-core</artifactId> <version>2.2.8</version> </dependency> <dependency> <groupId>org.threeten</groupId> <artifactId>threetenbp</artifactId> <version>1.6.8</version> </dependency>", "<dependency> <groupId>com.google.code.gson</groupId> <artifactId>gson</artifactId> <version>2.10.1</version> </dependency> <dependency> <groupId>io.swagger.core.v3</groupId> <artifactId>swagger-core</artifactId> <version>2.2.8</version> </dependency> <dependency> <groupId>org.threeten</groupId> <artifactId>threetenbp</artifactId> <version>1.6.8</version> </dependency>", "mvn --version", "<?xml version=\"1.0\"?> <settings> <profiles> <profile> <id>extra-repos</id> <activation> <activeByDefault>true</activeByDefault> </activation> <repositories> <repository> <id>redhat-ga-repository</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> <repository> <id>redhat-ea-repository</id> <url>https://maven.repository.redhat.com/earlyaccess/all</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> <repository> <id>atlassian</id> <url>https://packages.atlassian.com/artifactory/maven-public/</url> <name>atlassian external repo</name> <snapshots> <enabled>false</enabled> </snapshots> <releases> <enabled>true</enabled> </releases> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>redhat-ga-repository</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> <pluginRepository> <id>redhat-ea-repository</id> <url>https://maven.repository.redhat.com/earlyaccess/all</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> </profiles> <activeProfiles> <activeProfile>extra-repos</activeProfile> </activeProfiles> </settings>", "├── README ├── build-offline-repo.sh ├── errors.log ├── logback.xml ├── maven-repositories.txt ├── offliner-2.0-sources.jar ├── offliner-2.0-sources.jar.md5 ├── offliner-2.0.jar ├── offliner-2.0.jar.md5 ├── 
offliner.log ├── rhaf-camel-offliner-4.4.0.txt └── rhaf-camel-spring-boot-offliner-4.4.0.txt", "mvn org.apache.maven.plugins:maven-dependency-plugin:3.1.0:go-offline -Dmaven.repo.local=/tmp/my-project", "<mirror> <id>all</id> <mirrorOf>*</mirrorOf> <url>http://host:port/path</url> </mirror>", "groupId:artifactId:version groupId:artifactId:packaging:version groupId:artifactId:packaging:classifier:version", "<project ... > <groupId>org.fusesource.example</groupId> <artifactId>bundle-demo</artifactId> <packaging>bundle</packaging> <version>1.0-SNAPSHOT</version> </project>", "<project ... > <dependencies> <dependency> <groupId>org.fusesource.example</groupId> <artifactId>bundle-demo</artifactId> <version>1.0-SNAPSHOT</version> </dependency> </dependencies> </project>", "login --user system:admin --token=my-token --server=https://my-cluster.example.com:6443", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: true", "oc -n openshift-user-workload-monitoring get pod Example output NAME READY STATUS RESTARTS AGE prometheus-operator-6f7b748d5b-t7nbg 2/2 Running 0 3h prometheus-user-workload-0 4/4 Running 1 3h prometheus-user-workload-1 4/4 Running 1 3h thanos-ruler-user-workload-0 3/3 Running 0 3h thanos-ruler-user-workload-1 3/3 Running 0 3h", "<profiles> <profile> <id>openshift</id> <build> <plugins> <plugin> <groupId>org.eclipse.jkube</groupId> <artifactId>openshift-maven-plugin</artifactId> <version>1.13.1</version> <executions> <execution> <goals> <goal>resource</goal> <goal>build</goal> </goals> </execution> </executions> </plugin> </plugins> </build> </profile> </profiles>", "expose actuator endpoint via HTTP management.endpoints.web.exposure.include=mappings,metrics,health,shutdown,jolokia,prometheus", "<dependency> <groupId>io.micrometer</groupId> <artifactId>micrometer-registry-prometheus</artifactId> </dependency> <dependency> <groupId>org.jolokia</groupId> <artifactId>jolokia-core</artifactId> <version>USD{jolokia-version}</version> </dependency> <dependency> <groupId>io.prometheus.jmx</groupId> <artifactId>collector</artifactId> <version>USD{prometheus-version}</version> </dependency>", "import org.springframework.context.annonation.Bean; import org.apache.camel.component.micrometer.MicrometerConstants; import org.apache.camel.component.micrometer.eventnotifier.MicrometerExchangeEventNotifier; import org.apache.camel.component.micrometer.eventnotifier.MicrometerRouteEventNotifier; import org.apache.camel.component.micrometer.messagehistory.MicrometerMessageHistoryFactory; import org.apache.camel.component.micrometer.routepolicy.MicrometerRoutePolicyFactory;", "@SpringBootApplication public class SampleCamelApplication { @Bean(name = {MicrometerConstants.METRICS_REGISTRY_NAME, \"prometheusMeterRegistry\"}) public PrometheusMeterRegistry prometheusMeterRegistry( PrometheusConfig prometheusConfig, CollectorRegistry collectorRegistry, Clock clock) throws MalformedObjectNameException, IOException { InputStream resource = new ClassPathResource(\"config/prometheus_exporter_config.yml\").getInputStream(); new JmxCollector(resource).register(collectorRegistry); new BuildInfoCollector().register(collectorRegistry); return new PrometheusMeterRegistry(prometheusConfig, collectorRegistry, clock); } @Bean public CamelContextConfiguration camelContextConfiguration(@Autowired PrometheusMeterRegistry registry) { return new 
CamelContextConfiguration() { @Override public void beforeApplicationStart(CamelContext camelContext) { MicrometerRoutePolicyFactory micrometerRoutePolicyFactory = new MicrometerRoutePolicyFactory(); micrometerRoutePolicyFactory.setMeterRegistry(registry); camelContext.addRoutePolicyFactory(micrometerRoutePolicyFactory); MicrometerMessageHistoryFactory micrometerMessageHistoryFactory = new MicrometerMessageHistoryFactory(); micrometerMessageHistoryFactory.setMeterRegistry(registry); camelContext.setMessageHistoryFactory(micrometerMessageHistoryFactory); MicrometerExchangeEventNotifier micrometerExchangeEventNotifier = new MicrometerExchangeEventNotifier(); micrometerExchangeEventNotifier.setMeterRegistry(registry); camelContext.getManagementStrategy().addEventNotifier(micrometerExchangeEventNotifier); MicrometerRouteEventNotifier micrometerRouteEventNotifier = new MicrometerRouteEventNotifier(); micrometerRouteEventNotifier.setMeterRegistry(registry); camelContext.getManagementStrategy().addEventNotifier(micrometerRouteEventNotifier); } @Override public void afterApplicationStart(CamelContext camelContext) { } }; }", "mvn -Popenshift oc:deploy", "get pods -n myapp NAME READY STATUS RESTARTS AGE camel-example-spring-boot-xml-2-deploy 0/1 Completed 0 13m camel-example-spring-boot-xml-2-x78rk 1/1 Running 0 13m camel-example-spring-boot-xml-s2i-2-build 0/1 Completed 0 14m", "apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: labels: k8s-app: csb-demo-monitor name: csb-demo-monitor spec: endpoints: - interval: 30s port: http scheme: http path: /actuator/prometheus selector: matchLabels: app: camel-example-spring-boot-xml", "apply -f servicemonitor.yml servicemonitor.monitoring.coreos.com/csb-demo-monitor \"myapp\" created", "get servicemonitor NAME AGE csb-demo-monitor 9m17s", "<beans xmlns=\"http://www.springframework.org/schema/beans\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd \"> <camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:a\"/> <choice> <when> <xpath>USDfoo = 'bar'</xpath> <to uri=\"direct:b\"/> </when> <when> <xpath>USDfoo = 'cheese'</xpath> <to uri=\"direct:c\"/> </when> <otherwise> <to uri=\"direct:d\"/> </otherwise> </choice> </route> </camelContext> </beans>", "<camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <routeBuilder ref=\"myBuilder\"/> </camelContext> <bean id=\"myBuilder\" class=\"org.apache.camel.spring.example.test1.MyRouteBuilder\"/>", "import org.springframework.context.annotation.Configuration; import org.springframework.context.annotation.ImportResource; /** * A Configuration class that import the Spring XML resource */ @Configuration // load the spring xml file from classpath @ImportResource(\"classpath:camel-context.xml\") public class CamelSpringXMLConfiguration { }", "<camelContext id=\"camel-A\" xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"seda:start\"/> <to uri=\"mock:result\"/> </route> </camelContext>", "<camelContext id=\"camel\" xmlns=\"http://camel.apache.org/schema/spring\"> </camelContext> <bean id=\"jmsConnectionFactory\" class=\"org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory\"> <property name=\"brokerURL\" value=\"tcp:someserver:61616\"/> </bean> <bean id=\"jms\" 
class=\"org.apache.camel.component.jms.JmsComponent\"> <property name=\"connectionFactory\"> <bean class=\"org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory\"> <property name=\"brokerURL\" value=\"tcp:someserver:61616\"/> </bean> </property> </bean>", "<camelContext> <packageScan> <package>com.foo</package> <excludes>**.*Excluded*</excludes> <includes>**.*</includes> </packageScan> </camelContext>", "<camelContext> <packageScan> <package>com.foo</package> <excludes>**.*Special*</excludes> </packageScan> </camelContext>", "<!-- enable Spring @Component scan --> <context:component-scan base-package=\"org.apache.camel.spring.issues.contextscan\"/> <camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <!-- and then let Camel use those @Component scanned route builders --> <contextScan/> </camelContext>", "@Component public class MyRoute extends RouteBuilder { @Override public void configure() throws Exception { from(\"direct:start\") .to(\"mock:result\"); } }", "<routes xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"timer:tick\"/> <setBody> <constant>Hello Camel K!</constant> </setBody> <to uri=\"log:info\"/> </route> </routes>", "kamel run my-route.xml", "camel run my-route.xml", "<camel> <bean name=\"beanFromProps\" type=\"com.acme.MyBean\"> <properties> <property key=\"field1\" value=\"f1_p\" /> <property key=\"field2\" value=\"f2_p\" /> <property key=\"nested.field1\" value=\"nf1_p\" /> <property key=\"nested.field2\" value=\"nf2_p\" /> </properties> </bean> </camel>", "<camel> <beans xmlns=\"http://www.springframework.org/schema/beans\"> <bean id=\"messageString\" class=\"java.lang.String\"> <constructor-arg index=\"0\" value=\"Hello\"/> </bean> <bean id=\"greeter\" class=\"org.apache.camel.main.app.Greeter\"> <description>Spring Bean</description> <property name=\"message\"> <bean class=\"org.apache.camel.main.app.GreeterMessage\"> <property name=\"msg\" ref=\"messageString\"/> </bean> </property> </bean> </beans> <route id=\"my-route\"> <from uri=\"direct:start\"/> <bean ref=\"greeter\"/> <to uri=\"mock:finish\"/> </route> </camel>", "<camel> <bean name=\"beanFromProps\" type=\"com.acme.MyBean\"> <constructors> <constructor index=\"0\" value=\"true\"/> <constructor index=\"1\" value=\"Hello World\"/> </constructors> <!-- and you can still have properties --> <properties> <property key=\"field1\" value=\"f1_p\" /> <property key=\"field2\" value=\"f2_p\" /> <property key=\"nested.field1\" value=\"nf1_p\" /> <property key=\"nested.field2\" value=\"nf2_p\" /> </properties> </bean> </camel>", "<bean name=\"beanFromProps\" type=\"com.acme.MyBean(true, 'Hello World')\"> <properties> <property key=\"field1\" value=\"f1_p\" /> <property key=\"field2\" value=\"f2_p\" /> <property key=\"nested.field1\" value=\"nf1_p\" /> <property key=\"nested.field2\" value=\"nf2_p\" /> </properties> </bean>", "<bean name=\"myBean\" type=\"com.acme.MyBean\" factoryMethod=\"createMyBean\"> <constructors> <constructor index=\"0\" value=\"true\"/> <constructor index=\"1\" value=\"Hello World\"/> </constructors> </bean>", "public class MyBean { public static MyBean createMyBean(boolean important, String message) { MyBean answer = // create and configure the bean return answer; } }", "<bean name=\"myBean\" type=\"com.acme.MyBean\" builderClass=\"com.acme.MyBeanBuilder\" builderMethod=\"createMyBean\"> <properties> <property key=\"id\" value=\"123\"/> <property key=\"name\" value=\"Acme\"/> </constructors> </bean>", "<bean name=\"myBean\" type=\"com.acme.MyBean\" 
factoryBean=\"com.acme.MyHelper\" factoryMethod=\"createMyBean\"> <constructors> <constructor index=\"0\" value=\"true\"/> <constructor index=\"1\" value=\"Hello World\"/> </constructors> </bean>", "public class MyHelper { public static MyBean createMyBean(boolean important, String message) { MyBean answer = // create and configure the bean return answer; } }", "<bean name=\"myBean\" type=\"com.acme.MyBean\" scriptLanguage=\"groovy\"> <script> // some groovy script here to create the bean bean = return bean </script> </bean>", "public class MyBean { public void initMe() { // do init work here } public void destroyMe() { // do cleanup work here } }", "<bean name=\"myBean\" type=\"com.acme.MyBean\" initMethod=\"initMe\" destroyMethod=\"destroyMe\"> <constructors> <constructor index=\"0\" value=\"true\"/> <constructor index=\"1\" value=\"Hello World\"/> </constructors> </bean>", "<camel xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns=\"http://camel.apache.org/schema/spring\" xsi:schemaLocation=\" http://camel.apache.org/schema/spring https://camel.apache.org/schema/spring/camel-spring.xsd\"> <rest id=\"rest\"> <post id=\"post\" path=\"start\"> <to uri=\"direct:start\"/> </post> </rest> <route> <from uri=\"direct:start\"/> <to uri=\"amqp:queue:Test.Broker.StreamMessage?jmsMessageType=Stream&amp;disableReplyTo=true\"/> </route> </camel>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html-single/getting_started_with_red_hat_build_of_apache_camel_for_spring_boot/index
Chapter 3. Login Modules Without External Identity Store
Chapter 3. Login Modules Without External Identity Store 3.1. Identity Login Module Short name : Identity Full name : org.jboss.security.auth.spi.IdentityLoginModule Parent : AbstractServer Login Module The Identity login module is a simple login module that associates a hard-coded user name with any subject authenticated against the module. It creates a SimplePrincipal instance using the name specified by the principal option. This login module is useful when a fixed identity must be provided to a service. It can also be used in development environments for testing the security associated with a given principal and associated roles. Table 3.1. Identity Login Module Options Option Type Default Description principal String guest The name to use for the principal. roles comma-separated list of Strings none A comma-delimited list of roles which will be assigned to the subject. 3.2. UsersRoles Login Module Short name : UsersRoles Full name : org.jboss.security.auth.spi.UsersRolesLoginModule Parent : UsernamePassword Login Module The UsersRoles login module is a simple login module that supports multiple users and user roles loaded from Java properties files. The primary purpose of this login module is to easily test the security settings of multiple users and roles using properties files deployed with the application. Table 3.2. UsersRoles Login Module Options Option Type Default Description usersProperties Path to a file or resource. users.properties The file or resource which contains the user-to-password mappings. The format of the file is username=password rolesProperties Path to a file or resource. roles.properties The file or resource which contains the user-to-role mappings. The format of the file is username=role1,role2,role3 defaultUsersProperties String defaultUsers.properties The name of the properties resource containing the username-to-password mappings that will be used as the default properties passed to the usersProperties properties. defaultRolesProperties String defaultRoles.properties The name of the properties resource containing the username-to-roles mappings that will be used as the default properties passed to the rolesProperties properties. roleGroupSeperator String . The character used to separate the role group name from the user name, for example jduke.CallerPrincipal=... . 3.3. PropertiesUsers Login Module Short name : PropertiesUsers Full name : org.jboss.security.auth.spi.PropertiesUsersLoginModule Parent : UsersRoles Login Module The PropertiesUsers login module uses a properties file to store user names and passwords for authentication. No authorization (role mapping) is provided. This module is only appropriate for testing. 3.4. SimpleUsers Login Module Short name : SimpleUsers Full name : org.jboss.security.auth.spi.SimpleUsersLoginModule Parent : PropertiesUsers Login Module The SimpleUsers login module stores the user name and clear-text password using module-option . The name and value attributes of the module-option specify a user name and password. It is included for testing only, and is not appropriate for a production environment. 3.5. SecureIdentity Login Module Short name : SecureIdentity Full name : org.picketbox.datasource.security.SecureIdentityLoginModule Parent : AbstractPasswordCredential Login Module The SecureIdentity login module is provided for legacy purposes. It allows users to encrypt a password and then use the encrypted password with a static principal.
If an application uses SecureIdentity , consider using a password vault mechanism instead. Table 3.3. SecureIdentity Login Module Options Option Type Default Description username String none The user name for authentication. password encrypted String "" The password to use for authentication. To encrypt the password, use the module directly at the command line, for example java org.picketbox.datasource.security.SecureIdentityLoginModule password_to_encrypt , and paste the result of this command into the module option's value field. The default value is an empty String. managedConnectionFactoryName Jakarta Connectors resource none The name of the Jakarta Connectors connection factory for your datasource. 3.6. ConfiguredIdentity Login Module Short name : ConfiguredIdentity Full name : org.picketbox.datasource.security.ConfiguredIdentityLoginModule Parent : AbstractPasswordCredential Login Module The ConfiguredIdentity login module associates the principal specified in the module options with any subject authenticated against the module. The type of Principal class used is org.jboss.security.SimplePrincipal . Table 3.4. ConfiguredIdentity Login Module Options Option Type Default Description username String none The user name for authentication. password encrypted String "" The password to use for authentication, which can be encrypted via the vault mechanism. The default value is an empty String. principal Name of a principal none The principal which will be associated with any subject authenticated against the module. 3.7. Simple Login Module Short name : Simple Full name : org.jboss.security.auth.spi.SimpleServerLoginModule Parent : UsernamePassword Login Module The Simple login module is a module for quick setup of security for testing purposes. It implements the following simple algorithm: If the password is null, authenticate the user and assign an identity of guest and a role of guest . Otherwise, if the password is equal to the user, assign an identity equal to the username and both user and guest roles. Otherwise, authentication fails. The Simple login module has no options. 3.8. Disabled Login Module Short name : Disabled Full name : org.jboss.security.auth.spi.DisabledLoginModule A login module that always fails authentication. It is used for a security domain that needs to be disabled, for instance when JAAS must not fall back to using another security domain. Table 3.5. Disabled Login Module Options Option Type Default Description jboss.security.security_domain String Name of security domain to display in error message. 3.9. Anon Login Module Short name : Anon Full name : org.jboss.security.auth.spi.AnonLoginModule Parent : UsernamePassword Login Module A simple login module that allows for the specification of the identity of unauthenticated users via the unauthenticatedIdentity property. This login module has no additional options beyond its inherited options from UsernamePassword Login Module . 3.10. RunAs Login Module Short name : RunAs Full name : org.jboss.security.auth.spi.RunAsLoginModule The RunAs login module is a helper module that pushes a run as role onto the stack for the duration of the login phase of authentication, then pops the run as role from the stack in either the commit or abort phase. The purpose of this login module is to provide a role for other login modules that must access secured resources in order to perform their authentication, for example, a login module that accesses a secured Jakarta Enterprise Bean.
The RunAs login module must be configured ahead of the login modules that require a run as role to be established. Table 3.6. RunAs Login Module Options Option Type Default Description roleName role name nobody The name of the role to use as the run as role during the login phase. principalName principal name nobody Name of the principal to use as the run as principal during the login phase. If not specified, a default of nobody is used. principalClass A fully qualified classname. org.jboss.security.SimplePrincipal A Principal implementation class which contains a constructor that takes String arguments for the principal name. 3.11. RoleMapping Login Module Short name : RoleMapping Full name : org.jboss.security.auth.spi.RoleMappingLoginModule Parent : AbstractServer Login Module The RoleMapping login module is a login module that supports mapping roles that are the end result of the authentication process to one or more declarative roles. For example, if the authentication process has determined that the user John has the roles ldapAdmin and testAdmin , and the declarative role defined in the web.xml or ejb-jar.xml file for access is admin , then this login module maps the admin role to John . The RoleMapping login module must be defined as an optional module to a login module configuration as it alters the mapping of the previously mapped roles. Table 3.7. RoleMapping Login Module Options Option Type Default Description rolesProperties The fully qualified file path and name of a properties file or resource none The fully qualified file path and name of a properties file or resource which maps roles to replacement roles. The format is original_role=role1,role2,role3 . replaceRole true or false false Whether to add to the current roles, or replace the current roles with the mapped ones. Replaces if set to true. 3.12. RealmDirect Login Module Short name : RealmDirect Full name : org.jboss.as.security.RealmDirectLoginModule Parent : UsernamePassword Login Module The RealmDirect login module allows an existing security realm to be used in making authentication and authorization decisions. When configured, this module will look up identity information using the referenced realm for making authentication decisions and delegate to that security realm for authorization decisions. For example, the pre-configured other security domain that ships with JBoss EAP has a RealmDirect login module. If no realm is referenced in this module, the ApplicationRealm security realm is used by default. Table 3.8. RealmDirect Login Module Options Option Type Default Description realm String ApplicationRealm Name of the desired realm. Note The RealmDirect login module uses realm only for legacy security and not for Elytron. 3.13. RealmUsersRoles Login Module Short name : RealmUsersRoles Full name : org.jboss.as.security.RealmUsersRolesLoginModule Parent : UsersRoles Login Module A login module which can authenticate users from a given realm. It is used for remoting calls. Use of RealmDirect is recommended instead of RealmUsersRoles . Table 3.9. RealmUsersRoles Login Module Options Option Type Default Description realm String ApplicationRealm Name of the desired realm. hashAlgorithm String REALM Static value set by login module for option from inherited UsernamePassword Login Module . hashStorePassword String false Static value set by login module for option from inherited UsernamePassword Login Module . Note The RealmUsersRoles login module uses realm only for legacy security and not for Elytron.
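All of these modules plug into the standard JAAS API, so their behavior can be exercised with a few lines of client code. The following minimal Java sketch is illustrative only and is not part of the product documentation: it assumes a hypothetical JAAS configuration entry named example that lists the UsersRoles login module, a users.properties file containing jduke=theduke, and a roles.properties file containing jduke=Admin,User (file formats as described in Table 3.2):

    import javax.security.auth.callback.Callback;
    import javax.security.auth.callback.NameCallback;
    import javax.security.auth.callback.PasswordCallback;
    import javax.security.auth.login.LoginContext;
    import javax.security.auth.login.LoginException;

    public class UsersRolesLoginDemo {
        public static void main(String[] args) throws LoginException {
            // "example" is a hypothetical JAAS configuration entry that lists
            // org.jboss.security.auth.spi.UsersRolesLoginModule as required.
            LoginContext ctx = new LoginContext("example", callbacks -> {
                for (Callback cb : callbacks) {
                    if (cb instanceof NameCallback) {
                        ((NameCallback) cb).setName("jduke"); // key in users.properties
                    } else if (cb instanceof PasswordCallback) {
                        ((PasswordCallback) cb).setPassword("theduke".toCharArray()); // value in users.properties
                    }
                }
            });
            ctx.login();  // succeeds only if the name/password pair matches users.properties
            System.out.println("Authenticated subject: " + ctx.getSubject());
            ctx.logout();
        }
    }

On a successful login, the subject carries the roles read from roles.properties, which is what the declarative security layers then consult.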
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/login_module_reference/login_module_without_external_identity_story
Chapter 14. Security
Chapter 14. Security LUKS-encrypted removable storage devices can now be automatically unlocked using NBDE With this update, the clevis package and the clevis-udisks2 subpackage enable users to bind removable volumes to a Network-Bound Disk Encryption (NBDE) policy. To automatically unlock a LUKS-encrypted removable storage device, such as a USB drive, use the clevis luks bind and clevis luks unlock commands. (BZ#1475408) new package: clevis-systemd This update of the Clevis pluggable framework introduces the clevis-systemd subpackage, which enables administrators to set automated unlocking of LUKS-encrypted non-root volumes at boot time. (BZ#1475406) OpenSCAP can now be integrated into Ansible workflows With this update, the OpenSCAP scanner can generate remediation scripts in the form of Ansible Playbooks, either based on profiles or based on scan results. Playbooks based on SCAP Security Guide Profiles contain fixes for all rules, and playbooks based on scan results contain only fixes for rules that fail during an evaluation. The user can also generate a playbook from a tailored Profile, or customize it directly by editing the values in the playbook. Tags, such as Rule ID, strategy, complexity, disruption, or references, used as metadata for tasks in playbooks, serve to filter which tasks to apply. (BZ# 1404429 ) SECCOMP_FILTER_FLAG_TSYNC enables synchronization of calling process threads This update introduces the SECCOMP_FILTER_FLAG_TSYNC flag. When adding a new filter, this flag synchronizes all other threads of the calling process to the same seccomp filter tree. See the seccomp(2) man page for more information. Note that if an application installs multiple libseccomp or seccomp-bpf filters, the seccomp() syscall should be added to the list of allowed system calls. (BZ#1458278) nss rebased to version 3.34 The nss packages have been upgraded to upstream version 3.34, which provides a number of bug fixes and enhancements over the previous version. Notable changes include: TLS compression is no longer supported. The TLS server code now supports session tickets without an RSA key. Certificates can be specified using a PKCS#11 URI. The RSA-PSS cryptographic signature scheme is now allowed for signing and verification of certificate signatures. (BZ#1457789) SSLv3 disabled in mod_ssl To improve the security of SSL/TLS connections, the default configuration of the httpd mod_ssl module has been changed to disable support for the SSLv3 protocol, and to restrict the use of certain cryptographic cipher suites. This change will affect only fresh installations of the mod_ssl package, so existing users should manually change the SSL configuration as required. Any SSL clients attempting to establish connections using SSLv3 , or using a cipher suite based on DES or RC4 , will be denied in the new default configuration. To allow such insecure connections, modify the SSLProtocol and SSLCipherSuite directives in the /etc/httpd/conf.d/ssl.conf file. (BZ# 1274890 ) Libreswan now supports split-DNS configuration for IKEv2 This update of the libreswan packages introduces support for split-DNS configuration for the Internet Key Exchange version 2 (IKEv2) protocol through the leftmodecfgdns= and leftcfgdomains= options. This enables the user to reconfigure a locally running DNS server with DNS forwarding for specific private domains.
(BZ# 1300763 ) libreswan now supports AES-GMAC for ESP With this update, support for Advanced Encryption Standard (AES) Galois Message Authentication Code (GMAC) within IPsec Encapsulating Security Payload (ESP) through the phase2alg=null_auth_aes_gmac option has been added to the libreswan packages. (BZ#1475434) openssl-ibmca rebased to 1.4.0 The openssl-ibmca packages have been upgraded to upstream version 1.4.0, which provides a number of bug fixes and enhancements over the previous version: Added Advanced Encryption Standard Galois/Counter Mode (AES-GCM) support. Incorporated fixes for OpenSSL operating in FIPS mode. (BZ#1456516) opencryptoki rebased to 3.7.0 The opencryptoki packages have been upgraded to upstream version 3.7.0, which provides a number of bug fixes and enhancements over the previous version: Upgraded the license to Common Public License Version 1.0 (CPL). Added ECDSA with SHA-2 support for Enterprise PKCS #11 (EP11) and Common Cryptographic Architecture (CCA). Improved performance by moving from mutex locks to Transactional Memory (TM). (BZ#1456520) atomic scan with configuration_compliance enables creating security-compliant container images at build time The rhel7/openscap container image now provides the configuration_compliance scan type. When used as an argument for the atomic scan command, this new scan type enables users to: scan Red Hat Enterprise Linux-based container images and containers against any profile provided by the SCAP Security Guide (SSG) remediate Red Hat Enterprise Linux-based container images to be compliant with any profile provided by the SSG generate an HTML report from a scan or a remediation. The remediation results in a container image with an altered configuration that is added as a new layer on top of the original container image. Note that the original container image remains unchanged and only a new layer is created on top of it. The remediation process builds a new container image that contains all the configuration improvements. The content of this layer is defined by the security policy of scanning. This also means that the remediated container image is no longer signed by Red Hat, which is expected, since it differs from the original container image by containing the remediated layer. (BZ# 1472499 ) tang-nagios enables Nagios to monitor Tang The tang-nagios subpackage provides the Nagios plugin for Tang . The plugin enables the Nagios program to monitor a Tang server. The subpackage is available in the Optional channel. See the tang-nagios(1) man page for more information. (BZ# 1478895 ) clevis now logs privileged operations With this update, the clevis-udisks2 subpackage logs all attempted key recoveries to the Audit log, and the privileged operations can now be tracked using the Linux Audit system. (BZ# 1478888 ) PK11_CreateManagedGenericObject() has been added to NSS to prevent memory leaks in applications The PK11_DestroyGenericObject() function does not destroy objects allocated by PK11_CreateGenericObject() properly, but some applications depend on a function for creating objects that persist after the use of the object. For this reason, the Network Security Services (NSS) libraries now include the PK11_CreateManagedGenericObject() function. If you create objects with PK11_CreateManagedGenericObject() , the PK11_DestroyGenericObject() function also properly destroys underlying associated objects. Applications, such as the curl utility, can now use PK11_CreateManagedGenericObject() to prevent memory leaks.
(BZ# 1395803 ) OpenSSH now supports openssl-ibmca and openssl-ibmpkcs11 HSMs With this update, the OpenSSH suite enables hardware security modules (HSM) handled by the openssl-ibmca and openssl-ibmpkcs11 packages. Prior to this, the OpenSSH seccomp filter prevented these cards from working with the OpenSSH privilege separation. The seccomp filter has been updated to allow system calls needed by the cryptographic cards on IBM Z. (BZ#1478035) cgroup_seclabel enables fine-grained access control on cgroups This update introduces the cgroup_seclabel policy capability that enables users to set labels on control group (cgroup) files. Prior to this addition, labeling of the cgroup file system was not possible, and to run the systemd service manager in a container, read and write permissions for any content on the cgroup file system had to be allowed. The cgroup_seclabel policy capability enables fine-grained access control on the cgroup file system. (BZ#1494179) The boot process can now unlock encrypted devices connected by network Previously, the boot process attempted to unlock block devices connected by network before starting network services. Because the network was not activated, it was not possible to connect and decrypt these devices. With this update, the remote-cryptsetup.target unit and other patches have been added to systemd packages. As a result, it is now possible to unlock encrypted block devices that are connected by network during system boot and to mount file systems on such block devices. To ensure correct ordering between services during system boot, you must mark the network device with the _netdev option in the /etc/crypttab configuration file. A common use case for this feature is together with network-bound disk encryption. For more information on network-bound disk encryption, see the following chapter in the Red Hat Enterprise Linux Security Guide: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/security_guide/sec-policy-based_decryption#sec-Using_Network-Bound_Disk_Encryption (BZ# 1384014 ) SELinux now supports InfiniBand object labeling This release introduces SELinux support for InfiniBand end port and P_Key labeling, including enhancements to the kernel, policy, and the semanage tool. To manage InfiniBand -related labels, use the following commands: semanage ibendport semanage ibpkey (BZ# 1471809 , BZ# 1464484 , BZ#1464478) libica rebased to 3.2.0 The libica packages have been upgraded to upstream version 3.2.0, which most notably adds support for the Enhanced SIMD instruction set. (BZ#1376836) SELinux now supports systemd No New Privileges This update introduces the nnp_nosuid_transition policy capability that enables SELinux domain transitions under No New Privileges (NNP) or nosuid if nnp_nosuid_transition is allowed between the old and new contexts. The selinux-policy packages now contain a policy for systemd services that use the NNP security feature. The following rule describes allowing this capability for a service: For example: The distribution policy now also contains the m4 macro interface, which can be used in SELinux security policies for services that use the init_nnp_daemon_domain() function. (BZ# 1480518 ) Libreswan rebased to version 3.23 The libreswan packages have been upgraded to upstream version 3.23, which provides a number of bug fixes, speed improvements, and enhancements over the previous version.
Notable changes include: Support for the extended DNS Security Extensions (DNSSEC) suite through the dnssec-enable=yes|no , dnssec-rootkey-file= , and dnssec-anchors= options. Experimental support for Postquantum Preshared Keys (PPK) through the ppk=yes|no|insist option. Support for Signature Authentication (RFC 7427) for RSA-SHA. The new logip= option with the default value yes can be used to disable logging of incoming IP addresses. This is useful for large-scale service providers concerned about privacy. Unbound DNS server ipsecmod module support for Opportunistic IPsec using IPSECKEY records in DNS. Support for the Differentiated Services Code Point (DSCP) architecture through the decap-dscp=yes option. DSCP was formerly known as Terms Of Service (TOS). Support for disabling Path MTU Discovery (PMTUD) through the nopmtudisc=yes option. Support for the IDr (Identification - Responder) payload for improved multi-domain deployments. Resending IKE packets on extremely busy servers that return the EAGAIN error message. Various improvements to the updown scripts for customizations. Updated preferences of crypto algorithms as per RFC 8221 and RFC 8247. Added the %none and /dev/null values to the leftupdown= option for disabling the updown script. Improved support for rekeying using the CREATE_CHILD_SA exchange. IKEv1 XAUTH thread race conditions resolved. Significant performance increase due to optimized pthread locking. See the ipsec.conf man page for more information. (BZ# 1457904 ) libreswan now supports IKEv2 MOBIKE This update introduces support for the IKEv2 Mobility and Multihoming (MOBIKE) protocol (RFC 4555) using the XFRM_MIGRATE mechanism through the mobike=yes|no option. MOBIKE enables seamless switching of networks, for example, Wi-Fi, LTE, and so on, without disturbing the IPsec tunnel. (BZ# 1471763 ) scap-workbench rebased to version 1.1.6 The scap-workbench packages have been upgraded to version 1.1.6, which provides a number of bug fixes and enhancements over the previous version. Notable changes are: Added support for generating Bash and Ansible remediation roles from profiles and from scan results. The generated remediations can be saved to a file for later use. Added support for opening tailoring files directly from the command line. Fixed a short integer overflow when using SSH port numbers higher than 32,768. (BZ# 1479036 ) OpenSCAP is now able to generate results for DISA STIG Viewer The OpenSCAP suite is now able to generate results in the format compatible with the DISA STIG Viewer tool. This enables the user to scan a local system for Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) compliance and open results in DISA STIG Viewer . (BZ# 1505517 ) selinux-policy no longer contains permissive domains As a security hardening measure, the SELinux policy now does not set the following domains to permissive mode by default: blkmapd_t hsqldb_t ipmievd_t sanlk_resetd_t systemd_hwdb_t targetd_t The default mode for these domains is now set to enforcing. (BZ# 1494172 ) audit rebased to version 2.8.1 The audit packages have been upgraded to upstream version 2.8.1, which provides a number of bug fixes and enhancements over the previous version. Notable changes are: Added support for ambient capability fields. The Audit daemon now also works on IPv6. Added the default port to the auditd.conf file. Fixed the auvirt tool to report Access Vector Cache (AVC) messages. (BZ# 1476406 ) OpenSC now supports the SCE7.0 144KDI CAC Alt.
tokens This update adds support for the SCE7.0 144KDI Common Access Card (CAC) Alternate tokens. These new cards were not compliant with the U.S. Department of Defense (DoD) Implementation Guide for CAC PIV End-Point specification, and the OpenSC driver has been updated to reflect the updated specification. (BZ# 1473418 )
[ "allow source_domain target_type:process2 { nnp_transition nosuid_transition };", "allow init_t fprintd_t:process2 { nnp_transition nosuid_transition };" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.5_release_notes/new_features_security
Appendix D. Removing the Red Hat Virtualization Manager
Appendix D. Removing the Red Hat Virtualization Manager You can use the engine-cleanup command to remove specific components or all components of the Red Hat Virtualization Manager. Note A backup of the Manager database and a compressed archive of the PKI keys and configuration are always automatically created. These files are saved under /var/lib/ovirt-engine/backups/ , and include the date and engine- and engine-pki- in their file names respectively. Procedure Run the following command on the Manager machine: You are prompted whether to remove all Red Hat Virtualization Manager components: Type Yes and press Enter to remove all components: Type No and press Enter to select the components to remove. You can select whether to retain or remove each component individually: You are given another opportunity to change your mind and cancel the removal of the Red Hat Virtualization Manager. If you choose to proceed, the ovirt-engine service is stopped, and your environment's configuration is removed in accordance with the options you selected. Remove the Red Hat Virtualization packages:
[ "engine-cleanup", "Do you want to remove all components? (Yes, No) [Yes]:", "Do you want to remove Engine database content? All data will be lost (Yes, No) [No]: Do you want to remove PKI keys? (Yes, No) [No]: Do you want to remove PKI configuration? (Yes, No) [No]: Do you want to remove Apache SSL configuration? (Yes, No) [No]:", "During execution engine service will be stopped (OK, Cancel) [OK]: ovirt-engine is about to be removed, data will be lost (OK, Cancel) [Cancel]:OK", "yum remove rhvm* vdsm-bootstrap" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/installing_red_hat_virtualization_as_a_self-hosted_engine_using_the_command_line/removing_red_hat_virtualization_manager_she_cli_deploy
Windows Container Support for OpenShift
Windows Container Support for OpenShift OpenShift Container Platform 4.12 Red Hat OpenShift for Windows Containers Guide Red Hat OpenShift Documentation Team
[ "apiVersion: v1 kind: Namespace metadata: name: openshift-windows-machine-config-operator 1 labels: openshift.io/cluster-monitoring: \"true\" 2", "oc create -f <file-name>.yaml", "oc create -f wmco-namespace.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: windows-machine-config-operator namespace: openshift-windows-machine-config-operator spec: targetNamespaces: - openshift-windows-machine-config-operator", "oc create -f <file-name>.yaml", "oc create -f wmco-og.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: windows-machine-config-operator namespace: openshift-windows-machine-config-operator spec: channel: \"stable\" 1 installPlanApproval: \"Automatic\" 2 name: \"windows-machine-config-operator\" source: \"redhat-operators\" 3 sourceNamespace: \"openshift-marketplace\" 4", "oc create -f <file-name>.yaml", "oc create -f wmco-sub.yaml", "oc get csv -n openshift-windows-machine-config-operator", "NAME DISPLAY VERSION REPLACES PHASE windows-machine-config-operator.2.0.0 Windows Machine Config Operator 2.0.0 Succeeded", "oc create secret generic cloud-private-key --from-file=private-key.pem=USD{HOME}/.ssh/<key> -n openshift-windows-machine-config-operator 1", "oc adm cordon <node1>", "oc adm drain <node1> --ignore-daemonsets --delete-emptydir-data --force", "error when evicting pods/\"rails-postgresql-example-1-72v2w\" -n \"rails\" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.", "oc adm drain <node1> --ignore-daemonsets --delete-emptydir-data --force --disable-eviction", "C:\\> powershell", "C:\\> Restart-Computer -Force", "C:\\> route add 169.254.169.254 mask 255.255.255.0 <gateway_ip>", "C:\\> ipconfig | findstr /C:\"Default Gateway\"", "oc adm uncordon <node1>", "oc get node <node1>", "NAME STATUS ROLES AGE VERSION <node1> Ready worker 6d22h v1.18.3+b0068a8", "aws ec2 describe-images --region <aws_region_name> --filters \"Name=name,Values=Windows_Server-2022*English*Core*Base*\" \"Name=is-public,Values=true\" --query \"reverse(sort_by(Images, &CreationDate))[*].{name: Name, id: ImageId}\" --output table", "aws ec2 describe-images --region <aws_region_name> --filters \"Name=name,Values=Windows_Server-2019*English*Core*Base*\" \"Name=is-public,Values=true\" --query \"reverse(sort_by(Images, &CreationDate))[*].{name: Name, id: ImageId}\" --output table", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-windows-worker-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone> 6 machine.openshift.io/os-id: Windows 7 spec: metadata: labels: node-role.kubernetes.io/worker: \"\" 8 providerSpec: value: ami: id: <windows_container_ami> 9 apiVersion: awsproviderconfig.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile 10 instanceType: m5a.large kind: 
AWSMachineProviderConfig placement: availabilityZone: <zone> 11 region: <region> 12 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-worker-sg 13 subnet: filters: - name: tag:Name values: - <infrastructure_id>-private-<zone> 14 tags: - name: kubernetes.io/cluster/<infrastructure_id> 15 value: owned userDataSecret: name: windows-user-data 16 namespace: openshift-machine-api", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-windows-worker-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <windows_machine_set_name> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <windows_machine_set_name> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <windows_machine_set_name> 6 machine.openshift.io/os-id: Windows 7 spec: metadata: labels: node-role.kubernetes.io/worker: \"\" 8 providerSpec: value: apiVersion: azureproviderconfig.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: 9 offer: WindowsServer publisher: MicrosoftWindowsServer resourceID: \"\" sku: 2019-Datacenter-with-Containers version: latest kind: AzureMachineProviderSpec location: <location> 10 managedIdentity: <infrastructure_id>-identity 11 networkResourceGroup: <infrastructure_id>-rg 12 osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Windows publicIP: false resourceGroup: <infrastructure_id>-rg 13 subnet: <infrastructure_id>-worker-subnet userDataSecret: name: windows-user-data 14 namespace: openshift-machine-api vmSize: Standard_D2s_v3 vnet: <infrastructure_id>-vnet 
15 zone: \"<zone>\" 16", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-windows-worker-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "exclude-nics=", "C:\\> ipconfig", "PS C:\\> Get-Service -Name VMTools | Select Status, StartType", "PS C:\\> New-NetFirewallRule -DisplayName \"ContainerLogsPort\" -LocalPort 10250 -Enabled True -Direction Inbound -Protocol TCP -Action Allow -EdgeTraversalPolicy Allow", "C:\\> C:\\Windows\\System32\\Sysprep\\sysprep.exe /generalize /oobe /shutdown /unattend:<path_to_unattend.xml> 1", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <unattend xmlns=\"urn:schemas-microsoft-com:unattend\"> <settings pass=\"specialize\"> <component xmlns:wcm=\"http://schemas.microsoft.com/WMIConfig/2002/State\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" name=\"Microsoft-Windows-International-Core\" processorArchitecture=\"amd64\" publicKeyToken=\"31bf3856ad364e35\" language=\"neutral\" versionScope=\"nonSxS\"> <InputLocale>0409:00000409</InputLocale> <SystemLocale>en-US</SystemLocale> <UILanguage>en-US</UILanguage> <UILanguageFallback>en-US</UILanguageFallback> <UserLocale>en-US</UserLocale> </component> <component xmlns:wcm=\"http://schemas.microsoft.com/WMIConfig/2002/State\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" name=\"Microsoft-Windows-Security-SPP-UX\" processorArchitecture=\"amd64\" publicKeyToken=\"31bf3856ad364e35\" language=\"neutral\" versionScope=\"nonSxS\"> <SkipAutoActivation>true</SkipAutoActivation> </component> <component xmlns:wcm=\"http://schemas.microsoft.com/WMIConfig/2002/State\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" name=\"Microsoft-Windows-SQMApi\" processorArchitecture=\"amd64\" publicKeyToken=\"31bf3856ad364e35\" language=\"neutral\" versionScope=\"nonSxS\"> <CEIPEnabled>0</CEIPEnabled> </component> <component xmlns:wcm=\"http://schemas.microsoft.com/WMIConfig/2002/State\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" name=\"Microsoft-Windows-Shell-Setup\" processorArchitecture=\"amd64\" 
publicKeyToken=\"31bf3856ad364e35\" language=\"neutral\" versionScope=\"nonSxS\"> <ComputerName>winhost</ComputerName> 1 </component> </settings> <settings pass=\"oobeSystem\"> <component xmlns:wcm=\"http://schemas.microsoft.com/WMIConfig/2002/State\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" name=\"Microsoft-Windows-Shell-Setup\" processorArchitecture=\"amd64\" publicKeyToken=\"31bf3856ad364e35\" language=\"neutral\" versionScope=\"nonSxS\"> <AutoLogon> <Enabled>false</Enabled> 2 </AutoLogon> <OOBE> <HideEULAPage>true</HideEULAPage> <HideLocalAccountScreen>true</HideLocalAccountScreen> <HideOEMRegistrationScreen>true</HideOEMRegistrationScreen> <HideOnlineAccountScreens>true</HideOnlineAccountScreens> <HideWirelessSetupInOOBE>true</HideWirelessSetupInOOBE> <NetworkLocation>Work</NetworkLocation> <ProtectYourPC>1</ProtectYourPC> <SkipMachineOOBE>true</SkipMachineOOBE> <SkipUserOOBE>true</SkipUserOOBE> </OOBE> <RegisteredOrganization>Organization</RegisteredOrganization> <RegisteredOwner>Owner</RegisteredOwner> <DisableAutoDaylightTimeSet>false</DisableAutoDaylightTimeSet> <TimeZone>Eastern Standard Time</TimeZone> <UserAccounts> <AdministratorPassword> <Value>MyPassword</Value> 3 <PlainText>true</PlainText> </AdministratorPassword> </UserAccounts> </component> </settings> </unattend>", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <windows_machine_set_name> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <windows_machine_set_name> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <windows_machine_set_name> 6 machine.openshift.io/os-id: Windows 7 spec: metadata: labels: node-role.kubernetes.io/worker: \"\" 8 providerSpec: value: apiVersion: vsphereprovider.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 128 9 kind: VSphereMachineProviderSpec memoryMiB: 16384 network: devices: - networkName: \"<vm_network_name>\" 10 numCPUs: 4 numCoresPerSocket: 1 snapshot: \"\" template: <windows_vm_template_name> 11 userDataSecret: name: windows-user-data 12 workspace: datacenter: <vcenter_datacenter_name> 13 datastore: <vcenter_datastore_name> 14 folder: <vcenter_vm_folder_path> 15 resourcePool: <vsphere_resource_pool> 16 server: <vcenter_server_ip> 17", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 
template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-windows-worker-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-windows-worker-<zone_suffix> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone_suffix> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone_suffix> 6 machine.openshift.io/os-id: Windows 7 spec: metadata: labels: node-role.kubernetes.io/worker: \"\" 8 providerSpec: value: apiVersion: machine.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials deletionProtection: false disks: - autoDelete: true boot: true image: <windows_server_image> 9 sizeGb: 128 type: pd-ssd kind: GCPMachineProviderSpec machineType: n1-standard-4 networkInterfaces: - network: <infrastructure_id>-network 10 subnetwork: <infrastructure_id>-worker-subnet projectID: <project_id> 11 region: <region> 12 serviceAccounts: - email: <infrastructure_id>-w@<project_id>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform tags: - <infrastructure_id>-worker userDataSecret: name: windows-user-data 13 zone: <zone> 14", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT 
READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "apiVersion: node.k8s.io/v1 kind: RuntimeClass metadata: name: windows2019 1 handler: 'runhcs-wcow-process' scheduling: nodeSelector: 2 kubernetes.io/os: 'windows' kubernetes.io/arch: 'amd64' node.kubernetes.io/windows-build: '10.0.17763' tolerations: 3 - effect: NoSchedule key: os operator: Equal value: \"windows\" - effect: NoSchedule key: os operator: Equal value: \"Windows\"", "oc create -f <file-name>.yaml", "oc create -f runtime-class.yaml", "apiVersion: v1 kind: Pod metadata: name: my-windows-pod spec: runtimeClassName: windows2019 1", "apiVersion: v1 kind: Service metadata: name: win-webserver labels: app: win-webserver spec: ports: # the port that this service should serve on - port: 80 targetPort: 80 selector: app: win-webserver type: LoadBalancer", "apiVersion: apps/v1 kind: Deployment metadata: labels: app: win-webserver name: win-webserver spec: selector: matchLabels: app: win-webserver replicas: 1 template: metadata: labels: app: win-webserver name: win-webserver spec: containers: - name: windowswebserver image: mcr.microsoft.com/windows/servercore:ltsc2019 1 imagePullPolicy: IfNotPresent command: - powershell.exe 2 - -command - USDlistener = New-Object System.Net.HttpListener; USDlistener.Prefixes.Add('http://*:80/'); USDlistener.Start();Write-Host('Listening at http://*:80/'); while (USDlistener.IsListening) { USDcontext = USDlistener.GetContext(); USDresponse = USDcontext.Response; USDcontent='<html><body><H1>Red Hat OpenShift + Windows Container Workloads</H1></body></html>'; USDbuffer = [System.Text.Encoding]::UTF8.GetBytes(USDcontent); USDresponse.ContentLength64 = USDbuffer.Length; USDresponse.OutputStream.Write(USDbuffer, 0, USDbuffer.Length); USDresponse.Close(); }; securityContext: runAsNonRoot: false windowsOptions: runAsUserName: \"ContainerAdministrator\" os: name: \"windows\" runtimeClassName: windows2019 3", "oc get machinesets -n openshift-machine-api", "oc get machine -n openshift-machine-api", "oc annotate machine/<machine_name> -n openshift-machine-api machine.openshift.io/delete-machine=\"true\"", "oc scale --replicas=2 machineset <machineset> -n openshift-machine-api", "oc edit machineset <machineset> -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 2", "oc get machines", "kind: ConfigMap apiVersion: v1 metadata: name: windows-instances namespace: openshift-windows-machine-config-operator data: 10.1.42.1: |- 1 username=Administrator 2 instance.example.com: |- username=core", "kind: ConfigMap apiVersion: v1 metadata: name: windows-instances namespace: openshift-windows-machine-config-operator data: instance.example.com: |- username=core", "oc get machine -n openshift-machine-api", "oc delete machine <machine> -n openshift-machine-api", "oc delete --all pods --namespace=openshift-windows-machine-config-operator", "oc get pods --namespace openshift-windows-machine-config-operator", "oc delete namespace openshift-windows-machine-config-operator" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html-single/windows_container_support_for_openshift/index
Chapter 60. Spring Batch
Chapter 60. Spring Batch Since Camel 2.10 Only producer is supported The Spring Batch component and support classes provide an integration bridge between Camel and the Spring Batch infrastructure. Maven users must add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-spring-batch</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 60.1. URI format jobName represents the name of the Spring Batch job located in the Camel registry. If a JobRegistry is provided, it is used to locate the job. This component is only used to define producer endpoints, which means you cannot use the Spring Batch component in a from() statement. 60.2. Configuring Options Camel components are configured on two levels: Component level Endpoint level 60.2.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, URLs for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 60.2.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for URLs, port numbers, sensitive information, and other settings. Placeholders allow you to externalize the configuration from your code, giving you more flexible and reusable code. 60.3. Component Options The Spring Batch component supports 4 options, which are listed below. Name Description Default Type jobLauncher (producer) Explicitly specifies a JobLauncher to be used. JobLauncher jobRegistry (producer) Explicitly specifies a JobRegistry to be used. JobRegistry lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 60.4.
Endpoint Options The Spring Batch endpoint is configured using URI syntax: Following are the path and query parameters: 60.4.1. Path Parameters (1 parameters) Name Description Default Type jobName (producer) Required The name of the Spring Batch job located in the registry. String 60.4.2. Query Parameters (4 parameters) Name Description Default Type jobFromHeader (producer) Explicitly defines if the jobName should be taken from the headers instead of the URI. false boolean jobLauncher (producer) Explicitly specifies a JobLauncher to be used. JobLauncher jobRegistry (producer) Explicitly specifies a JobRegistry to be used. JobRegistry lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean 60.5. Usage When the Spring Batch component receives a message, it triggers the job execution. The job is executed using the org.springframework.batch.core.launch.JobLauncher instance resolved according to the following algorithm. if JobLauncher is manually set on the component, then use it. if jobLauncherRef option is set on the component, then search Camel Registry for the JobLauncher with the given name. if there is a JobLauncher registered in the Camel Registry under jobLauncher name, then use it. if none of the steps above resolves the JobLauncher and there is exactly one JobLauncher instance in the Camel Registry, then use it. All headers found in the message are passed to the JobLauncher as job parameters. String , Long , Double and java.util.Date values are copied to the org.springframework.batch.core.JobParametersBuilder and other data types are converted to Strings. 60.6. Examples Triggering the Spring Batch job execution: from("direct:startBatch").to("spring-batch:myJob"); Triggering the Spring Batch job execution with the JobLauncher set explicitly. from("direct:startBatch").to("spring-batch:myJob?jobLauncherRef=myJobLauncher"); A JobExecution instance returned by the JobLauncher is forwarded by the SpringBatchProducer as the output message. You can use the JobExecution instance to perform some operations using the Spring Batch API directly. from("direct:startBatch").to("spring-batch:myJob").to("mock:JobExecutions"); ... MockEndpoint mockEndpoint = ...; JobExecution jobExecution = mockEndpoint.getExchanges().get(0).getIn().getBody(JobExecution.class); BatchStatus currentJobStatus = jobExecution.getStatus(); 60.7. Support classes Apart from the component, Camel Spring Batch also provides support classes that you can use to hook into the Spring Batch infrastructure. 60.7.1. CamelItemReader CamelItemReader can be used to read batch data directly from the Camel infrastructure.
For example, the snippet below configures Spring Batch to read data from a JMS queue: <bean id="camelReader" class="org.apache.camel.component.spring.batch.support.CamelItemReader"> <constructor-arg ref="consumerTemplate"/> <constructor-arg value="jms:dataQueue"/> </bean> <batch:job id="myJob"> <batch:step id="step"> <batch:tasklet> <batch:chunk reader="camelReader" writer="someWriter" commit-interval="100"/> </batch:tasklet> </batch:step> </batch:job> 60.7.2. CamelItemWriter CamelItemWriter has a similar purpose to CamelItemReader , but it is dedicated to writing chunks of the processed data. For example, the snippet below configures Spring Batch to write data to a JMS queue. <bean id="camelwriter" class="org.apache.camel.component.spring.batch.support.CamelItemWriter"> <constructor-arg ref="producerTemplate"/> <constructor-arg value="jms:dataQueue"/> </bean> <batch:job id="myJob"> <batch:step id="step"> <batch:tasklet> <batch:chunk reader="someReader" writer="camelwriter" commit-interval="100"/> </batch:tasklet> </batch:step> </batch:job> 60.7.3. CamelItemProcessor CamelItemProcessor is the implementation of the Spring Batch org.springframework.batch.item.ItemProcessor interface. The latter implementation relies on the Request Reply pattern to delegate the processing of the batch item to the Camel infrastructure. The item to process is sent to the Camel endpoint as the body of the message. For example, the snippet below performs simple processing of the batch item using the Direct endpoint and the Simple expression language . <camel:camelContext> <camel:route> <camel:from uri="direct:processor"/> <camel:setExchangePattern pattern="InOut"/> <camel:setBody> <camel:simple>Processed ${body}</camel:simple> </camel:setBody> </camel:route> </camel:camelContext> <bean id="camelProcessor" class="org.apache.camel.component.spring.batch.support.CamelItemProcessor"> <constructor-arg ref="producerTemplate"/> <constructor-arg value="direct:processor"/> </bean> <batch:job id="myJob"> <batch:step id="step"> <batch:tasklet> <batch:chunk reader="someReader" writer="someWriter" processor="camelProcessor" commit-interval="100"/> </batch:tasklet> </batch:step> </batch:job> 60.7.4. CamelJobExecutionListener CamelJobExecutionListener is the implementation of the org.springframework.batch.core.JobExecutionListener interface sending job execution events to the Camel endpoint. The org.springframework.batch.core.JobExecution instance produced by Spring Batch is sent as the body of the message. To distinguish between before- and after-callbacks, the SPRING_BATCH_JOB_EVENT_TYPE header is set to the BEFORE or AFTER value. The example snippet below sends Spring Batch job execution events to a JMS queue. <bean id="camelJobExecutionListener" class="org.apache.camel.component.spring.batch.support.CamelJobExecutionListener"> <constructor-arg ref="producerTemplate"/> <constructor-arg value="jms:batchEventsBus"/> </bean> <batch:job id="myJob"> <batch:step id="step"> <batch:tasklet> <batch:chunk reader="someReader" writer="someWriter" commit-interval="100"/> </batch:tasklet> </batch:step> <batch:listeners> <batch:listener ref="camelJobExecutionListener"/> </batch:listeners> </batch:job> 60.8.
Spring Boot Auto-Configuration When using spring-batch with Spring Boot, use the following Maven dependency to enable support for auto-configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-spring-batch-starter</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> The component supports 5 options, which are listed below. Name Description Default Type camel.component.spring-batch.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.spring-batch.enabled Whether to enable auto-configuration of the spring-batch component. This is enabled by default. Boolean camel.component.spring-batch.job-launcher Explicitly specifies a JobLauncher to be used. The option is a org.springframework.batch.core.launch.JobLauncher type. JobLauncher camel.component.spring-batch.job-registry Explicitly specifies a JobRegistry to be used. The option is a org.springframework.batch.core.configuration.JobRegistry type. JobRegistry camel.component.spring-batch.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean
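Because all message headers are passed to the JobLauncher as job parameters, a route can supply parameters dynamically at runtime. The following sketch illustrates this; the header names run.date and chunk.size are arbitrary examples, not names defined by the component:

import java.util.Date;
import org.apache.camel.builder.RouteBuilder;

public class BatchParameterRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:startBatch")
            // String, Long, Double and java.util.Date headers become typed job
            // parameters; any other header type is converted to a String
            .process(exchange -> {
                exchange.getIn().setHeader("run.date", new Date());
                exchange.getIn().setHeader("chunk.size", 100L);
            })
            .to("spring-batch:myJob");
    }
}

Sending a message to direct:startBatch then launches myJob with run.date and chunk.size available as job parameters.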
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-spring-batch</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>", "spring-batch:jobName[?options]", "spring-batch:jobName", "from(\"direct:startBatch\").to(\"spring-batch:myJob\");", "from(\"direct:startBatch\").to(\"spring-batch:myJob?jobLauncherRef=myJobLauncher\");", "from(\"direct:startBatch\").to(\"spring-batch:myJob\").to(\"mock:JobExecutions\"); MockEndpoint mockEndpoint = ...; JobExecution jobExecution = mockEndpoint.getExchanges().get(0).getIn().getBody(JobExecution.class); BatchStatus currentJobStatus = jobExecution.getStatus();", "<bean id=\"camelReader\" class=\"org.apache.camel.component.spring.batch.support.CamelItemReader\"> <constructor-arg ref=\"consumerTemplate\"/> <constructor-arg value=\"jms:dataQueue\"/> </bean> <batch:job id=\"myJob\"> <batch:step id=\"step\"> <batch:tasklet> <batch:chunk reader=\"camelReader\" writer=\"someWriter\" commit-interval=\"100\"/> </batch:tasklet> </batch:step> </batch:job>", "<bean id=\"camelwriter\" class=\"org.apache.camel.component.spring.batch.support.CamelItemWriter\"> <constructor-arg ref=\"producerTemplate\"/> <constructor-arg value=\"jms:dataQueue\"/> </bean> <batch:job id=\"myJob\"> <batch:step id=\"step\"> <batch:tasklet> <batch:chunk reader=\"someReader\" writer=\"camelwriter\" commit-interval=\"100\"/> </batch:tasklet> </batch:step> </batch:job>", "<camel:camelContext> <camel:route> <camel:from uri=\"direct:processor\"/> <camel:setExchangePattern pattern=\"InOut\"/> <camel:setBody> <camel:simple>Processed ${body}</camel:simple> </camel:setBody> </camel:route> </camel:camelContext> <bean id=\"camelProcessor\" class=\"org.apache.camel.component.spring.batch.support.CamelItemProcessor\"> <constructor-arg ref=\"producerTemplate\"/> <constructor-arg value=\"direct:processor\"/> </bean> <batch:job id=\"myJob\"> <batch:step id=\"step\"> <batch:tasklet> <batch:chunk reader=\"someReader\" writer=\"someWriter\" processor=\"camelProcessor\" commit-interval=\"100\"/> </batch:tasklet> </batch:step> </batch:job>", "<bean id=\"camelJobExecutionListener\" class=\"org.apache.camel.component.spring.batch.support.CamelJobExecutionListener\"> <constructor-arg ref=\"producerTemplate\"/> <constructor-arg value=\"jms:batchEventsBus\"/> </bean> <batch:job id=\"myJob\"> <batch:step id=\"step\"> <batch:tasklet> <batch:chunk reader=\"someReader\" writer=\"someWriter\" commit-interval=\"100\"/> </batch:tasklet> </batch:step> <batch:listeners> <batch:listener ref=\"camelJobExecutionListener\"/> </batch:listeners> </batch:job>", "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-spring-batch-starter</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_for_spring_boot/3.20/html/camel_spring_boot_reference/csb-camel-spring-batch-component-starter
Chapter 10. Troubleshooting the DNS service
Chapter 10. Troubleshooting the DNS service By reviewing the Red Hat OpenStack Platform DNS service (designate) logs and using some simple commands, you can verify that the service is running properly. These actions are the first steps in troubleshooting the DNS service. The topics included in this section are: Section 10.1, "DNS service and BIND logs" Section 10.2, "Exporting the DNS service pool configuration" Section 10.3, "Listing available DNS service endpoints" 10.1. DNS service and BIND logs Reviewing the Red Hat OpenStack Platform DNS service (designate) logs can be useful when troubleshooting issues. The DNS service logs are located in /var/log/containers/designate . There is one log for each of the component services: central.log designate-api.log mdns.log producer.log worker.log The logs for the BIND server that Red Hat supplies with the RHOSP DNS service are located in /var/log/containers/designate/designate-bind : central.log designate-api.log 10.2. Exporting the DNS service pool configuration You can use a copy of the DNS pool configuration to troubleshoot the Red Hat OpenStack Platform (RHOSP) DNS service (designate). Note In RHOSP 17.1, multiple pools are not supported. Procedure Log in to one of the Controller nodes and display the currently running DNS service pool configuration to the console. Example In this example, the administrator accesses a Controller node, controller-0.ctlplane , through SSH as the tripleo-admin user and executes the designate-manage pool show_config command running in the designate_central container: Sample output If you want to export the current pool configuration to a file, use the designate-manage pool generate_file command. Example In this example, the administrator accesses a Controller node, controller-0.ctlplane , through SSH as the tripleo-admin user and executes the designate-manage pool generate_file --file <file_name> command running in the designate_central container. The DNS service exports the current pool configuration to a file that is specified by the --file option, in this example, ~/my_dns_service_config.yaml : Tip Use the podman cp command to copy files from a container to your local system. 10.3. Listing available DNS service endpoints Determining the Red Hat OpenStack Platform DNS service (designate) endpoint and its status can be useful when troubleshooting issues. Procedure Source your credentials file. Example List the available RHOSP service endpoints: Sample output
[ "ssh tripleo-admin@controller-0.ctlplane sudo podman exec designate_central designate-manage pool show_config", "Pool Configuration: ------------------- also_notifies: [] attributes: {} description: Default Pool id: 794ccc2c-d751-44fe-b57f-8894c9f5c842 name: default nameservers: - host: 192.0.2.111 port: 53 - host: 192.0.2.109 port: 53 - host: 192.0.2.131 port: 53 ns_records: - hostname: ns2.example.com. priority: 2 - hostname: ns1.example.com. priority: 1 - hostname: ns3.example.com. priority: 3 targets: - description: BIND9 Server 3 masters: - host: 192.0.2.137 port: 16002 - host: 192.0.2.137 port: 16001 - host: 192.0.2.137 port: 16000 options: host: 192.0.2.111 port: '53' rndc_config_file: /etc/designate/private/bind3.conf rndc_host: 192.0.2.111 rndc_port: '953' type: bind9 - description: BIND9 Server 2 masters: - host: 192.0.2.137 port: 16001 - host: 192.0.2.137 port: 16002 - host: 192.0.2.137 port: 16000 options: host: 192.0.2.131 port: '53' rndc_config_file: /etc/designate/private/bind2.conf rndc_host: 192.0.2.131 rndc_port: '953' type: bind9 - description: BIND9 Server 1 masters: - host: 192.0.2.137 port: 16002 - host: 192.0.2.137 port: 16001 - host: 192.0.2.137 port: 16000 options: host: 192.0.2.109 port: '53' rndc_config_file: /etc/designate/private/bind1.conf rndc_host: 192.0.2.109 rndc_port: '953' type: bind9", "ssh tripleo-admin@controller-0.ctlplane sudo podman exec designate_central designate-manage pool generate_file --file ~/my_dns_service_config.yaml", "source ~/overcloudrc", "openstack endpoint list --service designate -c Interface -c URL", "+-----------+-------------------------+ | Interface | URL | +-----------+-------------------------+ | internal | http://172.17.1.44:9001 | | public | http://10.0.0.147:9001 | | admin | http://172.17.1.44:9001 | +-----------+-------------------------+" ]
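The podman cp command mentioned in the Tip can, for example, retrieve the exported pool configuration from the container. The sketch below assumes the export ended up at /root/my_dns_service_config.yaml inside the designate_central container; adjust the path to wherever your file was actually written:

ssh tripleo-admin@controller-0.ctlplane
sudo podman cp designate_central:/root/my_dns_service_config.yaml ~/my_dns_service_config.yaml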
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_dns_as_a_service/troubleshoot-dns-service_rhosp-dnsaas
Part II. Deprecated Functionality
Part II. Deprecated Functionality This part provides an overview of functionality that has been deprecated in Red Hat Enterprise Linux 7.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.0_release_notes/part-red_hat_enterprise_linux-7.0_release_notes-deprecated_functionality
Chapter 10. Desktop and Graphics
Chapter 10. Desktop and Graphics Graphics Updates and New Hardware Support Graphics updates in Red Hat Enterprise Linux 6.5 include the following: Support for future Intel and AMD devices Spice improvements Improved multi-monitor support and touch screen support Updated gdm Updates to the gdm application include fixes for password expiration messages, multi-seat support, and local interoperability problems. Upgraded Evolution The Evolution application has been upgraded to upstream version 2.32 to improve interoperability with Microsoft Exchange. This includes the new Exchange Web Services (EWS), improved meeting support and improved folder support. Rebased LibreOffice In the Red Hat Enterprise Linux 6.5 release, LibreOffice has been upgraded to upstream version 4.0.4. Support for AMD GPUs Support for the latest AMD graphics processing units (GPUs) has been added to Red Hat Enterprise Linux 6.5. Alias Support in NetworkManager Alias support has been added to NetworkManager. However, users are strongly recommended to use the multiple or secondary IP feature instead.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_release_notes/bh-chap-desktop_and_graphics
Chapter 10. Run Red Hat JBoss Data Grid in Library Mode (Single-Node Setup)
Chapter 10. Run Red Hat JBoss Data Grid in Library Mode (Single-Node Setup) 10.1. Create a Main Method in the Quickstart Class Create a new Quickstart class by following the outlined steps: Prerequisites These quickstarts use the Infinispan quickstarts located at https://github.com/infinispan/infinispan-quickstart . The following procedure uses the infinispan-quickstart/embedded-cache quickstart. Procedure 10.1. Create a Main Method in the Quickstart Class Create the Quickstart.java File Create a file called Quickstart.java at your project's location. Add the Quickstart Class Add the following class and method to the Quickstart.java file: Copy Dependencies and Compile Java Classes Use the following command to copy all project dependencies to a directory and compile the Java classes from your project: Run the Main Method Use the following command to run the main method:
[ "package com.mycompany.app; import org.infinispan.manager.DefaultCacheManager; import org.infinispan.Cache; public class Quickstart { public static void main(String args[]) throws Exception { Cache<Object, Object> cache = new DefaultCacheManager().getCache(); } }", "mvn clean compile dependency:copy-dependencies -DstripVersion", "java -cp target/classes/:target/dependency/* com.mycompany.app.Quickstart" ]
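The Quickstart above only obtains the default cache. As an optional next step, the standard Cache API can be used to store and read an entry; the key and value below are arbitrary examples:

package com.mycompany.app;

import org.infinispan.Cache;
import org.infinispan.manager.DefaultCacheManager;

public class Quickstart {
   public static void main(String args[]) throws Exception {
      DefaultCacheManager cacheManager = new DefaultCacheManager();
      Cache<Object, Object> cache = cacheManager.getCache();

      // store an entry in the default cache and read it back
      cache.put("key", "value");
      System.out.println("Read from cache: " + cache.get("key"));

      // release the resources held by the cache manager
      cacheManager.stop();
   }
}

The same mvn and java commands shown above compile and run this variant.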
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/getting_started_guide/chap-Run_Red_Hat_JBoss_Data_Grid_in_Library_Mode_Single-Node_Setup
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/automation_services_catalog_product_support_matrix/making-open-source-more-inclusive
Chapter 17. DBService
Chapter 17. DBService 17.1. GetExportCapabilities GET /v1/db/exportcaps 17.1.1. Description 17.1.2. Parameters 17.1.3. Return Type V1GetDBExportCapabilitiesResponse 17.1.4. Content Type application/json 17.1.5. Responses Table 17.1. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetDBExportCapabilitiesResponse 0 An unexpected error response. GooglerpcStatus 17.1.6. Samples 17.1.7. Common object reference 17.1.7.1. DBExportManifestEncodingType The encoding of the file data in the restore body, usually for compression purposes. Enum Values UNKNOWN UNCOMPREESSED DEFLATED 17.1.7.2. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 17.1.7.3. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 17.1.7.3.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 17.1.7.4. V1DBExportFormat DBExportFormat describes a format (= a collection of files) for the database export. 
Field Name Required Nullable Type Description Format formatName String files List of V1DBExportFormatFile 17.1.7.5. V1DBExportFormatFile Field Name Required Nullable Type Description Format name String optional Boolean 17.1.7.6. V1GetDBExportCapabilitiesResponse Field Name Required Nullable Type Description Format formats List of V1DBExportFormat supportedEncodings List of DBExportManifestEncodingType 17.2. InterruptRestoreProcess POST /v1/db/interruptrestore/{processId}/{attemptId} 17.2.1. Description 17.2.2. Parameters 17.2.2.1. Path Parameters Name Description Required Default Pattern processId X null attemptId X null 17.2.3. Return Type V1InterruptDBRestoreProcessResponse 17.2.4. Content Type application/json 17.2.5. Responses Table 17.2. HTTP Response Codes Code Message Datatype 200 A successful response. V1InterruptDBRestoreProcessResponse 0 An unexpected error response. GooglerpcStatus 17.2.6. Samples 17.2.7. Common object reference 17.2.7.1. DBRestoreProcessStatusResumeInfo Field Name Required Nullable Type Description Format pos String int64 17.2.7.2. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 17.2.7.3. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 17.2.7.3.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) 
Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 17.2.7.4. V1InterruptDBRestoreProcessResponse Field Name Required Nullable Type Description Format resumeInfo DBRestoreProcessStatusResumeInfo 17.3. GetActiveRestoreProcess GET /v1/db/restore 17.3.1. Description 17.3.2. Parameters 17.3.3. Return Type V1GetActiveDBRestoreProcessResponse 17.3.4. Content Type application/json 17.3.5. Responses Table 17.3. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetActiveDBRestoreProcessResponse 0 An unexpected error response. GooglerpcStatus 17.3.6. Samples 17.3.7. Common object reference 17.3.7.1. DBExportManifestEncodingType The encoding of the file data in the restore body, usually for compression purposes. Enum Values UNKNOWN UNCOMPREESSED DEFLATED 17.3.7.2. DBRestoreProcessStatusResumeInfo Field Name Required Nullable Type Description Format pos String int64 17.3.7.3. DBRestoreRequestHeaderLocalFileInfo LocalFileInfo provides information about the file on the local machine of the user initiating the restore process, in order to provide information to other users about ongoing restore processes. Field Name Required Nullable Type Description Format path String The full path of the file. bytesSize String The size of the file, in bytes. 0 if unknown. int64 17.3.7.4. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 17.3.7.5. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 17.3.7.5.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. 
However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 17.3.7.6. V1DBExportManifest A DB export manifest describes the file contents of a restore request. To prevent data loss, a manifest is always interpreted as binding, i.e., the server must ensure that it will read and make use of every file listed in the manifest, otherwise it must reject the request. Field Name Required Nullable Type Description Format files List of V1DBExportManifestFile 17.3.7.7. V1DBExportManifestFile A single file in the restore body. Field Name Required Nullable Type Description Format name String The name of the file. This may or may not be a (relative) file path and up to the server to interpret. For databases exported as ZIP files, this is the path relative to the root of the archive. encoding DBExportManifestEncodingType UNKNOWN, UNCOMPREESSED, DEFLATED, encodedSize String int64 decodedSize String int64 decodedCrc32 Long The CRC32 (IEEE) checksum of the decoded(!) data. int64 17.3.7.8. V1DBRestoreProcessMetadata The metadata of an ongoing or completed restore process. This is the static metadata, which will not change (i.e., it is not a status). Field Name Required Nullable Type Description Format id String An ID identifying the restore process. Auto-assigned. header V1DBRestoreRequestHeader startTime Date The time at which the restore process was started. date-time initiatingUserName String The user who initiated the database restore process. 17.3.7.9. V1DBRestoreProcessStatus Field Name Required Nullable Type Description Format metadata V1DBRestoreProcessMetadata attemptId String state V1DBRestoreProcessStatusState UNKNOWN, NOT_STARTED, IN_PROGRESS, PAUSED, COMPLETED, resumeInfo DBRestoreProcessStatusResumeInfo error String bytesRead String int64 filesProcessed String int64 17.3.7.10. V1DBRestoreProcessStatusState COMPLETED: successful if error is empty, unsuccessful otherwise Enum Values UNKNOWN NOT_STARTED IN_PROGRESS PAUSED COMPLETED 17.3.7.11. V1DBRestoreRequestHeader Field Name Required Nullable Type Description Format formatName String The name of the database export format. Mandatory. manifest V1DBExportManifest localFile DBRestoreRequestHeaderLocalFileInfo 17.3.7.12. V1GetActiveDBRestoreProcessResponse Field Name Required Nullable Type Description Format activeStatus V1DBRestoreProcessStatus 17.4. CancelRestoreProcess DELETE /v1/db/restore/{id} 17.4.1. Description 17.4.2. Parameters 17.4.2.1. Path Parameters Name Description Required Default Pattern id X null 17.4.3. Return Type Object 17.4.4. Content Type application/json 17.4.5. Responses Table 17.4. HTTP Response Codes Code Message Datatype 200 A successful response. 
Object 0 An unexpected error response. GooglerpcStatus 17.4.6. Samples 17.4.7. Common object reference 17.4.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 17.4.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 17.4.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics.
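The Samples sections in this chapter are empty; for illustration only, the following sketch calls GetExportCapabilities with the JDK HTTP client. The Central address and API token are placeholders, not values defined by this API:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DbExportCapsSample {
    public static void main(String[] args) throws Exception {
        // placeholder endpoint and token -- replace with your Central address and credentials
        String central = "https://central.example.com";
        String token = "<api_token>";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(central + "/v1/db/exportcaps"))
                .header("Authorization", "Bearer " + token)
                .GET()
                .build();

        // the response body is the JSON form of V1GetDBExportCapabilitiesResponse
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}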
[ "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/api_reference/dbservice
Chapter 38. Configuring and running standalone Business Central
Chapter 38. Configuring and running standalone Business Central You can use the Business Central standalone JAR file to run Business Central without deploying it to an application server. You can use sample configuration files to run the Business Central standalone JAR file out of the box or you can customize the sample files for your requirements. Note This JAR file is supported only when it is run on Red Hat Enterprise Linux. Prerequisites The Red Hat Process Automation Manager 7.13.5 Business Central Standalone ( rhpam-7.13.5-business-central-standalone.jar ) and the Red Hat Process Automation Manager 7.13.5 Add Ons ( rhpam-7.13.5-add-ons.zip ) files have been downloaded from the Software Downloads page for Red Hat Process Automation Manager 7.13, as described in Chapter 32, Downloading the Red Hat Process Automation Manager installation files . Procedure Extract the downloaded rhpam-7.13.5-add-ons.zip to a temporary directory. This archive includes the rhpam-7.13.5-standalone-sample-configuration.zip file. Extract the rhpam-7.13.5-standalone-sample-configuration.zip file to the directory that contains the rhpam-7.13.5-business-central-standalone.jar file. The rhpam-7.13.5-standalone-sample-configuration.zip file contains the following sample configuration files: application-script.cli : Sample script for adding a user and KIE Server system properties kie-fs-realm-users : Sample user data You can run the rhpam-7.13.5-business-central-standalone.jar file with the sample data provided in the configuration files or you can customize the data for your requirements. To customize the configuration data, complete the following steps: Edit the application-script.cli file to include an administrative user with admin , user , rest-all , rest-client and kie-server roles. In the following example, replace <USERNAME> and <PASSWORD> with the username and password of the user you want to create. To run the Business Central standalone JAR file, enter the following command: To set application properties when you run the JAR file, include the -D<PROPERTY>=<VALUE> parameter in the command, where <PROPERTY> is the name of a supported application property and <VALUE> is the property value: For example, to run Business Central and connect to KIE Server as the user controllerUser , enter: java -jar rhpam-7.13.5-business-central-standalone.jar \ --cli-script=application-script.cli \ -Dorg.kie.server.user=controllerUser \ -Dorg.kie.server.pwd=controllerUser1234 Doing this enables you to deploy containers to KIE Server. See Appendix A, Business Central system properties for more information. Note To enable user and group management in Business Central, set the value of the org.uberfire.ext.security.management.wildfly.cli.folderPath property to kie-fs-realm-users .
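For instance, with a hypothetical administrative user named adminUser (the name and password here are illustrative only), the application-script.cli entries would read:

/subsystem=elytron/filesystem-realm=KieRealm:add-identity(identity=adminUser)
/subsystem=elytron/filesystem-realm=KieRealm:set-password(identity=adminUser, clear={password="Password1!"})
/subsystem=elytron/filesystem-realm=KieRealm:add-identity-attribute(identity=adminUser, name=role, value=["admin","user","rest-all","rest-client","kie-server"])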
[ "/subsystem=elytron/filesystem-realm=KieRealm:add-identity(identity=<USERNAME>) /subsystem=elytron/filesystem-realm=KieRealm:set-password(identity=<USERNAME>, clear={password=\"<PASSWORD>\"}) /subsystem=elytron/filesystem-realm=KieRealm:add-identity-attribute(identity=<USERNAME>, name=role, value=[\"admin\",\"user\",\"rest-all\",\"rest-client\",\"kie-server\"])", "java -jar rhpam-7.13.5-business-central-standalone.jar --cli-script=application-script.cli", "java -jar rhpam-7.13.5-business-central-standalone.jar --cli-script=application-script.cli -D<PROPERTY>=<VALUE> -D<PROPERTY>=<VALUE>", "java -jar rhpam-7.13.5-business-central-standalone.jar --cli-script=application-script.cli -Dorg.kie.server.user=controllerUser -Dorg.kie.server.pwd=controllerUser1234" ]
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/installing_and_configuring_red_hat_decision_manager/run-dc-standalone-proc_install-on-jws
Multicluster global hub
Multicluster global hub Red Hat Advanced Cluster Management for Kubernetes 2.12 Multicluster global hub
null
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html/multicluster_global_hub/index
Chapter 42. PodDisruptionBudgetTemplate schema reference
Chapter 42. PodDisruptionBudgetTemplate schema reference Used in: CruiseControlTemplate , KafkaBridgeTemplate , KafkaClusterTemplate , KafkaConnectTemplate , KafkaMirrorMakerTemplate , ZookeeperClusterTemplate Full list of PodDisruptionBudgetTemplate schema properties A PodDisruptionBudget (PDB) is an OpenShift resource that ensures high availability by specifying the minimum number of pods that must be available during planned maintenance or upgrades. Streams for Apache Kafka creates a PDB for every new StrimziPodSet or Deployment . By default, the PDB allows only one pod to be unavailable at any given time. You can increase the number of unavailable pods allowed by changing the default value of the maxUnavailable property. StrimziPodSet custom resources manage pods using a custom controller that cannot use the maxUnavailable value directly. Instead, the maxUnavailable value is automatically converted to a minAvailable value when creating the PDB resource, which effectively serves the same purpose, as illustrated in the following examples: If there are three broker pods and the maxUnavailable property is set to 1 in the Kafka resource, the minAvailable setting is 2 , allowing one pod to be unavailable. If there are three broker pods and the maxUnavailable property is set to 0 (zero), the minAvailable setting is 3 , requiring all three broker pods to be available and allowing zero pods to be unavailable. Example PodDisruptionBudget template configuration # ... template: podDisruptionBudget: metadata: labels: key1: label1 key2: label2 annotations: key1: label1 key2: label2 maxUnavailable: 1 # ... 42.1. PodDisruptionBudgetTemplate schema properties Property Property type Description metadata MetadataTemplate Metadata to apply to the PodDisruptionBudgetTemplate resource. maxUnavailable integer Maximum number of unavailable pods to allow automatic Pod eviction. A Pod eviction is allowed when the maxUnavailable number of pods or fewer are unavailable after the eviction. Setting this value to 0 prevents all voluntary evictions, so the pods must be evicted manually. Defaults to 1.
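As a sketch of where this fragment is placed, in a Kafka custom resource the podDisruptionBudget template is configured per component, for example under spec.kafka ; the cluster name here is illustrative:

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    template:
      podDisruptionBudget:
        maxUnavailable: 1
    # ...

With three broker pods, this setting produces a PDB with minAvailable: 2 , as described above.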
[ "template: podDisruptionBudget: metadata: labels: key1: label1 key2: label2 annotations: key1: label1 key2: label2 maxUnavailable: 1" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-PodDisruptionBudgetTemplate-reference
Node.js Runtime Guide
Node.js Runtime Guide Red Hat build of Node.js 22 Use Node.js 22 to develop scalable network applications that run on OpenShift and on stand-alone RHEL Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_build_of_node.js/22/html/node.js_runtime_guide/index
Chapter 8. Memory
Chapter 8. Memory This chapter covers memory optimization options for virtualized environments. 8.1. Memory Tuning Tips To optimize memory performance in a virtualized environment, consider the following: Do not allocate more resources to a guest than it will use. If possible, assign a guest to a single NUMA node, provided that sufficient resources are available on that NUMA node. For more information on using NUMA, see Chapter 9, NUMA .
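As an illustration of the second tip, guest memory can be bound to a single host NUMA node with a <numatune> element in the guest's libvirt XML. The following sketch assumes that host node 0 has enough free memory for the guest:

<domain type='kvm'>
  ...
  <numatune>
    <!-- allocate all guest memory from host NUMA node 0 -->
    <memory mode='strict' nodeset='0'/>
  </numatune>
  ...
</domain>

With strict mode, allocation fails if node 0 cannot satisfy it; Chapter 9, NUMA describes these tuning options in more detail.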
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_tuning_and_optimization_guide/chap-virtualization_tuning_optimization_guide-memory
Chapter 11. Setting limits on brokers using the Kafka Static Quota plugin
Chapter 11. Setting limits on brokers using the Kafka Static Quota plugin Important The Kafka Static Quota plugin is a Technology Preview only. Technology Preview features are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend implementing any Technology Preview features in production environments. This Technology Preview feature provides early access to upcoming product innovations, enabling you to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Use the Kafka Static Quota plugin to set throughput and storage limits on brokers in your Kafka cluster. You enable the plugin and set limits by adding properties to the Kafka configuration file. You can set a byte-rate threshold and storage quotas to put limits on the clients interacting with your brokers. You can set byte-rate thresholds for producer and consumer bandwidth. The total limit is distributed across all clients accessing the broker. For example, you can set a byte-rate threshold of 40 MBps for producers. If two producers are running, they are each limited to a throughput of 20 MBps. Storage quotas throttle Kafka disk storage limits between a soft limit and a hard limit. The limits apply to all available disk space. Producers are slowed gradually between the soft and hard limit. The limits prevent disks from filling up too quickly and exceeding their capacity. Full disks can lead to issues that are hard to rectify. The hard limit is the maximum storage limit. Note For JBOD storage, the limit applies across all disks. If a broker is using two 1 TB disks and the quota is 1.1 TB, one disk might fill and the other disk will be almost empty. Prerequisites Streams for Apache Kafka is installed on each host , and the configuration files are available. Procedure Edit the Kafka configuration properties file. The plugin properties are shown in this example configuration. Example Kafka Static Quota plugin configuration # ... client.quota.callback.class=io.strimzi.kafka.quotas.StaticQuotaCallback 1 client.quota.callback.static.produce=1000000 2 client.quota.callback.static.fetch=1000000 3 client.quota.callback.static.storage.soft=400000000000 4 client.quota.callback.static.storage.hard=500000000000 5 client.quota.callback.static.storage.check-interval=5 6 # ... 1 Loads the Kafka Static Quota plugin. 2 Sets the producer byte-rate threshold. 1 MBps in this example. 3 Sets the consumer byte-rate threshold. 1 MBps in this example. 4 Sets the lower soft limit for storage. 400 GB in this example. 5 Sets the higher hard limit for storage. 500 GB in this example. 6 Sets the interval in seconds between checks on storage. 5 seconds in this example. You can set this to 0 to disable the check. Start the Kafka broker with the default configuration file. su - kafka /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/kraft/server.properties Verify that the Kafka broker is running. jcmd | grep Kafka
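For instance, the 40 MBps producer example above corresponds to the following property, using the same decimal byte convention as the example configuration (1 MBps = 1000000 bytes per second):

client.quota.callback.static.produce=40000000

With two producers connected, each is then throttled to roughly 20 MBps, as described above.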
[ "client.quota.callback.class=io.strimzi.kafka.quotas.StaticQuotaCallback 1 client.quota.callback.static.produce=1000000 2 client.quota.callback.static.fetch=1000000 3 client.quota.callback.static.storage.soft=400000000000 4 client.quota.callback.static.storage.hard=500000000000 5 client.quota.callback.static.storage.check-interval=5 6", "su - kafka /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/kraft/server.properties", "jcmd | grep Kafka" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/using_streams_for_apache_kafka_on_rhel_in_kraft_mode/proc-setting-broker-limits-str
Chapter 5. Tuning a model by using the Training Operator
Chapter 5. Tuning a model by using the Training Operator To tune a model by using the Kubeflow Training Operator, you configure and run a training job. Optionally, you can use Low-Rank Adaptation (LoRA) to efficiently fine-tune large language models, such as Llama 3. The integration optimizes computational requirements and reduces memory footprint, allowing fine-tuning on consumer-grade GPUs. The solution combines PyTorch Fully Sharded Data Parallel (FSDP) and LoRA to enable scalable, cost-effective model training and inference, enhancing the flexibility and performance of AI workloads within OpenShift environments. 5.1. Configuring the training job Before you can use a training job to tune a model, you must configure the training job. The example training job in this section is based on the IBM and Hugging Face tuning example provided in GitHub . Prerequisites You have logged in to OpenShift. You have access to a data science cluster that is configured to run distributed workloads as described in Managing distributed workloads . You have created a data science project. For information about how to create a project, see Creating a data science project . You have Admin access for the data science project. If you created the project, you automatically have Admin access. If you did not create the project, your cluster administrator must give you Admin access. You have access to a model. You have access to data that you can use to train the model. Procedure In a terminal window, if you are not already logged in to your OpenShift cluster, log in to the OpenShift CLI as shown in the following example: Configure a training job, as follows: Create a YAML file named config_trainingjob.yaml . Add the ConfigMap object definition as follows: Example training-job configuration Optional: To fine-tune with Low Rank Adaptation (LoRA), update the config.json section as follows: Set the peft_method parameter to "lora" . Add the lora_r , lora_alpha , lora_dropout , bias , and target_modules parameters. Example LoRA configuration Optional: To fine-tune with Quantized Low Rank Adaptation (QLoRA), update the config.json section as follows: Set the use_flash_attn parameter to "true" . Set the peft_method parameter to "lora" . Add the LoRA parameters: lora_r , lora_alpha , lora_dropout , bias , and target_modules . Add the QLoRA mandatory parameters: auto_gptq , torch_dtype , and fp16 . If required, add the QLoRA optional parameters: fused_lora and fast_kernels . Example QLoRA configuration Edit the metadata of the training-job configuration as shown in the following table. Table 5.1. Training-job configuration metadata Parameter Value name Name of the training-job configuration namespace Name of your project Edit the parameters of the training-job configuration as shown in the following table. Table 5.2. 
Training-job configuration parameters Parameter Value model_name_or_path Name of the pre-trained model or the path to the model in the training-job container; in this example, the model name is taken from the Hugging Face web page training_data_path Path to the training data that you set in the training_data.yaml ConfigMap output_dir Output directory for the model save_model_dir Directory where the tuned model is saved num_train_epochs Number of epochs for training; in this example, the training job is set to run 10 times per_device_train_batch_size Batch size, the number of data set examples to process together; in this example, the training job processes 4 examples at a time per_device_eval_batch_size Batch size, the number of data set examples to process together per GPU or TPU core or CPU; in this example, the training job processes 4 examples at a time gradient_accumulation_steps Number of gradient accumulation steps save_strategy How often model checkpoints can be saved; the default value is "epoch" (save model checkpoint every epoch), other possible values are "steps" (save model checkpoint for every training step) and "no" (do not save model checkpoints) save_total_limit Number of model checkpoints to save; omit if save_strategy is set to "no" (no model checkpoints saved) learning_rate Learning rate for the training weight_decay Weight decay to apply lr_scheduler_type Optional: Scheduler type to use; the default value is "linear" , other possible values are "cosine" , "cosine_with_restarts" , "polynomial" , "constant" , and "constant_with_warmup" include_tokens_per_second Optional: Whether or not to compute the number of tokens per second per device for training speed metrics response_template Template formatting for the response dataset_text_field Dataset field for training output, as set in the training_data.yaml config map padding_free Whether to use a technique to process multiple examples in a single batch without adding padding tokens that waste compute resources; if used, this parameter must be set to ["huggingface"] multipack Whether to use a technique for multi-GPU training to balance the number of tokens processed in each device, to minimize waiting time; you can experiment with different values to find the optimum value for your training job use_flash_attn Whether to use flash attention peft_method Tuning method: for full fine-tuning, omit this parameter; for LoRA and QLoRA, set to "lora" ; for prompt tuning, set to "pt" lora_r LoRA: Rank of the low-rank decomposition lora_alpha LoRA: Scale the low-rank matrices to control their influence on the model's adaptations lora_dropout LoRA: Dropout rate applied to the LoRA layers, a regularization technique to prevent overfitting bias LoRA: Whether to adapt bias terms in the model; setting the bias to "none" indicates that no bias terms will be adapted target_modules LoRA: Names of the modules to apply LoRA to; to include all linear layers, set to "all_linear"; optional parameter for some models auto_gptq QLoRA: Sets 4-bit GPTQ-LoRA with AutoGPTQ; when used, this parameter must be set to ["triton_v2"] torch_dtype QLoRA: Tensor datatype; when used, this parameter must be set to float16 fp16 QLoRA: Whether to use half-precision floating-point format; when used, this parameter must be set to true fused_lora QLoRA: Whether to use fused LoRA for more efficient LoRA training; if used, this parameter must be set to ["auto_gptq", true] fast_kernels QLoRA: Whether to use fast cross-entropy, rope, rms loss kernels; if used, this 
parameter must be set to [true, true, true] Save your changes in the config_trainingjob.yaml file. Apply the configuration to create the training-config object: Create the training data. Note The training data in this simple example is for demonstration purposes only, and is not suitable for production use. The usual method for providing training data is to use persistent volumes. Create a YAML file named training_data.yaml . Add the following ConfigMap object definition: Replace the example namespace value kfto with the name of your project. Replace the example training data with your training data. Save your changes in the training_data.yaml file. Apply the configuration to create the training data: Create a persistent volume claim (PVC), as follows: Create a YAML file named trainedmodelpvc.yaml . Add the following PersistentVolumeClaim object definition: Replace the example namespace value kfto with the name of your project, and update the other parameters to suit your environment. To calculate the storage value, multiply the model size by the number of epochs, and add a little extra as a buffer. Save your changes in the trainedmodelpvc.yaml file. Apply the configuration to create a Persistent Volume Claim (PVC) for the training job: Verification In the OpenShift console, select your project from the Project list. Click ConfigMaps and verify that the training-config and twitter-complaints ConfigMaps are listed. Click Search . From the Resources list, select PersistentVolumeClaim and verify that the trained-model PVC is listed. 5.2. Running the training job You can run a training job to tune a model. The example training job in this section is based on the IBM and Hugging Face tuning example provided here . Prerequisites You have access to a data science cluster that is configured to run distributed workloads as described in Managing distributed workloads . You have created a data science project. For information about how to create a project, see Creating a data science project . You have Admin access for the data science project. If you created the project, you automatically have Admin access. If you did not create the project, your cluster administrator must give you Admin access. You have access to a model. You have access to data that you can use to train the model. You have configured the training job as described in Configuring the training job . Procedure In a terminal window, log in to the OpenShift CLI as shown in the following example: Create a PyTorch training job, as follows: Create a YAML file named pytorchjob.yaml . Add the following PyTorchJob object definition: Replace the example namespace value kfto with the name of your project, and update the other parameters to suit your environment. Edit the parameters of the PyTorch training job, to provide the details for your training job and environment. Save your changes in the pytorchjob.yaml file. Apply the configuration to run the PyTorch training job: Verification In the OpenShift console, select your project from the Project list. Click Workloads Pods and verify that the <training-job-name> -master-0 pod is listed. 5.3. Monitoring the training job When you run a training job to tune a model, you can monitor the progress of the job. The example training job in this section is based on the IBM and Hugging Face tuning example provided here . Prerequisites You have access to a data science cluster that is configured to run distributed workloads as described in Managing distributed workloads . 
You have created a data science project. For information about how to create a project, see Creating a data science project . You have Admin access for the data science project. If you created the project, you automatically have Admin access. If you did not create the project, your cluster administrator must give you Admin access. You have access to a model. You have access to data that you can use to train the model. You are running the training job as described in Running the training job . Procedure In the OpenShift console, select your project from the Project list. Click Workloads Pods . Search for the pod that corresponds to the PyTorch job, that is, <training-job-name> -master-0 . For example, if the training job name is kfto-demo , the pod name is kfto-demo-master-0 . Click the pod name to open the pod details page. Click the Logs tab to monitor the progress of the job and view status updates, as shown in the following example output: In the example output, the solid blocks indicate progress bars. Verification The <training-job-name> -master-0 pod is running. The Logs tab provides information about the job progress and job status.
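As an alternative to the web console, you can follow the same progress output from the OpenShift CLI. This is a minimal sketch that assumes the example training job name kfto-demo and the example namespace kfto used earlier in this chapter; substitute your own names.
# Check the status of the PyTorch training job
oc get pytorchjob kfto-demo -n kfto
# Stream the training logs, including the progress bars shown in the example output
oc logs -f kfto-demo-master-0 -n kfto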
[ "oc login <openshift_cluster_url> -u <username> -p <password>", "kind: ConfigMap apiVersion: v1 metadata: name: training-config namespace: kfto data: config.json: | { \"model_name_or_path\": \"bigscience/bloom-560m\", \"training_data_path\": \"/data/input/twitter_complaints.json\", \"output_dir\": \"/data/output/tuning/bloom-twitter\", \"save_model_dir\": \"/mnt/output/model\", \"num_train_epochs\": 10.0, \"per_device_train_batch_size\": 4, \"per_device_eval_batch_size\": 4, \"gradient_accumulation_steps\": 4, \"save_strategy\": \"no\", \"learning_rate\": 1e-05, \"weight_decay\": 0.0, \"lr_scheduler_type\": \"cosine\", \"include_tokens_per_second\": true, \"response_template\": \"\\n### Label:\", \"dataset_text_field\": \"output\", \"padding_free\": [\"huggingface\"], \"multipack\": [16], \"use_flash_attn\": false }", "\"peft_method\": \"lora\", \"lora_r\": 8, \"lora_alpha\": 8, \"lora_dropout\": 0.1, \"bias\": \"none\", \"target_modules\": [\"all-linear\"] }", "\"use_flash_attn\": true, \"peft_method\": \"lora\", \"lora_r\": 8, \"lora_alpha\": 8, \"lora_dropout\": 0.1, \"bias\": \"none\", \"target_modules\": [\"all-linear\"], \"auto_gptq\": [\"triton_v2\"], \"torch_dtype\": float16, \"fp16\": true, \"fused_lora\": [\"auto_gptq\", true], \"fast_kernels\": [true, true, true] }", "oc apply -f config_trainingjob.yaml", "kind: ConfigMap apiVersion: v1 metadata: name: twitter-complaints namespace: kfto data: twitter_complaints.json: | [ {\"Tweet text\":\"@HMRCcustomers No this is my first job\",\"ID\":0,\"Label\":2,\"text_label\":\"no complaint\",\"output\":\"### Text: @HMRCcustomers No this is my first job\\n\\n### Label: no complaint\"}, {\"Tweet text\":\"@KristaMariePark Thank you for your interest! If you decide to cancel, you can call Customer Care at 1-800-NYTIMES.\",\"ID\":1,\"Label\":2,\"text_label\":\"no complaint\",\"output\":\"### Text: @KristaMariePark Thank you for your interest! If you decide to cancel, you can call Customer Care at 1-800-NYTIMES.\\n\\n### Label: no complaint\"}, {\"Tweet text\":\"@EE On Rosneath Arial having good upload and download speeds but terrible latency 200ms. Why is this.\",\"ID\":3,\"Label\":1,\"text_label\":\"complaint\",\"output\":\"### Text: @EE On Rosneath Arial having good upload and download speeds but terrible latency 200ms. Why is this.\\n\\n### Label: complaint\"}, {\"Tweet text\":\"Couples wallpaper, so cute. :) #BrothersAtHome\",\"ID\":4,\"Label\":2,\"text_label\":\"no complaint\",\"output\":\"### Text: Couples wallpaper, so cute. :) #BrothersAtHome\\n\\n### Label: no complaint\"}, {\"Tweet text\":\"@mckelldogs This might just be me, but-- eyedrops? Artificial tears are so useful when you're sleep-deprived and sp... https:\\/\\/t.co\\/WRtNsokblG\",\"ID\":5,\"Label\":2,\"text_label\":\"no complaint\",\"output\":\"### Text: @mckelldogs This might just be me, but-- eyedrops? Artificial tears are so useful when you're sleep-deprived and sp... https:\\/\\/t.co\\/WRtNsokblG\\n\\n### Label: no complaint\"}, {\"Tweet text\":\"@Yelp can we get the exact calculations for a business rating (for example if its 4 stars but actually 4.2) or do we use a 3rd party site?\",\"ID\":6,\"Label\":2,\"text_label\":\"no complaint\",\"output\":\"### Text: @Yelp can we get the exact calculations for a business rating (for example if its 4 stars but actually 4.2) or do we use a 3rd party site?\\n\\n### Label: no complaint\"}, {\"Tweet text\":\"@nationalgridus I have no water and the bill is current and paid. 
Can you do something about this?\",\"ID\":7,\"Label\":1,\"text_label\":\"complaint\",\"output\":\"### Text: @nationalgridus I have no water and the bill is current and paid. Can you do something about this?\\n\\n### Label: complaint\"}, {\"Tweet text\":\"@JenniferTilly Merry Christmas to as well. You get more stunning every year ��\",\"ID\":9,\"Label\":2,\"text_label\":\"no complaint\",\"output\":\"### Text: @JenniferTilly Merry Christmas to as well. You get more stunning every year ��\\n\\n### Label: no complaint\"} ]", "oc apply -f training_data.yaml", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: trained-model namespace: kfto spec: accessModes: - ReadWriteOnce resources: requests: storage: 50Gi", "oc apply -f trainedmodelpvc.yaml", "oc login <openshift_cluster_url> -u <username> -p <password>", "apiVersion: kubeflow.org/v1 kind: PyTorchJob metadata: name: kfto-demo namespace: kfto spec: pytorchReplicaSpecs: Master: replicas: 1 restartPolicy: Never template: spec: containers: - env: - name: SFT_TRAINER_CONFIG_JSON_PATH value: /etc/config/config.json image: 'quay.io/modh/fms-hf-tuning:release' imagePullPolicy: IfNotPresent name: pytorch volumeMounts: - mountPath: /etc/config name: config-volume - mountPath: /data/input name: dataset-volume - mountPath: /data/output name: model-volume volumes: - configMap: items: - key: config.json path: config.json name: training-config name: config-volume - configMap: name: twitter-complaints name: dataset-volume - name: model-volume persistentVolumeClaim: claimName: trained-model runPolicy: suspend: false", "oc apply -f pytorchjob.yaml", "0%| | 0/10 [00:00<?, ?it/s] 10%|█ | 1/10 [01:10<10:32, 70.32s/it] {'loss': 6.9531, 'grad_norm': 1104.0, 'learning_rate': 9e-06, 'epoch': 1.0} 10%|█ | 1/10 [01:10<10:32, 70.32s/it] 20%|██ | 2/10 [01:40<06:13, 46.71s/it] 30%|███ | 3/10 [02:26<05:25, 46.55s/it] {'loss': 2.4609, 'grad_norm': 736.0, 'learning_rate': 7e-06, 'epoch': 2.0} 30%|███ | 3/10 [02:26<05:25, 46.55s/it] 40%|████ | 4/10 [03:23<05:02, 50.48s/it] 50%|█████ | 5/10 [03:41<03:13, 38.66s/it] {'loss': 1.7617, 'grad_norm': 328.0, 'learning_rate': 5e-06, 'epoch': 3.0} 50%|█████ | 5/10 [03:41<03:13, 38.66s/it] 60%|██████ | 6/10 [04:54<03:22, 50.58s/it] {'loss': 3.1797, 'grad_norm': 1016.0, 'learning_rate': 4.000000000000001e-06, 'epoch': 4.0} 60%|██████ | 6/10 [04:54<03:22, 50.58s/it] 70%|███████ | 7/10 [06:03<02:49, 56.59s/it] {'loss': 2.9297, 'grad_norm': 984.0, 'learning_rate': 3e-06, 'epoch': 5.0} 70%|███████ | 7/10 [06:03<02:49, 56.59s/it] 80%|████████ | 8/10 [06:38<01:39, 49.57s/it] 90%|█████████ | 9/10 [07:22<00:48, 48.03s/it] {'loss': 1.4219, 'grad_norm': 684.0, 'learning_rate': 1.0000000000000002e-06, 'epoch': 6.0} 90%|█████████ | 9/10 [07:22<00:48, 48.03s/it]100%|██████████| 10/10 [08:25<00:00, 52.53s/it] {'loss': 1.9609, 'grad_norm': 648.0, 'learning_rate': 0.0, 'epoch': 6.67} 100%|██████████| 10/10 [08:25<00:00, 52.53s/it] {'train_runtime': 508.0444, 'train_samples_per_second': 0.197, 'train_steps_per_second': 0.02, 'train_loss': 2.63125, 'epoch': 6.67} 100%|██████████| 10/10 [08:28<00:00, 52.53s/it]100%|██████████| 10/10 [08:28<00:00, 50.80s/it]" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/working_with_distributed_workloads/tuning-a-model-by-using-the-training-operator_distributed-workloads
B.99. upstart
B.99. upstart B.99.1. RHBA-2010:0848 - upstart bug fix update An updated upstart package that fixes a bug in utmp table updating is now available. Upstart is an event-based replacement for the /sbin/init daemon, which handles starting tasks and services during boot, stopping them during shutdown, and supervising them while the system is running. Bug Fix BZ# 636487 When a mingetty session is terminated, the relevant entry in the utmp table is now correctly set to "DEAD_PROCESS". All users are advised to upgrade to this updated package, which resolves this issue. Note that after installing this update, a system reboot is required for the above changes to take effect.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/upstart
Chapter 4. Uninstalling OpenShift Data Foundation
Chapter 4. Uninstalling OpenShift Data Foundation 4.1. Uninstalling OpenShift Data Foundation in Internal-attached devices mode Use the steps in this section to uninstall OpenShift Data Foundation. Uninstall Annotations Annotations on the Storage Cluster are used to change the behavior of the uninstall process. To define the uninstall behavior, the following two annotations have been introduced in the storage cluster: uninstall.ocs.openshift.io/cleanup-policy: delete uninstall.ocs.openshift.io/mode: graceful The following table provides information on the different values that can be used with these annotations: Table 4.1. uninstall.ocs.openshift.io uninstall annotations descriptions Annotation Value Default Behavior cleanup-policy delete Yes Rook cleans up the physical drives and the DataDirHostPath cleanup-policy retain No Rook does not clean up the physical drives and the DataDirHostPath mode graceful Yes Rook and NooBaa pause the uninstall process until the administrator/user removes the Persistent Volume Claims (PVCs) and Object Bucket Claims (OBCs) mode forced No Rook and NooBaa proceed with the uninstall even if PVCs/OBCs provisioned using Rook and NooBaa exist, respectively Edit the value of the annotation to change the cleanup policy or the uninstall mode. Expected output for both commands: Prerequisites Ensure that the OpenShift Data Foundation cluster is in a healthy state. The uninstall process can fail when some of the pods are not terminated successfully due to insufficient resources or nodes. If the cluster is in an unhealthy state, contact Red Hat Customer Support before uninstalling OpenShift Data Foundation. Ensure that applications are not consuming persistent volume claims (PVCs) or object bucket claims (OBCs) using the storage classes provided by OpenShift Data Foundation. If any custom resources (such as custom storage classes, cephblockpools) were created by the admin, they must be deleted by the admin after removing the resources that consumed them. Procedure Delete the volume snapshots that are using OpenShift Data Foundation. List the volume snapshots from all the namespaces. From the output of the command, identify and delete the volume snapshots that are using OpenShift Data Foundation. <VOLUME-SNAPSHOT-NAME> Is the name of the volume snapshot <NAMESPACE> Is the project namespace Delete PVCs and OBCs that are using OpenShift Data Foundation. In the default uninstall mode (graceful), the uninstaller waits until all the PVCs and OBCs that use OpenShift Data Foundation are deleted. If you want to delete the Storage Cluster without deleting the PVCs, you can set the uninstall mode annotation to forced and skip this step. Doing so results in orphan PVCs and OBCs in the system. Delete OpenShift Container Platform monitoring stack PVCs using OpenShift Data Foundation. See Removing monitoring stack from OpenShift Data Foundation Delete OpenShift Container Platform Registry PVCs using OpenShift Data Foundation. Removing OpenShift Container Platform registry from OpenShift Data Foundation Delete OpenShift Container Platform logging PVCs using OpenShift Data Foundation. Removing the cluster logging operator from OpenShift Data Foundation Delete the other PVCs and OBCs provisioned using OpenShift Data Foundation. The following sample script identifies the PVCs and OBCs provisioned using OpenShift Data Foundation. The script ignores the PVCs that are used internally by OpenShift Data Foundation. Note Omit RGW_PROVISIONER for cloud platforms. Delete the OBCs.
<obc-name> Is the name of the OBC <project-name> Is the name of the project Delete the PVCs. <pvc-name> Is the name of the PVC <project-name> Is the name of the project Note Ensure that you have removed any custom backing stores, bucket classes, etc., created in the cluster. Delete the Storage System object and wait for the removal of the associated resources. Check the cleanup pods if the uninstall.ocs.openshift.io/cleanup-policy was set to delete (default) and ensure that their status is Completed . Example output: Confirm that the directory /var/lib/rook is now empty. This directory is empty only if the uninstall.ocs.openshift.io/cleanup-policy annotation was set to delete (default). If encryption was enabled at the time of install, remove dm-crypt managed device-mapper mapping from the OSD devices on all the OpenShift Data Foundation nodes. Create a debug pod and chroot to the host on the storage node. <node-name> Is the name of the node Get the device names and make note of the OpenShift Data Foundation devices. Example output: Remove the mapped device. Important If the above command gets stuck due to insufficient privileges, run the following commands: Press CTRL+Z to exit the above command. Find the PID of the stuck process. Terminate the process using the kill command. <PID> Is the process ID Verify that the device name is removed. Delete the namespace and wait until the deletion is complete. You need to switch to another project if openshift-storage is the active project. For example: The project is deleted if the following command returns a NotFound error. Note While uninstalling OpenShift Data Foundation, if the namespace is not deleted completely and remains in the Terminating state, perform the steps in Troubleshooting and deleting remaining resources during Uninstall to identify objects that are blocking the namespace from being terminated. Delete local storage operator configurations if you have deployed OpenShift Data Foundation using local storage devices. See Removing local storage operator configurations . Unlabel the storage nodes. Remove the OpenShift Data Foundation taint if the nodes were tainted. Confirm that all the persistent volumes (PVs) provisioned using OpenShift Data Foundation are deleted. If there is any PV left in the Released state, delete it. <pv-name> Is the name of the PV Remove the CustomResourceDefinitions . To ensure that OpenShift Data Foundation is uninstalled completely, on the OpenShift Container Platform Web Console, click Storage . Verify that OpenShift Data Foundation no longer appears under Storage. 4.1.1. Removing local storage operator configurations Use the instructions in this section only if you have deployed OpenShift Data Foundation using local storage devices. Note For OpenShift Data Foundation deployments only using localvolume resources, go directly to step 8. Procedure Identify the LocalVolumeSet and the corresponding StorageClassName being used by OpenShift Data Foundation. Set the variable SC to the StorageClass providing the LocalVolumeSet . List and note the devices to be cleaned up later. To list the device IDs of the disks, see Find the available storage devices . Example output: Delete the LocalVolumeSet . Delete the local storage PVs for the given StorageClassName . Delete the StorageClassName . Delete the symlinks created by the LocalVolumeSet . Delete LocalVolumeDiscovery . Remove the LocalVolume resources (if any).
Use the following steps to remove the LocalVolume resources that were used to provision PVs in the current or a previous OpenShift Data Foundation version. Also, ensure that these resources are not being used by other tenants on the cluster. For each of the local volumes, do the following: Identify the LocalVolume and the corresponding StorageClassName being used by OpenShift Data Foundation. Set the variable LV to the name of the LocalVolume and the variable SC to the name of the StorageClass . For example: List and note the devices to be cleaned up later. Example output: Delete the local volume resource. Delete the remaining PVs and StorageClasses if they exist. Clean up the artifacts from the storage nodes for that resource. Example output: Wipe the disks for each of the local volumesets or local volumes listed in steps 1 and 8, respectively, so that they can be reused. List the storage nodes. Example output: Obtain the node console and execute the chroot /host command when the prompt appears. Store the disk paths in the DISKS variable within quotes. For the list of disk paths, see step 3 and step 8.c for the local volumeset and local volume, respectively. Example output: Run sgdisk --zap-all on all the disks. Example output: Exit the shell and repeat for the other nodes. Delete the openshift-local-storage namespace and wait until the deletion is complete. You will need to switch to another project if the openshift-local-storage namespace is the active project. For example: The project is deleted if the following command returns a NotFound error. 4.2. Removing monitoring stack from OpenShift Data Foundation Use this section to clean up the monitoring stack from OpenShift Data Foundation. The Persistent Volume Claims (PVCs) that are created as a part of configuring the monitoring stack are in the openshift-monitoring namespace. Prerequisites PVCs are configured to use OpenShift Container Platform monitoring stack. For more information, see configuring monitoring stack . Procedure List the pods and PVCs that are currently running in the openshift-monitoring namespace. Example output: Edit the monitoring configmap . Remove any config sections that reference the OpenShift Data Foundation storage classes as shown in the following example and save it. Before editing After editing In this example, alertmanagerMain and prometheusK8s monitoring components are using the OpenShift Data Foundation PVCs. Delete the relevant PVCs. Make sure you delete all the PVCs that are consuming the storage classes. <pvc-name> Is the name of the PVC 4.3. Removing OpenShift Container Platform registry from OpenShift Data Foundation Use this section to clean up the OpenShift Container Platform registry from OpenShift Data Foundation. If you want to configure alternative storage, see Image registry . The Persistent Volume Claims (PVCs) that are created as a part of configuring the OpenShift Container Platform registry are in the openshift-image-registry namespace. Prerequisites The image registry must have been configured to use an OpenShift Data Foundation PVC. Procedure Edit the configs.imageregistry.operator.openshift.io object and remove the content in the storage section. Before editing After editing In this example, the PVC is called registry-cephfs-rwx-pvc , which is now safe to delete. Delete the PVC. <pvc-name> Is the name of the PVC 4.4. Removing the cluster logging operator from OpenShift Data Foundation Use this section to clean up the cluster logging operator from OpenShift Data Foundation.
The Persistent Volume Claims (PVCs) that are created as a part of configuring the cluster logging operator are in the openshift-logging namespace. Prerequisites The cluster logging instance should have been configured to use the OpenShift Data Foundation PVCs. Procedure Remove the ClusterLogging instance in the namespace. The PVCs in the openshift-logging namespace are now safe to delete. Delete the PVCs. <pvc-name> Is the name of the PVC
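Before deleting, you can list the PVCs that remain in the namespace to confirm which ones to remove. A quick check, using the openshift-logging namespace from this section:
oc get pvc -n openshift-logging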
[ "oc -n openshift-storage annotate storagecluster ocs-storagecluster uninstall.ocs.openshift.io/cleanup-policy=\"retain\" --overwrite", "oc -n openshift-storage annotate storagecluster ocs-storagecluster uninstall.ocs.openshift.io/mode=\"forced\" --overwrite", "storagecluster.ocs.openshift.io/ocs-storagecluster annotated", "oc get volumesnapshot --all-namespaces", "oc delete volumesnapshot <VOLUME-SNAPSHOT-NAME> -n <NAMESPACE>", "#!/bin/bash RBD_PROVISIONER=\"openshift-storage.rbd.csi.ceph.com\" CEPHFS_PROVISIONER=\"openshift-storage.cephfs.csi.ceph.com\" NOOBAA_PROVISIONER=\"openshift-storage.noobaa.io/obc\" RGW_PROVISIONER=\"openshift-storage.ceph.rook.io/bucket\" NOOBAA_DB_PVC=\"noobaa-db\" NOOBAA_BACKINGSTORE_PVC=\"noobaa-default-backing-store-noobaa-pvc\" Find all the OCS StorageClasses OCS_STORAGECLASSES=USD(oc get storageclasses | grep -e \"USDRBD_PROVISIONER\" -e \"USDCEPHFS_PROVISIONER\" -e \"USDNOOBAA_PROVISIONER\" -e \"USDRGW_PROVISIONER\" | awk '{print USD1}') List PVCs in each of the StorageClasses for SC in USDOCS_STORAGECLASSES do echo \"======================================================================\" echo \"USDSC StorageClass PVCs and OBCs\" echo \"======================================================================\" oc get pvc --all-namespaces --no-headers 2>/dev/null | grep USDSC | grep -v -e \"USDNOOBAA_DB_PVC\" -e \"USDNOOBAA_BACKINGSTORE_PVC\" oc get obc --all-namespaces --no-headers 2>/dev/null | grep USDSC echo done", "oc delete obc <obc-name> -n <project-name>", "oc delete pvc <pvc-name> -n <project-name>", "oc delete -n openshift-storage storagesystem --all --wait=true", "oc get pods -n openshift-storage | grep -i cleanup", "NAME READY STATUS RESTARTS AGE cluster-cleanup-job-<xx> 0/1 Completed 0 8m35s cluster-cleanup-job-<yy> 0/1 Completed 0 8m35s cluster-cleanup-job-<zz> 0/1 Completed 0 8m35s", "for i in USD(oc get node -l cluster.ocs.openshift.io/openshift-storage= -o jsonpath='{ .items[*].metadata.name }'); do oc debug node/USD{i} -- chroot /host ls -l /var/lib/rook; done", "oc debug node/ <node-name>", "chroot /host", "dmsetup ls", "ocs-deviceset-0-data-0-57snx-block-dmcrypt (253:1)", "cryptsetup luksClose --debug --verbose ocs-deviceset-0-data-0-57snx-block-dmcrypt", "ps -ef | grep crypt", "kill -9 <PID>", "dmsetup ls", "oc project default", "oc delete project openshift-storage --wait=true --timeout=5m", "oc get project openshift-storage", "oc label nodes --all cluster.ocs.openshift.io/openshift-storage-", "oc label nodes --all topology.rook.io/rack-", "oc adm taint nodes --all node.ocs.openshift.io/storage-", "oc get pv", "oc delete pv <pv-name>", "oc delete crd backingstores.noobaa.io bucketclasses.noobaa.io cephblockpools.ceph.rook.io cephclusters.ceph.rook.io cephfilesystems.ceph.rook.io cephnfses.ceph.rook.io cephobjectstores.ceph.rook.io cephobjectstoreusers.ceph.rook.io noobaas.noobaa.io ocsinitializations.ocs.openshift.io storageclusters.ocs.openshift.io cephclients.ceph.rook.io cephobjectrealms.ceph.rook.io cephobjectzonegroups.ceph.rook.io cephobjectzones.ceph.rook.io cephrbdmirrors.ceph.rook.io storagesystems.odf.openshift.io --wait=true --timeout=5m", "oc get localvolumesets.local.storage.openshift.io -n openshift-local-storage", "export SC=\"<StorageClassName>\"", "/dev/disk/by-id/scsi-360050763808104bc28000000000000eb /dev/disk/by-id/scsi-360050763808104bc28000000000000ef /dev/disk/by-id/scsi-360050763808104bc28000000000000f3", "oc delete localvolumesets.local.storage.openshift.io <name-of-volumeset> -n openshift-local-storage", "oc 
get pv | grep USDSC | awk '{print USD1}'| xargs oc delete pv", "oc delete sc USDSC", "[[ ! -z USDSC ]] && for i in USD(oc get node -l cluster.ocs.openshift.io/openshift-storage= -o jsonpath='{ .items[*].metadata.name }'); do oc debug node/USD{i} -- chroot /host rm -rfv /mnt/local-storage/USD{SC}/; done", "oc delete localvolumediscovery.local.storage.openshift.io/auto-discover-devices -n openshift-local-storage", "oc get localvolume.local.storage.openshift.io -n openshift-local-storage", "LV=local-block SC=localblock", "oc get localvolume -n openshift-local-storage USDLV -o jsonpath='{ .spec.storageClassDevices[].devicePaths[] }{\"\\n\"}'", "/dev/sdb /dev/sdc /dev/sdd /dev/sde", "oc delete localvolume -n openshift-local-storage --wait=true USDLV", "oc delete pv -l storage.openshift.com/local-volume-owner-name=USD{LV} --wait --timeout=5m oc delete storageclass USDSC --wait --timeout=5m", "[[ ! -z USDSC ]] && for i in USD(oc get node -l cluster.ocs.openshift.io/openshift-storage= -o jsonpath='{ .items[*].metadata.name }'); do oc debug node/USD{i} -- chroot /host rm -rfv /mnt/local-storage/USD{SC}/; done", "Starting pod/node-xxx-debug To use host binaries, run `chroot /host` removed '/mnt/local-storage/localblock/nvme2n1' removed directory '/mnt/local-storage/localblock' Removing debug pod Starting pod/node-yyy-debug To use host binaries, run `chroot /host` removed '/mnt/local-storage/localblock/nvme2n1' removed directory '/mnt/local-storage/localblock' Removing debug pod Starting pod/node-zzz-debug To use host binaries, run `chroot /host` removed '/mnt/local-storage/localblock/nvme2n1' removed directory '/mnt/local-storage/localblock' Removing debug pod", "get nodes -l cluster.ocs.openshift.io/openshift-storage=", "NAME STATUS ROLES AGE VERSION node-xxx Ready worker 4h45m v1.18.3+6c42de8 node-yyy Ready worker 4h46m v1.18.3+6c42de8 node-zzz Ready worker 4h45m v1.18.3+6c42de8", "oc debug node/node-xxx Starting pod/node-xxx-debug ... To use host binaries, run `chroot /host` Pod IP: w.x.y.z If you don't see a command prompt, try pressing enter. sh-4.2# chroot /host", "sh-4.4# DISKS=\"/dev/disk/by-id/scsi-360050763808104bc28000000000000eb /dev/disk/by-id/scsi-360050763808104bc28000000000000ef /dev/disk/by-id/scsi-360050763808104bc28000000000000f3 \" or sh-4.2# DISKS=\"/dev/sdb /dev/sdc /dev/sdd /dev/sde \".", "sh-4.4# for disk in USDDISKS; do sgdisk --zap-all USDdisk;done", "Creating new GPT entries. GPT data structures destroyed! You may now partition the disk using fdisk or other utilities. Creating new GPT entries. GPT data structures destroyed! You may now partition the disk using fdisk or other utilities. Creating new GPT entries. GPT data structures destroyed! You may now partition the disk using fdisk or other utilities. Creating new GPT entries. GPT data structures destroyed! 
You may now partition the disk using fdisk or other utilities.", "sh-4.4# exit exit sh-4.2# exit exit Removing debug pod", "oc project default oc delete project openshift-local-storage --wait=true --timeout=5m", "oc get project openshift-local-storage", "oc get pod,pvc -n openshift-monitoring", "NAME READY STATUS RESTARTS AGE pod/alertmanager-main-0 3/3 Running 0 8d pod/alertmanager-main-1 3/3 Running 0 8d pod/alertmanager-main-2 3/3 Running 0 8d pod/cluster-monitoring- operator-84457656d-pkrxm 1/1 Running 0 8d pod/grafana-79ccf6689f-2ll28 2/2 Running 0 8d pod/kube-state-metrics- 7d86fb966-rvd9w 3/3 Running 0 8d pod/node-exporter-25894 2/2 Running 0 8d pod/node-exporter-4dsd7 2/2 Running 0 8d pod/node-exporter-6p4zc 2/2 Running 0 8d pod/node-exporter-jbjvg 2/2 Running 0 8d pod/node-exporter-jj4t5 2/2 Running 0 6d18h pod/node-exporter-k856s 2/2 Running 0 6d18h pod/node-exporter-rf8gn 2/2 Running 0 8d pod/node-exporter-rmb5m 2/2 Running 0 6d18h pod/node-exporter-zj7kx 2/2 Running 0 8d pod/openshift-state-metrics- 59dbd4f654-4clng 3/3 Running 0 8d pod/prometheus-adapter- 5df5865596-k8dzn 1/1 Running 0 7d23h pod/prometheus-adapter- 5df5865596-n2gj9 1/1 Running 0 7d23h pod/prometheus-k8s-0 6/6 Running 1 8d pod/prometheus-k8s-1 6/6 Running 1 8d pod/prometheus-operator- 55cfb858c9-c4zd9 1/1 Running 0 6d21h pod/telemeter-client- 78fc8fc97d-2rgfp 3/3 Running 0 8d NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-0 Bound pvc-0d519c4f-15a5-11ea-baa0-026d231574aa 40Gi RWO ocs-storagecluster-ceph-rbd 8d persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-1 Bound pvc-0d5a9825-15a5-11ea-baa0-026d231574aa 40Gi RWO ocs-storagecluster-ceph-rbd 8d persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-2 Bound pvc-0d6413dc-15a5-11ea-baa0-026d231574aa 40Gi RWO ocs-storagecluster-ceph-rbd 8d persistentvolumeclaim/my-prometheus-claim-prometheus-k8s-0 Bound pvc-0b7c19b0-15a5-11ea-baa0-026d231574aa 40Gi RWO ocs-storagecluster-ceph-rbd 8d persistentvolumeclaim/my-prometheus-claim-prometheus-k8s-1 Bound pvc-0b8aed3f-15a5-11ea-baa0-026d231574aa 40Gi RWO ocs-storagecluster-ceph-rbd 8d", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", ". . . apiVersion: v1 data: config.yaml: | alertmanagerMain: volumeClaimTemplate: metadata: name: my-alertmanager-claim spec: resources: requests: storage: 40Gi storageClassName: ocs-storagecluster-ceph-rbd prometheusK8s: volumeClaimTemplate: metadata: name: my-prometheus-claim spec: resources: requests: storage: 40Gi storageClassName: ocs-storagecluster-ceph-rbd kind: ConfigMap metadata: creationTimestamp: \"2019-12-02T07:47:29Z\" name: cluster-monitoring-config namespace: openshift-monitoring resourceVersion: \"22110\" selfLink: /api/v1/namespaces/openshift-monitoring/configmaps/cluster-monitoring-config uid: fd6d988b-14d7-11ea-84ff-066035b9efa8 . . .", ". . . apiVersion: v1 data: config.yaml: | kind: ConfigMap metadata: creationTimestamp: \"2019-11-21T13:07:05Z\" name: cluster-monitoring-config namespace: openshift-monitoring resourceVersion: \"404352\" selfLink: /api/v1/namespaces/openshift-monitoring/configmaps/cluster-monitoring-config uid: d12c796a-0c5f-11ea-9832-063cd735b81c . . .", "oc delete -n openshift-monitoring pvc <pvc-name> --wait=true --timeout=5m", "oc edit configs.imageregistry.operator.openshift.io", ". . . storage: pvc: claim: registry-cephfs-rwx-pvc . . .", ". . . storage: emptyDir: {} . . 
.", "oc delete pvc <pvc-name> -n openshift-image-registry --wait=true --timeout=5m", "oc delete clusterlogging instance -n openshift-logging --wait=true --timeout=5m", "oc delete pvc <pvc-name> -n openshift-logging --wait=true --timeout=5m" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/deploying_openshift_data_foundation_using_ibm_z_infrastructure/uninstalling_openshift_data_foundation
8.5. Multicast Considerations
8.5. Multicast Considerations When multiple applications listen to a multicast group, the kernel code that handles multicast frames is required by design to duplicate network data for each individual socket. This duplication is time-consuming and occurs in the softirq context. Adding multiple listeners on a single multicast group therefore has a direct impact on the softirq context's execution time. Adding a listener to a multicast group implies that the kernel must create an additional copy for each frame received for that group. The effect of this is minimal at low traffic volume and small listener numbers. However, when multiple sockets listen to a high-traffic multicast group, the increased execution time of the softirq context can lead to frame drops at both the network card and the socket queue. Increased softirq runtimes translate to reduced opportunity for applications to run on heavily-loaded systems, so the rate at which multicast frames are lost increases as the number of applications listening to a high-volume multicast group increases. Resolve this frame loss by optimizing your socket queues and NIC hardware buffers, as described in Section 8.4.2, "Socket Queue" or Section 8.4.1, "NIC Hardware Buffer" . Alternatively, you can optimize an application's socket use; to do so, configure the application to control a single socket and disseminate the received network data quickly to other user-space processes.
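To gauge how many local sockets have joined a given multicast group, and therefore how many copies the kernel must create per received frame, you can inspect the kernel's group membership state. The following commands are a quick diagnostic sketch, not part of the tuning procedure above; the Users column of /proc/net/igmp reports the number of local listeners for each group.
# Show IGMP group memberships per interface, including the listener count (Users column)
cat /proc/net/igmp
# Alternatively, list the multicast addresses joined on each interface
ip maddr show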
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/performance_tuning_guide/s-network-multicast
function::ns_tid
function::ns_tid Name function::ns_tid - Returns the thread ID of a target process as seen in a pid namespace Synopsis Arguments None Description This function returns the thread ID of a target process as seen in the target pid namespace, if one is provided, or in the stap process namespace otherwise.
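As an illustrative sketch only, the following one-liner uses ns_tid in a probe to print the namespace-local thread ID whenever the target process issues a write system call; the probe point and the target PID are assumptions for the example, not part of this function's definition.
# Print the thread ID as seen in the target pid namespace for each write() by the target process
stap -e 'probe syscall.write { if (pid() == target()) printf("ns_tid: %d\n", ns_tid()) }' -x <PID>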
[ "ns_tid:long()" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-ns-tid
Chapter 7. Introduction to RHEL system roles
Chapter 7. Introduction to RHEL system roles By using RHEL system roles, you can remotely manage the system configurations of multiple RHEL systems across major versions of RHEL. Important terms and concepts The following describes important terms and concepts in an Ansible environment: Control node A control node is the system from which you run Ansible commands and playbooks. Your control node can be an Ansible Automation Platform, Red Hat Satellite, or a RHEL 9, 8, or 7 host. For more information, see Preparing a control node on RHEL 9 . Managed node Managed nodes are the servers and network devices that you manage with Ansible. Managed nodes are also sometimes called hosts. Ansible does not have to be installed on managed nodes. For more information, see Preparing a managed node . Ansible playbook In a playbook, you define the configuration you want to achieve on your managed nodes or a set of steps for the system on the managed node to perform. Playbooks are Ansible's configuration, deployment, and orchestration language. Inventory In an inventory file, you list the managed nodes and specify information such as IP address for each managed node. In the inventory, you can also organize the managed nodes by creating and nesting groups for easier scaling. An inventory file is also sometimes called a hostfile. Available roles on a Red Hat Enterprise Linux 9 control node On a Red Hat Enterprise Linux 9 control node, the rhel-system-roles package provides the following roles: Role name Role description Chapter title certificate Certificate Issuance and Renewal Requesting certificates by using RHEL system roles cockpit Web console Installing and configuring web console with the cockpit RHEL system role crypto_policies System-wide cryptographic policies Setting a custom cryptographic policy across systems firewall Firewalld Configuring firewalld by using system roles ha_cluster HA Cluster Configuring a high-availability cluster by using system roles kdump Kernel Dumps Configuring kdump by using RHEL system roles kernel_settings Kernel Settings Using Ansible roles to permanently configure kernel parameters logging Logging Using the logging system role metrics Metrics (PCP) Monitoring performance by using RHEL system roles network Networking Using the network RHEL system role to manage InfiniBand connections nbde_client Network Bound Disk Encryption client Using the nbde_client and nbde_server system roles nbde_server Network Bound Disk Encryption server Using the nbde_client and nbde_server system roles postfix Postfix Variables of the postfix role in system roles postgresql PostgreSQL Installing and configuring PostgreSQL by using the postgresql RHEL system role selinux SELinux Configuring SELinux by using system roles ssh SSH client Configuring secure communication with the ssh system roles sshd SSH server Configuring secure communication with the ssh system roles storage Storage Managing local storage by using RHEL system roles tlog Terminal Session Recording Configuring a system for session recording by using the tlog RHEL system role timesync Time Synchronization Configuring time synchronization by using RHEL system roles vpn VPN Configuring VPN connections with IPsec by using the vpn RHEL system role Additional resources Automating system administration by using RHEL system roles Red Hat Enterprise Linux (RHEL) system roles /usr/share/ansible/roles/rhel-system-roles. <role_name> /README.md file /usr/share/doc/rhel-system-roles/ <role_name> / directory
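The following minimal playbook sketch shows how these concepts fit together, using the timesync role from the table above. The inventory group name, playbook file name, and NTP server value are illustrative assumptions; the timesync_ntp_servers variable is part of the timesync role's documented interface.
---
- name: Configure time synchronization on all managed nodes
  hosts: managed_nodes                # group defined in your inventory file
  roles:
    - rhel-system-roles.timesync      # provided by the rhel-system-roles package
  vars:
    timesync_ntp_servers:
      - hostname: 0.rhel.pool.ntp.org # illustrative NTP server
        iburst: true
Run the playbook from the control node: ansible-playbook -i inventory timesync-playbook.yml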
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_basic_system_settings/intro-to-rhel-system-roles_configuring-basic-system-settings
Chapter 12. NetworkAttachmentDefinition [k8s.cni.cncf.io/v1]
Chapter 12. NetworkAttachmentDefinition [k8s.cni.cncf.io/v1] Description NetworkAttachmentDefinition is a CRD schema specified by the Network Plumbing Working Group to express the intent for attaching pods to one or more logical or physical networks. More information available at: https://github.com/k8snetworkplumbingwg/multi-net-spec Type object 12.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object NetworkAttachmentDefinition spec defines the desired state of a network attachment 12.1.1. .spec Description NetworkAttachmentDefinition spec defines the desired state of a network attachment Type object Property Type Description config string NetworkAttachmentDefinition config is a JSON-formatted CNI configuration 12.2. API endpoints The following API endpoints are available: /apis/k8s.cni.cncf.io/v1/network-attachment-definitions GET : list objects of kind NetworkAttachmentDefinition /apis/k8s.cni.cncf.io/v1/namespaces/{namespace}/network-attachment-definitions DELETE : delete collection of NetworkAttachmentDefinition GET : list objects of kind NetworkAttachmentDefinition POST : create a NetworkAttachmentDefinition /apis/k8s.cni.cncf.io/v1/namespaces/{namespace}/network-attachment-definitions/{name} DELETE : delete a NetworkAttachmentDefinition GET : read the specified NetworkAttachmentDefinition PATCH : partially update the specified NetworkAttachmentDefinition PUT : replace the specified NetworkAttachmentDefinition 12.2.1. /apis/k8s.cni.cncf.io/v1/network-attachment-definitions Table 12.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind NetworkAttachmentDefinition Table 12.2. HTTP responses HTTP code Response body 200 - OK NetworkAttachmentDefinitionList schema 401 - Unauthorized Empty 12.2.2. /apis/k8s.cni.cncf.io/v1/namespaces/{namespace}/network-attachment-definitions Table 12.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 12.4.
Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of NetworkAttachmentDefinition Table 12.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. 
resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 12.6. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind NetworkAttachmentDefinition Table 12.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results.
If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 12.8. HTTP responses HTTP code Response body 200 - OK NetworkAttachmentDefinitionList schema 401 - Unauthorized Empty HTTP method POST Description create a NetworkAttachmentDefinition Table 12.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered.
Table 12.10. Body parameters Parameter Type Description body NetworkAttachmentDefinition schema Table 12.11. HTTP responses HTTP code Response body 200 - OK NetworkAttachmentDefinition schema 201 - Created NetworkAttachmentDefinition schema 202 - Accepted NetworkAttachmentDefinition schema 401 - Unauthorized Empty 12.2.3. /apis/k8s.cni.cncf.io/v1/namespaces/{namespace}/network-attachment-definitions/{name} Table 12.12. Global path parameters Parameter Type Description name string name of the NetworkAttachmentDefinition namespace string object name and auth scope, such as for teams and projects Table 12.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a NetworkAttachmentDefinition Table 12.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. The value must be a non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. Zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 12.15. Body parameters Parameter Type Description body DeleteOptions schema Table 12.16. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified NetworkAttachmentDefinition Table 12.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 12.18. HTTP responses HTTP code Response body 200 - OK NetworkAttachmentDefinition schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified NetworkAttachmentDefinition Table 12.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes.
The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.20. Body parameters Parameter Type Description body Patch schema Table 12.21. HTTP responses HTTP code Response body 200 - OK NetworkAttachmentDefinition schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified NetworkAttachmentDefinition Table 12.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.23. Body parameters Parameter Type Description body NetworkAttachmentDefinition schema Table 12.24.
HTTP responses HTTP code Response body 200 - OK NetworkAttachmentDefinition schema 201 - Created NetworkAttachmentDefinition schema 401 - Unauthorized Empty
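For reference, a list request against this API can be issued directly with curl; the API server address, namespace, and token handling shown here are placeholders, not part of this reference:

$ TOKEN=$(oc whoami -t)
$ curl -k -H "Authorization: Bearer $TOKEN" \
    https://<api_server>:6443/apis/k8s.cni.cncf.io/v1/namespaces/<namespace>/network-attachment-definitions

The OpenShift CLI wraps the same GET request: oc get network-attachment-definitions -n <namespace>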
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/network_apis/networkattachmentdefinition-k8s-cni-cncf-io-v1
probe::nfsd.write
probe::nfsd.write Name probe::nfsd.write - NFS server writing data to a file for a client Synopsis nfsd.write Values offset the offset within the file fh the file handle (the first part is the length of the file handle) vlen the number of blocks to write file the file argument; indicates whether the file has been opened client_ip the IP address of the client count the number of bytes to write size the number of bytes to write vec struct kvec, including the buffer address in kernel address space and the length of each buffer
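As a minimal illustration (a sketch, not part of the tapset reference; the output format is arbitrary), the probe can be attached directly from the command line:

# stap -e 'probe nfsd.write { printf("nfsd write: %d bytes at offset %d\n", count, offset) }'

This prints one line for each write request the NFS server handles, using the count and offset values listed above.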
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-nfsd-write
Chapter 3. Enhancements
Chapter 3. Enhancements The enhancements added in this release are outlined below. 3.1. Kafka 2.8.0 enhancements For an overview of the enhancements introduced with Kafka 2.8.0, refer to the Kafka 2.8.0 Release Notes . 3.2. Kafka Connect build configuration updates You can use build configuration so that AMQ Streams automatically builds a container image with the connector plugins you require for your data connections. A dedicated service account is now created with Kafka Connect build pods. The service account is distinct from Kafka Connect itself. Before this release, the build ran under the default service account. Having its own identity is useful when specifying authentication and access. Kafka Connect build now also works behind proxies if standard HTTP proxies ( HTTP_PROXY , HTTPS_PROXY , and NO_PROXY ) are set as environment variables for the AMQ Streams deployment. See: Creating a new container image automatically using AMQ Streams Build schema reference 3.3. Kubernetes Configuration Provider for external configuration data Use the Kubernetes Configuration Provider plugin to load configuration data from external sources. You can load data from OpenShift Secrets or ConfigMaps. The provider operates independently of AMQ Streams. It loads the data without needing to restart the Kafka component, even when using a new Secret or ConfigMap. You can use it to load configuration data for all Kafka components, including producers and consumers. Use it, for example, to provide the credentials for a Kafka Connect instance hosting multiple connectors without disruption. See Loading configuration values from external sources . 3.4. Log filters with markers If you are using a ConfigMap to configure the (log4j2) logging levels for AMQ Streams Operators, you can now define logging filters to limit what is returned in the log. You add the filter properties to the ConfigMap. The filters use markers to specify what to include in the log. You specify a kind, namespace, and name for the marker. For example, if a Kafka cluster is failing, you can isolate the logs by specifying the kind as Kafka , and use the namespace and name of the failing cluster. This example shows a marker filter for a Kafka cluster named my-kafka-cluster . Basic logging filter configuration appender.console.filter.filter1.type=MarkerFilter 1 appender.console.filter.filter1.onMatch=ACCEPT 2 appender.console.filter.filter1.onMismatch=DENY 3 appender.console.filter.filter1.marker=Kafka(my-namespace/my-kafka-cluster) 4 1 The MarkerFilter type compares a specified marker for filtering. 2 The onMatch property accepts the log if the marker matches. 3 The onMismatch property rejects the log if the marker does not match. 4 The marker used for filtering is in the format KIND(NAMESPACE/NAME-OF-RESOURCE) . See Adding logging filters to Operators . 3.5. OAuth 2.0 authentication enhancements Configure audience and scope You can now configure the clientAudience and clientScope properties when obtaining a token from the authorization server. The property values are passed to the token endpoint as audience and scope parameters. Both properties are configured in the OAuth 2.0 authentication listener configuration in the Kafka custom resource.
Use these properties in the following scenarios: When obtaining an access token for inter-broker authentication In the name of a client for OAuth 2.0 over PLAIN client authentication, using a clientId and secret Specifically, the audience and scope can now be included in the request when the PLAIN callback first exchanges the clientID (as the username) and the secret (as the password) with the authorization server in order to obtain an access token. These properties affect whether a client can obtain a token and the content of the token. They do not affect token validation rules imposed by the listener. Example configuration for clientAudience and clientScope properties # ... authentication: type: oauth # ... clientAudience: AUDIENCE clientScope: SCOPE Authorization servers sometimes provide aud (audience) claims in JWT access tokens. When audience checks are enabled, the Kafka broker rejects tokens that do not contain the broker's clientId in their aud claims. To enable audience checks, set the checkAudience option to true in the oauth listener configuration. Audience checks are disabled by default. See Configuring OAuth 2.0 support for Kafka brokers and KafkaListenerAuthenticationOAuth schema reference Specify audience for Kafka Connect and Kafka Bridge You can now specify the audience option when configuring OAuth client authentication for Kafka Connect or the Kafka Bridge in their respective custom resources. Previously, only the scope option was supported for these resources. See KafkaClientAuthenticationOAuth schema reference Token endpoint not required with OAuth 2.0 over PLAIN The tokenEndpointUri option is no longer required when using the "client ID and secret" method for OAuth 2.0 over PLAIN authentication. Example OAuth 2.0 over PLAIN configuration with token endpoint URI specified # ... authentication: type: oauth # ... enablePlain: true tokenEndpointUri: https://OAUTH-SERVER-ADDRESS/auth/realms/external/protocol/openid-connect/token If the tokenEndpointUri is not specified, the listener treats the: username parameter as the account name password parameter as the raw access token, which is passed to the authorization server for validation (the same behavior as for OAUTHBEARER authentication) The behavior of the "long-lived access token" method for OAuth 2.0 over PLAIN authentication is unchanged. The tokenEndpointUri is not required when using this method. See OAuth 2.0 authentication mechanisms 3.6. User quotas The handling of user quotas through the User Operator is no longer managed by ZooKeeper. Instead, user quotas are handled through the API. Additionally, support has been added for Kafka's mutation rate quota. This quota limits the number of partition mutations allowed per second. The quota prevents Kafka clusters from being overwhelmed by concurrent topic operations. The number of partition mutations includes the following types of user requests: Creating partitions for a new topic Adding partitions to an existing topic Deleting partitions from a topic You can configure a mutation rate quota to control the rate at which mutations are accepted for user requests. The rate is accumulated from the number of partitions created or deleted. Use the controllerMutationRate option to apply the quota to the Kafka user. In this example, 10 partition creation and deletion operations are allowed per second. Example KafkaUser configuration with user quotas apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: # ... 
quotas: #... controllerMutationRate: 10 1 See User quotas 3.7. Pause reconciliation of custom resources You can pause the reconciliation of custom resources managed by AMQ Streams operators to perform fixes or make updates. You can also pause reconciliation of custom resources you are creating. The custom resource is created, but it is ignored. You add an annotation to the custom resource to pause it. Example annotation for pausing reconciliation oc annotate KIND-OF-CUSTOM-RESOURCE NAME-OF-CUSTOM-RESOURCE strimzi.io/pause-reconciliation="true" It is now possible to pause reconciliation of KafkaTopic custom resources. See Pausing reconciliation of custom resources 3.8. Kafka Exporter update The custom version of Kafka Exporter that is provided with AMQ Streams has been updated to version 1.3.1. AMQ Streams includes an example Grafana dashboard for Kafka Exporter in the examples provided ( examples/metrics/grafana-dashboards/strimzi-kafka-exporter.json ). See Add Kafka Exporter 3.9. Kafka Connect build uses hashes to name download files You can configure a KafkaConnect resource to create a custom Kafka Connect container image. Using spec.build configuration automates the process. You configure plugins to specify the implementation artifacts and output to reference the container registry that stores the image. AMQ Streams downloads and adds the connector plugins into the new container image. The build process now uses the URL hash to name downloaded artifact files. Previously, it used the last segment of the download URL. If your plugin artifact requires a specific name, you can use a new other artifact type and its fileName field. Example naming of a plugin artifact apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: #... build: output: #... plugins: - name: my-plugin artifacts: - type: other url: https://my-domain.tld/my-other-file.ext sha512sum: 589...ab4 fileName: name-of-file.ext #... See Build schema reference
[ "appender.console.filter.filter1.type=MarkerFilter 1 appender.console.filter.filter1.onMatch=ACCEPT 2 appender.console.filter.filter1.onMismatch=DENY 3 appender.console.filter.filter1.marker=Kafka(my-namespace/my-kafka-cluster) 4", "authentication: type: oauth # clientAudience: AUDIENCE clientScope: SCOPE", "authentication: type: oauth # enablePlain: true tokenEndpointUri: https://OAUTH-SERVER-ADDRESS/auth/realms/external/protocol/openid-connect/token", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: # quotas: # controllerMutationRate: 10 1", "annotate KIND-OF-CUSTOM-RESOURCE NAME-OF-CUSTOM-RESOURCE strimzi.io/pause-reconciliation=\"true\"", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: # build: output: # plugins: - name: my-plugin artifacts: - type: other url: https://my-domain.tld/my-other-file.ext sha512sum: 589...ab4 fileName: name-of-file.ext #" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/release_notes_for_amq_streams_1.8_on_openshift/enhancements-str
Index
Index A arptables, Direct Routing Using arptables D direct routing and arptables, Direct Routing Using arptables and firewalld, Direct Routing Using firewalld F firewalld, Direct Routing Using firewalld FTP, Configuring FTP (see also Load Balancer ) H HAProxy, haproxy HAProxy and Keepalived, keepalived and haproxy J job scheduling, Keepalived , keepalived Scheduling Overview K Keepalived configuration, A Basic Keepalived configuration configuration file, Creating the keepalived.conf file initial configuration, Initial Load Balancer Configuration with Keepalived job scheduling, keepalived Scheduling Overview scheduling, job, keepalived Scheduling Overview Keepalived configuration Direct Routing, Keepalived Direct Routing Configuration keepalived daemon, keepalived keepalived.conf, Creating the keepalived.conf file Keepalived LVS routers primary node, Initial Load Balancer Configuration with Keepalived L least connections (see job scheduling, Keepalived ) Load Balancer direct routing and arptables, Direct Routing Using arptables and firewalld, Direct Routing Using firewalld requirements, hardware, Direct Routing , Load Balancer Using Direct Routing requirements, network, Direct Routing , Load Balancer Using Direct Routing requirements, software, Direct Routing , Load Balancer Using Direct Routing HAProxy, haproxy HAProxy and Keepalived, keepalived and haproxy Keepalived, A Basic Keepalived configuration , Keepalived Direct Routing Configuration keepalived daemon, keepalived multi-port services, Multi-port Services and Load Balancer FTP, Configuring FTP NAT routing requirements, hardware, The NAT Load Balancer Network requirements, network, The NAT Load Balancer Network requirements, software, The NAT Load Balancer Network packet forwarding, Turning on Packet Forwarding and Nonlocal Binding routing methods NAT, Routing Methods routing prerequisites, Configuring Network Interfaces for Load Balancer with NAT three-tier, A Three-Tier keepalived Load Balancer Configuration LVS NAT routing enabling, Enabling NAT Routing on the LVS Routers overview of, Load Balancer Overview real servers, Load Balancer Overview M multi-port services, Multi-port Services and Load Balancer (see also Load Balancer ) N NAT enabling, Enabling NAT Routing on the LVS Routers routing methods, Load Balancer , Routing Methods network address translation (see NAT) P packet forwarding, Turning on Packet Forwarding and Nonlocal Binding (see also Load Balancer) R real servers configuring services, Configuring Services on the Real Servers round robin (see job scheduling, Keepalived ) routing prerequisites for Load Balancer , Configuring Network Interfaces for Load Balancer with NAT S scheduling, job (Keepalived ), keepalived Scheduling Overview W weighted least connections (see job scheduling, Keepalived ) weighted round robin (see job scheduling, Keepalived )
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/load_balancer_administration/ix01
10.4. Managing Transport and SSL Settings Using Management CLI
10.4. Managing Transport and SSL Settings Using Management CLI To manage JBoss Data Virtualization transport settings, you can use the same commands as those used for the base JBoss Data Virtualization settings, specifying a particular transport in the command. For example: Available transport names are listed under transport when you run the following command to output current JBoss Data Virtualization settings:
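For example, to read the configuration of the jdbc transport (a sketch; jdbc is one of the default transport names, so substitute a name from your own settings output if it differs):

/subsystem=teiid/transport=jdbc:read-resource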
[ "/subsystem=teiid/transport= TRANSPORT_NAME :read-resource", "/subsystem=teiid:read-resource" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/administration_and_configuration_guide/managing_transport_and_ssl_settings_using_management_cli
Chapter 13. Introduction
Chapter 13. Introduction Abstract This chapter provides an overview of all the expression languages supported by Apache Camel. 13.1. Overview of the Languages Table of expression and predicate languages Table 13.1, "Expression and Predicate Languages" gives an overview of the different syntaxes for invoking expression and predicate languages. Table 13.1. Expression and Predicate Languages Language Static Method Fluent DSL Method XML Element Annotation Artifact See Bean Integration in the Apache Camel Development Guide on the customer portal. bean() EIP ().method() method @Bean Camel core Chapter 14, Constant constant() EIP ().constant() constant @Constant Camel core Chapter 15, EL el() EIP ().el() el @EL camel-juel Chapter 17, Groovy groovy() EIP ().groovy() groovy @Groovy camel-groovy Chapter 18, Header header() EIP ().header() header @Header Camel core Chapter 19, JavaScript javaScript() EIP ().javaScript() javaScript @JavaScript camel-script Chapter 20, JoSQL sql() EIP ().sql() sql @SQL camel-josql Chapter 21, JsonPath None EIP ().jsonpath() jsonpath @JsonPath camel-jsonpath Chapter 22, JXPath None EIP ().jxpath() jxpath @JXPath camel-jxpath Chapter 23, MVEL mvel() EIP ().mvel() mvel @MVEL camel-mvel Chapter 24, The Object-Graph Navigation Language (OGNL) ognl() EIP ().ognl() ognl @OGNL camel-ognl Chapter 25, PHP (DEPRECATED) php() EIP ().php() php @PHP camel-script Chapter 26, Exchange Property property() EIP ().property() property @Property Camel core Chapter 27, Python (DEPRECATED) python() EIP ().python() python @Python camel-script Chapter 28, Ref ref() EIP ().ref() ref N/A Camel core Chapter 29, Ruby (DEPRECATED) ruby() EIP ().ruby() ruby @Ruby camel-script Chapter 30, The Simple Language / Chapter 16, The File Language simple() EIP ().simple() simple @Simple Camel core Chapter 31, SpEL spel() EIP ().spel() spel @SpEL camel-spring Chapter 32, The XPath Language xpath() EIP ().xpath() xpath @XPath Camel core Chapter 33, XQuery xquery() EIP ().xquery() xquery @XQuery camel-saxon 13.2. How to Invoke an Expression Language Prerequisites Before you can use a particular expression language, you must ensure that the required JAR files are available on the classpath. If the language you want to use is not included in the Apache Camel core, you must add the relevant JARs to your classpath. If you are using the Maven build system, you can modify the build-time classpath simply by adding the relevant dependency to your POM file. For example, if you want to use the Groovy language, add the following dependency to your POM file: If you are going to deploy your application in a Red Hat Fuse OSGi container, you also need to ensure that the relevant language features are installed (features are named after the corresponding Maven artifact). For example, to use the Groovy language in the OSGi container, you must first install the camel-groovy feature by entering the following OSGi console command: Note If you are using an expression or predicate in the routes, refer to the value as an external resource by using resource:classpath:path or resource:file:path . For example, resource:classpath:com/foo/myscript.groovy . Camel on EAP deployment The camel-groovy component is supported by the Camel on EAP (Wildfly Camel) framework, which offers a simplified deployment model on the Red Hat JBoss Enterprise Application Platform (JBoss EAP) container.
Approaches to invoking As shown in Table 13.1, "Expression and Predicate Languages" , there are several different syntaxes for invoking an expression language, depending on the context in which it is used. You can invoke an expression language: As a static method As a fluent DSL method As an XML element As an annotation As a static method Most of the languages define a static method that can be used in any context where an org.apache.camel.Expression type or an org.apache.camel.Predicate type is expected. The static method takes a string expression (or predicate) as its argument and returns an Expression object (which is usually also a Predicate object). For example, to implement a content-based router that processes messages in XML format, you could route messages based on the value of the /order/address/countryCode element, as follows: As a fluent DSL method The Java fluent DSL supports another style of invoking expression languages. Instead of providing the expression as an argument to an Enterprise Integration Pattern (EIP), you can provide the expression as a sub-clause of the DSL command. For example, instead of invoking an XPath expression as filter(xpath(" Expression ")) , you can invoke the expression as filter().xpath(" Expression ") . For example, the preceding content-based router can be re-implemented in this style of invocation, as follows: As an XML element You can also invoke an expression language in XML, by putting the expression string inside the relevant XML element. For example, the XML element for invoking XPath in XML is xpath (which belongs to the standard Apache Camel namespace). You can use XPath expressions in an XML DSL content-based router, as follows: Alternatively, you can specify a language expression using the language element, where you specify the name of the language in the language attribute. For example, you can define an XPath expression using the language element as follows: As an annotation Language annotations are used in the context of bean integration . The annotations provide a convenient way of extracting information from a message or header and then injecting the extracted data into a bean's method parameters. For example, consider the bean, myBeanProc , which is invoked as a predicate of the filter() EIP. If the bean's checkCredentials method returns true , the message is allowed to proceed; but if the method returns false , the message is blocked by the filter. The filter pattern is implemented as follows: The implementation of the MyBeanProcessor class exploits the @XPath annotation to extract the username and password from the underlying XML message, as follows: The @XPath annotation is placed just before the parameter into which it gets injected. Notice how the XPath expression explicitly selects the text node, by appending /text() to the path, which ensures that just the content of the element is selected, not the enclosing tags. As a Camel endpoint URI Using the Camel Language component, you can invoke a supported language in an endpoint URI. There are two alternative syntaxes. To invoke a language script stored in a file (or other resource type defined by Scheme ), use the following URI syntax: Where the scheme can be file: , classpath: , or http: .
For example, the following route executes the mysimplescript.txt from the classpath: To invoke an embedded language script, use the following URI syntax: For example, to run the Simple language script stored in the script string: For more details about the Language component, see Language in the Apache Camel Component Reference Guide .
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-groovy</artifactId> <!-- Use the same version as your Camel core version --> <version>${camel.version}</version> </dependency>", "karaf@root> features:install camel-groovy", "from(\" SourceURL \") .choice .when( xpath(\"/order/address/countryCode = 'us'\") ) .to(\"file://countries/us/\") .when( xpath(\"/order/address/countryCode = 'uk'\") ) .to(\"file://countries/uk/\") .otherwise() .to(\"file://countries/other/\") .to(\" TargetURL \");", "from(\" SourceURL \") .choice .when(). xpath(\"/order/address/countryCode = 'us'\") .to(\"file://countries/us/\") .when(). xpath(\"/order/address/countryCode = 'uk'\") .to(\"file://countries/uk/\") .otherwise() .to(\"file://countries/other/\") .to(\" TargetURL \");", "<from uri=\"file://input/orders\"/> <choice> <when> <xpath>/order/address/countryCode = 'us'</xpath> <to uri=\"file://countries/us/\"/> </when> <when> <xpath>/order/address/countryCode = 'uk'</xpath> <to uri=\"file://countries/uk/\"/> </when> <otherwise> <to uri=\"file://countries/other/\"/> </otherwise> </choice>", "<language language=\"xpath\">/order/address/countryCode = 'us'</language>", "// Java MyBeanProcessor myBeanProc = new MyBeanProcessor(); from(\" SourceURL \") .filter().method(myBeanProc, \"checkCredentials\") .to(\" TargetURL \");", "// Java import org.apache.camel.language.XPath; public class MyBeanProcessor { boolean checkCredentials( @XPath(\"/credentials/username/text()\") String user, @XPath(\"/credentials/password/text()\") String pass ) { // Check the user/pass credentials } }", "language:// LanguageName :resource: Scheme : Location [? Options ]", "from(\"direct:start\") .to(\"language:simple:classpath:org/apache/camel/component/language/mysimplescript.txt\") .to(\"mock:result\");", "language:// LanguageName [: Script ][? Options ]", "String script = URLEncoder.encode(\"Hello ${body}\", \"UTF-8\"); from(\"direct:start\") .to(\"language:simple:\" + script) .to(\"mock:result\");" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_development_guide/intro
5.2.2. Block-Based Addressing
5.2.2. Block-Based Addressing Block-based addressing is much more straightforward than geometry-based addressing. With block-based addressing, every data block is given a unique number. This number is passed from the computer to the mass storage device, which then internally performs the conversion to the geometry-based address required by the device's control circuitry. Because the conversion to a geometry-based address is always done by the device itself, it is always consistent, eliminating the problem inherent in geometry-based addressing.
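To make the conversion concrete, the following sketch (illustrative only; it assumes a simple fixed geometry, which real devices do not necessarily use) shows how a device could translate a block number back into a cylinder/head/sector address:

def lba_to_chs(block, heads_per_cylinder, sectors_per_track):
    # Cylinder: how many full cylinders precede this block
    cylinder = block // (heads_per_cylinder * sectors_per_track)
    # Head: which track within the cylinder
    head = (block // sectors_per_track) % heads_per_cylinder
    # Sector: position within the track; CHS sector numbers are 1-based
    sector = block % sectors_per_track + 1
    return (cylinder, head, sector)

Because the same arithmetic is applied to every block number, two requests for the same block always resolve to the same physical location, which is the consistency this section describes.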
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s2-storage-data-addr-lba
8.103. linuxptp
8.103. linuxptp 8.103.1. RHBA-2013:1564 - linuxptp bug fix and enhancement update Updated linuxptp packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The Linux PTP project is a software implementation of the Precision Time Protocol (PTP) according to IEEE standard 1588 for Linux. These packages provide a robust implementation of the standard and use the most relevant and modern Application Programming Interfaces (API) offered by the Linux kernel. Supporting legacy APIs and other platforms is not a goal. Note The linuxptp package has been upgraded to upstream version 1.3, which provides a number of bug fixes and enhancements over the previous version. (BZ# 916787 ) Bug Fixes BZ# 910966 Previously, the ptp4l application did not limit the frequency correction of the clock. As a consequence, with some PTP clocks, when ptp4l was correcting a large offset, it could set the frequency correction to -100%, which effectively stopped the clock. This update adds a new option to configure the maximum allowed correction of the clock, which, by default, is 90%. As a result, the synchronized clock never stops unless ptp4l is allowed to adjust the clock by 100%. BZ# 910974 Previously, the phc2sys utility was not able to read information about the current Coordinated Universal Time (UTC) offset and pending leap seconds from the ptp4l application. As a consequence, the user had to specify the UTC offset manually and the leap seconds were not handled. This update adds a new option to phc2sys to wait for ptp4l to synchronize the PTP clock and to periodically read the current UTC offset and information about pending leap seconds. As a result, the phc2sys utility uses the correct UTC offset and leap seconds are handled properly. BZ# 991332 , BZ# 985531 Previously, the ptp4l application did not correctly check if a cached follow-up or sync message could be associated with a newly received sync or follow-up message. As a consequence, the messages could be associated incorrectly, which could result in a large offset and disturbed synchronization of the clock. The code which associates the sync and follow-up messages has been fixed. As a result, there are no longer disturbances in the synchronization. BZ# 991337 Previously, the ptp4l application did not reset the announce receipt timer for ports in the PASSIVE state when an announce message was received. As a consequence, the port in the PASSIVE state was repeatedly switching between PASSIVE and MASTER states. This bug has been fixed and the timer is now correctly reset with every announce message. As a result, the port stays in the PASSIVE state until it stops receiving announce messages. BZ# 966787 Previously, the ptp4l and phc2sys utilities did not check if the command line arguments and the values specified in the configuration file were valid. As a consequence, the utilities could terminate unexpectedly. The utilities now check whether the values are valid; if an invalid value is specified, they print an error message instead of terminating unexpectedly. Enhancement BZ# 977258 Occasionally, it is important that the system clock is not stepped, so as not to interfere with other programs running on the system. Previously, restarting the phc2sys application caused the clock to be stepped. A new option has been added to phc2sys that makes it possible to prevent phc2sys from stepping the clock.
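For reference, the new phc2sys behavior can be exercised from the command line; the interface name here is only an example:

phc2sys -s eth0 -w

The -w option makes phc2sys wait for ptp4l to synchronize the PTP clock and then query it for the current UTC offset and pending leap seconds, instead of requiring the offset to be supplied manually with the -O option.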
Users of linuxptp are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/linuxptp
Release notes for Eclipse Temurin 21.0.3
Release notes for Eclipse Temurin 21.0.3 Red Hat build of OpenJDK 21 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/release_notes_for_eclipse_temurin_21.0.3/index
Chapter 13. Managing User and Host Groups
Chapter 13. Managing User and Host Groups 13.1. How User and Host Groups Work in IdM 13.1.1. What User and Host Groups Are A user group is a set of users with common privileges, password policies, and other characteristics. A host group is a set of IdM hosts with common access control rules and other characteristics. For example, you can define groups around company departments, physical locations, or access control requirements. 13.1.2. Supported Group Members A user group in IdM can include: IdM users other IdM user groups external users, which are users that exist outside IdM A host group in IdM can include: IdM servers and clients other IdM host groups 13.1.3. Direct and Indirect Group Members User and host group attributes in IdM apply to both direct and indirect members: when group B is a member of group A, all users in group B are considered members of group A. For example, in Figure 13.1, "Direct and Indirect Group Membership" : User 1 and User 2 are direct members of group A. User 3, User 4, and User 5 are indirect members of group A. Figure 13.1. Direct and Indirect Group Membership If you set a password policy for user group A, the policy applies to all users in user group B as well. Example 13.1. Viewing Direct and Indirect Group Members Create two groups: group_A and group_B . See Section 13.2, "Adding and Removing User or Host Groups" . Add: one user as a member of group_A another user as a member of group_B group_B as a member of group_A See Section 13.3, "Adding and Removing User or Host Group Members" . In the web UI: Select Identity → Groups . From the individual group types, which are listed in a sidebar on the left, select User Groups , and click the name of group_A . Switch between Direct Membership and Indirect Membership . Figure 13.2. Indirect and Direct Members From the command line: Use the ipa group-show command: The list of indirect members does not include external users from trusted Active Directory domains. The Active Directory trust user objects are not visible in the IdM interface because they do not exist as LDAP objects within IdM. 13.1.4. User Group Types in IdM POSIX groups (the default) POSIX groups support POSIX attributes for their members. Note that groups that interact with Active Directory cannot use POSIX attributes. Non-POSIX groups All group members of this type of group must belong to the IdM domain. External groups External groups allow adding group members that exist in an identity store outside of the IdM domain. The external store can be a local system, an Active Directory domain, or a directory service. Non-POSIX and external groups do not support POSIX attributes. For example, these groups do not have a GID defined. Example 13.2. Searching for Different Types of User Groups Run the ipa group-find command to display all user groups. Run the ipa group-find --posix command to display all POSIX groups. Run the ipa group-find --nonposix command to display all non-POSIX groups. Run the ipa group-find --external command to display all external groups. 13.1.5. User and Host Groups Created by Default Table 13.1.
User and Host Groups Created by Default Group Name User or Host Default Group Members ipausers User group All IdM users admins User group Users with administrative privileges, initially the default admin user editors User group Users allowed to edit other IdM users in the web UI, without all the rights of an administrative user trust admins User group Users with privileges to manage Active Directory trusts ipaservers Host group All IdM server hosts Adding a user to a user group applies the privileges and policies associated with the group. For example, adding a user to the admins group grants the user administrative privileges. Warning Do not delete the admins group. As admins is a pre-defined group required by IdM, this operation causes problems with certain commands. Warning Be careful when adding hosts to the ipaservers host group. All hosts in ipaservers have the ability to promote themselves to an IdM server. In addition, IdM creates user private groups by default whenever a new user is created in IdM. The user private group has the same name as the user for which it was created. The user is the only member of the user private group. The GID of the private group matches the UID of the user. Example 13.3. Viewing User Private Groups Run the ipa group-find --private command to display all user private groups: In some situations, it is better to avoid creating user private groups, such as when a NIS group or another system group already uses the GID that would be assigned to the user private group. See Section 13.4, "Disabling User Private Groups" .
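For reference, the groups and memberships used in Example 13.1 can be created with the ipa utility; the user and group names are the example's placeholders:

ipa group-add group_A
ipa group-add group_B
ipa group-add-member group_A --users=user_1
ipa group-add-member group_B --users=user_2
ipa group-add-member group_A --groups=group_B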
[ "ipa group-show group_A Member users: user_1 Member groups: group_B Indirect Member users: user_2", "ipa group-find --private ---------------- 2 groups matched ---------------- Group name: user1 Description: User private group for user1 GID: 830400006 Group name: user2 Description: User private group for user2 GID: 830400004 ---------------------------- Number of entries returned 2 ----------------------------" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/chapter-user-groups
Chapter 13. Cephadm operations
Chapter 13. Cephadm operations As a storage administrator, you can carry out Cephadm operations in the Red Hat Ceph Storage cluster. 13.1. Monitor cephadm log messages Cephadm logs to the cephadm cluster log channel so you can monitor progress in real time. To monitor progress in real time, run the following command: Example Example By default, the log displays info-level events and above. To see the debug-level messages, run the following commands: Example Return debugging level to default info : Example To see the recent events, run the following command: Example These events are also logged to the ceph.cephadm.log file on the monitor hosts and to the monitor daemon's stderr. 13.2. Ceph daemon logs You can view the Ceph daemon logs through stderr or files. Logging to stdout Traditionally, Ceph daemons have logged to /var/log/ceph . By default, Cephadm daemons log to stderr and the logs are captured by the container runtime environment. For most systems, by default, these logs are sent to journald and accessible through the journalctl command. For example, to view the logs for the daemon on host01 for a storage cluster with ID 5c5a50ae-272a-455d-99e9-32c6a013e694 : Example This works well for normal Cephadm operations when logging levels are low. To disable logging to stderr , set the following values: Example Logging to files You can also configure Ceph daemons to log to files instead of stderr . When logging to files, Ceph logs are located in /var/log/ceph/ CLUSTER_FSID . To enable logging to files, set the following values: Example Note Red Hat recommends disabling logging to stderr to avoid double logs. Important Currently log rotation to a non-default path is not supported. By default, Cephadm sets up log rotation on each host to rotate these files. You can configure the logging retention schedule by modifying /etc/logrotate.d/ceph. CLUSTER_FSID . 13.3. Data location Cephadm daemon data and logs are located in slightly different locations than the older versions of Ceph: /var/log/ceph/ CLUSTER_FSID contains all the storage cluster logs. Note that by default Cephadm logs through stderr and the container runtime, so these logs are usually not present. /var/lib/ceph/ CLUSTER_FSID contains all the cluster daemon data, besides logs. /var/lib/ceph/ CLUSTER_FSID / DAEMON_NAME contains all the data for a specific daemon. /var/lib/ceph/ CLUSTER_FSID /crash contains the crash reports for the storage cluster. /var/lib/ceph/ CLUSTER_FSID /removed contains old daemon data directories for the stateful daemons, for example monitor or Prometheus, that have been removed by Cephadm. Disk usage A few Ceph daemons may store a significant amount of data in /var/lib/ceph , notably the monitors and the Prometheus daemon. Red Hat therefore recommends moving this directory to its own disk, partition, or logical volume so that the root file system is not filled up. 13.4. Cephadm custom config files Cephadm supports specifying miscellaneous configuration files for daemons. You must provide both the content of the configuration file and the location within the daemon's container where it should be mounted. A YAML spec that specifies the custom config files is applied. Cephadm redeploys the daemons for which the config files are specified. Then these files are mounted within the daemon's container at the specified location. You can apply a YAML spec with custom config files: Example You can mount the new config files within the containers for the daemons: Syntax Example
[ "ceph -W cephadm", "2022-06-10T17:51:36.335728+0000 mgr.Ceph5-1.nqikfh [INF] refreshing Ceph5-adm facts 2022-06-10T17:51:37.170982+0000 mgr.Ceph5-1.nqikfh [INF] deploying 1 monitor(s) instead of 2 so monitors may achieve consensus 2022-06-10T17:51:37.173487+0000 mgr.Ceph5-1.nqikfh [ERR] It is NOT safe to stop ['mon.Ceph5-adm']: not enough monitors would be available (Ceph5-2) after stopping mons [Ceph5-adm] 2022-06-10T17:51:37.174415+0000 mgr.Ceph5-1.nqikfh [INF] Checking pool \"nfs-ganesha\" exists for service nfs.foo 2022-06-10T17:51:37.176389+0000 mgr.Ceph5-1.nqikfh [ERR] Failed to apply nfs.foo spec NFSServiceSpec({'placement': PlacementSpec(count=1), 'service_type': 'nfs', 'service_id': 'foo', 'unmanaged': False, 'preview_only': False, 'pool': 'nfs-ganesha', 'namespace': 'nfs-ns'}): Cannot find pool \"nfs-ganesha\" for service nfs.foo Traceback (most recent call last): File \"/usr/share/ceph/mgr/cephadm/serve.py\", line 408, in _apply_all_services if self._apply_service(spec): File \"/usr/share/ceph/mgr/cephadm/serve.py\", line 509, in _apply_service config_func(spec) File \"/usr/share/ceph/mgr/cephadm/services/nfs.py\", line 23, in config self.mgr._check_pool_exists(spec.pool, spec.service_name()) File \"/usr/share/ceph/mgr/cephadm/module.py\", line 1840, in _check_pool_exists raise OrchestratorError(f'Cannot find pool \"{pool}\" for ' orchestrator._interface.OrchestratorError: Cannot find pool \"nfs-ganesha\" for service nfs.foo 2022-06-10T17:51:37.179658+0000 mgr.Ceph5-1.nqikfh [INF] Found osd claims -> {} 2022-06-10T17:51:37.180116+0000 mgr.Ceph5-1.nqikfh [INF] Found osd claims for drivegroup all-available-devices -> {} 2022-06-10T17:51:37.182138+0000 mgr.Ceph5-1.nqikfh [INF] Applying all-available-devices on host Ceph5-adm 2022-06-10T17:51:37.182987+0000 mgr.Ceph5-1.nqikfh [INF] Applying all-available-devices on host Ceph5-1 2022-06-10T17:51:37.183395+0000 mgr.Ceph5-1.nqikfh [INF] Applying all-available-devices on host Ceph5-2 2022-06-10T17:51:43.373570+0000 mgr.Ceph5-1.nqikfh [INF] Reconfiguring node-exporter.Ceph5-1 (unknown last config time) 2022-06-10T17:51:43.373840+0000 mgr.Ceph5-1.nqikfh [INF] Reconfiguring daemon node-exporter.Ceph5-1 on Ceph5-1", "ceph config set mgr mgr/cephadm/log_to_cluster_level debug ceph -W cephadm --watch-debug ceph -W cephadm --verbose", "ceph config set mgr mgr/cephadm/log_to_cluster_level info", "ceph log last cephadm", "journalctl -u ceph-5c5a50ae-272a-455d-99e9-32c6a013e694@host01", "ceph config set global log_to_stderr false ceph config set global mon_cluster_log_to_stderr false", "ceph config set global log_to_file true ceph config set global mon_cluster_log_to_file true", "service_type: grafana service_name: grafana custom_configs: - mount_path: /etc/example.conf content: | setting1 = value1 setting2 = value2 - mount_path: /usr/share/grafana/example.cert content: | -----BEGIN PRIVATE KEY----- V2VyIGRhcyBsaWVzdCBpc3QgZG9vZi4gTG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNldGV0dXIgc2FkaXBzY2luZyBlbGl0ciwgc2VkIGRpYW0gbm9udW15IGVpcm1vZCB0ZW1wb3IgaW52aWR1bnQgdXQgbGFib3JlIGV0IGRvbG9yZSBtYWduYSBhbGlxdXlhbSBlcmF0LCBzZWQgZGlhbSB2b2x1cHR1YS4gQXQgdmVybyBlb3MgZXQgYWNjdXNhbSBldCBqdXN0byBkdW8= -----END PRIVATE KEY----- -----BEGIN CERTIFICATE----- V2VyIGRhcyBsaWVzdCBpc3QgZG9vZi4gTG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNldGV0dXIgc2FkaXBzY2luZyBlbGl0ciwgc2VkIGRpYW0gbm9udW15IGVpcm1vZCB0ZW1wb3IgaW52aWR1bnQgdXQgbGFib3JlIGV0IGRvbG9yZSBtYWduYSBhbGlxdXlhbSBlcmF0LCBzZWQgZGlhbSB2b2x1cHR1YS4gQXQgdmVybyBlb3MgZXQgYWNjdXNhbSBldCBqdXN0byBkdW8= -----END 
CERTIFICATE-----", "ceph orch redeploy SERVICE_NAME", "ceph orch redeploy grafana" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/administration_guide/cephadm-operations
Chapter 1. Preparing to install on Nutanix
Chapter 1. Preparing to install on Nutanix Before you install an OpenShift Container Platform cluster, be sure that your Nutanix environment meets the following requirements. 1.1. Nutanix version requirements You must install the OpenShift Container Platform cluster to a Nutanix environment that meets the following requirements. Table 1.1. Version requirements for Nutanix virtual environments Component Required version Nutanix AOS 6.5.2.7 or later Prism Central pc.2022.6 or later 1.2. Environment requirements Before you install an OpenShift Container Platform cluster, review the following Nutanix AOS environment requirements. 1.2.1. Infrastructure requirements You can install OpenShift Container Platform on on-premise Nutanix clusters, Nutanix Cloud Clusters (NC2) on Amazon Web Services (AWS), or NC2 on Microsoft Azure. For more information, see Nutanix Cloud Clusters on AWS and Nutanix Cloud Clusters on Microsoft Azure . 1.2.2. Required account privileges The installation program requires access to a Nutanix account with the necessary permissions to deploy the cluster and to maintain the daily operation of it. The following options are available to you: You can use a local Prism Central user account with administrative privileges. Using a local account is the quickest way to grant access to an account with the required permissions. If your organization's security policies require that you use a more restrictive set of permissions, use the permissions that are listed in the following table to create a custom Cloud Native role in Prism Central. You can then assign the role to a user account that is a member of a Prism Central authentication directory. Consider the following when managing this user account: When assigning entities to the role, ensure that the user can access only the Prism Element and subnet that are required to deploy the virtual machines. Ensure that the user is a member of the project to which it needs to assign virtual machines. For more information, see the Nutanix documentation about creating a Custom Cloud Native role , assigning a role , and adding a user to a project . Example 1.1. Required permissions for creating a Custom Cloud Native role Nutanix Object When required Required permissions in Nutanix API Description Categories Always Create_Category_Mapping Create_Or_Update_Name_Category Create_Or_Update_Value_Category Delete_Category_Mapping Delete_Name_Category Delete_Value_Category View_Category_Mapping View_Name_Category View_Value_Category Create, read, and delete categories that are assigned to the OpenShift Container Platform machines. Images Always Create_Image Delete_Image View_Image Create, read, and delete the operating system images used for the OpenShift Container Platform machines. Virtual Machines Always Create_Virtual_Machine Delete_Virtual_Machine View_Virtual_Machine Create, read, and delete the OpenShift Container Platform machines. Clusters Always View_Cluster View the Prism Element clusters that host the OpenShift Container Platform machines. Subnets Always View_Subnet View the subnets that host the OpenShift Container Platform machines. Projects If you will associate a project with compute machines, control plane machines, or all machines. View_Project View the projects defined in Prism Central and allow a project to be assigned to the OpenShift Container Platform machines. 1.2.3. Cluster limits Available resources vary between clusters. 
The number of possible clusters within a Nutanix environment is limited primarily by available storage space and any limitations associated with the resources that the cluster creates, and resources that you require to deploy the cluster, such as IP addresses and networks. 1.2.4. Cluster resources A minimum of 800 GB of storage is required to use a standard cluster. When you deploy an OpenShift Container Platform cluster that uses installer-provisioned infrastructure, the installation program must be able to create several resources in your Nutanix instance. Although these resources use 856 GB of storage, the bootstrap node is destroyed as part of the installation process. A standard OpenShift Container Platform installation creates the following resources: 1 label Virtual machines: 1 disk image 1 temporary bootstrap node 3 control plane nodes 3 compute machines 1.2.5. Networking requirements You must use either AHV IP Address Management (IPAM) or Dynamic Host Configuration Protocol (DHCP) for the network and ensure that it is configured to provide persistent IP addresses to the cluster machines. Additionally, create the following networking resources before you install the OpenShift Container Platform cluster: IP addresses DNS records Nutanix Flow Virtual Networking is supported for new cluster installations. To use this feature, enable Flow Virtual Networking on your AHV cluster before installing. For more information, see Flow Virtual Networking overview . Note It is recommended that each OpenShift Container Platform node in the cluster have access to a Network Time Protocol (NTP) server that is discoverable via DHCP. Installation is possible without an NTP server. However, an NTP server prevents errors typically associated with asynchronous server clocks. 1.2.5.1. Required IP Addresses An installer-provisioned installation requires two static virtual IP (VIP) addresses: A VIP address for the API is required. This address is used to access the cluster API. A VIP address for ingress is required. This address is used for cluster ingress traffic. You specify these IP addresses when you install the OpenShift Container Platform cluster. 1.2.5.2. DNS records You must create DNS records for two static IP addresses in the appropriate DNS server for the Nutanix instance that hosts your OpenShift Container Platform cluster. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify when you install the cluster. If you use your own DNS or DHCP server, you must also create records for each node, including the bootstrap, control plane, and compute nodes. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 1.2. Required DNS records Component Record Description API VIP api.<cluster_name>.<base_domain>. This DNS A/AAAA or CNAME record must point to the load balancer for the control plane machines. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. Ingress VIP *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that points to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. 1.3. Configuring the Cloud Credential Operator utility The Cloud Credential Operator (CCO) manages cloud provider credentials as Kubernetes custom resource definitions (CRDs).
To install a cluster on Nutanix, you must set the CCO to manual mode as part of the installation process. To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). Procedure Set a variable for the OpenShift Container Platform release image by running the following command: $ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: $ CCO_IMAGE=$(oc adm release info --image-for='cloud-credential-operator' $RELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the $RELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: $ oc image extract $CCO_IMAGE \ --file="/usr/bin/ccoctl.<rhel_version>" \ 1 -a ~/.pull-secret 1 For <rhel_version> , specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid: rhel8 : Specify this value for hosts that use RHEL 8. rhel9 : Specify this value for hosts that use RHEL 9. Change the permissions to make ccoctl executable by running the following command: $ chmod 775 ccoctl.<rhel_version> Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: $ ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. Additional resources Preparing to update a cluster with manually maintained credentials
[ "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret", "chmod 775 ccoctl.<rhel_version>", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for {ibm-cloud-title} nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command." ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_nutanix/preparing-to-install-on-nutanix
7.9. Setting up a Standalone KRA or OCSP
7.9. Setting up a Standalone KRA or OCSP This section describes how to install a standalone KRA and OCSP. A standalone installation provides the flexibility to use a non-Certificate System CA to issue the certificates, because the CSRs generated during the installation are not automatically submitted to the CA and imported into the subsystem. Additionally, a KRA or an OCSP installed in standalone mode is not part of the CA's security domain, and the connector in the CA for key archival will not be configured. To install a standalone KRA or OCSP: Create a configuration file, such as /root/config.txt , with the following content: For a standalone KRA, add the following section to the configuration file: For a standalone OCSP, add the following section to the configuration file: To use an LDAPS connection to Directory Server running on the same host, add the following parameters to the DEFAULT section in the configuration file: Note For security reasons, Red Hat recommends using an encrypted connection to Directory Server. If you use a self-signed certificate in Directory Server, use the following command to export it from the Directory Server's Network Security Services (NSS) database: Proceed with the steps described in the section called "Starting the Installation of a Subsystem with an External CA" .
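With the configuration file prepared, the standalone installation is typically started with the pkispawn utility; a hedged sketch for the KRA case, assuming the file was saved as /root/config.txt as suggested above: $ pkispawn -s KRA -f /root/config.txt For a standalone OCSP, substitute -s OCSP . The full two-step flow is described in the referenced section on starting the installation of a subsystem with an external CA.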
[ "[ DEFAULT ] pki_admin_password= password pki_client_database_password= password pki_client_pkcs12_password= password pki_ds_password= password pki_token_password= password pki_client_database_purge=False pki_security_domain_name= EXAMPLE pki_standalone=True pki_external_step_two=False", "[KRA] pki_admin_email= [email protected] pki_ds_base_dn= dc=kra,dc=example,dc=com pki_ds_database= kra pki_admin_nickname= kraadmin pki_audit_signing_nickname= kra_audit_signing pki_sslserver_nickname= sslserver pki_storage_nickname= kra_storage pki_subsystem_nickname= subsystem pki_transport_nickname= kra_transport pki_standalone=True", "[OCSP] pki_admin_email= [email protected] pki_ds_base_dn= dc=ocsp,dc=example,dc=com pki_ds_database= ocsp pki_admin_nickname= ocspadmin pki_audit_signing_nickname= ocsp_audit_signing pki_ocsp_signing_nickname= ocsp_signing pki_sslserver_nickname= sslserver pki_subsystem_nickname= subsystem pki_standalone=True", "pki_ds_secure_connection=True pki_ds_secure_connection_ca_pem_file= path_to_CA_or_self-signed_certificate", "certutil -L -d /etc/dirsrv/slapd- instance_name / -n \"server-cert\" -a -o /root/ds.crt" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/setting_up_a_standalone_kra_or_ocsp
2.8. XML Document Generation
2.8. XML Document Generation Important The features in this section are deprecated. They will no longer be supported in a future release. 2.8.1. XML Document Generation XML documents can be constructed dynamically using XML Document Models. A document model is generally created from a schema. The document model is bound to relevant SQL statements through mapping classes. See the Red Hat JBoss Data Virtualization User Guide for more information about creating document models. Querying XML documents is similar to querying relational tables. An idiomatic SQL variant with special scalar functions provides control over which parts of a given document to return. Note XML documents may also be created via XQuery with the XMLQuery function or with various other SQL/XML functions. 2.8.2. XML SELECT Command A valid XML SELECT Command against a document model is of the form: SELECT ... FROM ... [WHERE ...] [ORDER BY ...] The use of any other SELECT clause is not allowed. The fully qualified name for an XML element is: The fully qualified name for an attribute is: Partially qualified names for elements and attributes can be used as long as the partial name is unique. 2.8.3. XML SELECT: FROM Clause This clause specifies the document to generate. Document names resemble other virtual groups - "model"."document name" . Syntax Rules: The FROM clause must contain only one unary clause specifying the desired document. 2.8.4. XML SELECT: SELECT Clause The SELECT clause determines which parts of the XML document are generated for output. Example Syntax: select * from model.doc select model.doc.root.parent.element.* from model.doc select element, element1.@attribute from model.doc Syntax Rules: SELECT * and SELECT "xml" are equivalent and specify that every element and attribute of the document should be output. The SELECT clause of an XML Query may only contain *, "xml", or element and attribute references from the specified document. Any other expressions are not allowed. If the SELECT clause contains an element or attribute reference (other than * or "xml") then only the specified elements, attributes, and their ancestor elements will be in the generated document. element.* specifies that the element, its attributes, and all child content should be output. 2.8.5. XML SELECT: WHERE Clause The WHERE clause specifies how to filter content from the generated document based upon values contained in the underlying mapping classes. Most predicates are valid in an XML SELECT Command; however, combining value references from different parts of the document may not always be allowed. Criteria is logically applied to a context which is directly related to a mapping class. Starting with the root mapping class, there is a root context that describes all of the top level repeated elements that will be in the output document. Criteria applied to the root or any other context will change the related mapping class query to apply the effects of the criteria, which can include checking values from any of the descendant mapping classes. Example Syntax: select element, element1.@attribute from model.doc where element1.@attribute = 1 select element, element1.@attribute from model.doc where context(element1, element1.@attribute) = 1 Syntax Rules: Each criteria conjunct must refer to a single context and can be criteria that applies to a mapping class, contain a rowlimit function, or contain a rowlimitexception function. Refer to Section 2.8.9, "ROWLIMIT Function" and Section 2.8.10, "ROWLIMITEXCEPTION Function" .
Criteria that applies to a mapping class is associated with that mapping class using the context function. The absence of a context function implies the criteria applies to the root context. Refer to Section 2.8.8, "CONTEXT Function" . At a given context the criteria can span multiple mapping classes provided that all mapping classes involved are either parents of the context, the context itself, or a descendant of the context. Note Implied root context user criteria against a document model with sibling root mapping classes is not generally semantically correct. It is applied as if each of the conjuncts is applied to only a single root mapping class. This behavior is the same as prior releases but may be fixed in a future release. 2.8.6. XML SELECT: ORDER BY Clause The XML SELECT Command ORDER BY clause specifies ordering for the referenced mapping class queries. Syntax Rules: Each ORDER BY item must be an element or attribute reference tied to an output value from a mapping class. The order of the ORDER BY items is the relative order applied to their respective mapping classes. 2.8.7. XML SELECT Command Specific Functions XML SELECT Command functions resemble scalar functions, but act as hints in the WHERE clause: CONTEXT Function ROWLIMIT Function ROWLIMITEXCEPTION Function These functions are only valid in an XML SELECT Command. 2.8.8. CONTEXT Function This function selects the context for the containing conjunct. CONTEXT(arg1, arg2) Syntax Rules: Context functions apply to the whole conjunct. The first argument must be an element or attribute reference from the mapping class whose context the criteria conjunct will apply to. The second parameter is the return value for the function. 2.8.9. ROWLIMIT Function This function limits the rows processed for the given context. ROWLIMIT(arg) Syntax Rules: The first argument must be an element or attribute reference from the mapping class to whose context the row limit applies. The ROWLIMIT function must be used in equality comparison criteria with the right hand expression equal to a positive integer number of rows to limit. Only one row limit or row limit exception may apply to a given context. 2.8.10. ROWLIMITEXCEPTION Function This function limits the rows processed for the given context and throws an exception if the given number of rows is exceeded. ROWLIMITEXCEPTION(arg) Syntax Rules: The first argument must be an element or attribute reference from the mapping class to whose context the row limit exception applies. The ROWLIMITEXCEPTION function must be used in equality comparison criteria with the right hand expression equal to a positive integer number of rows to limit. Only one row limit or row limit exception may apply to a given context. 2.8.11. Document Generation Document generation starts with the root mapping class and proceeds iteratively and hierarchically over all of the child mapping classes. This can result in a large number of query executions. For example, if a document has a root mapping class with 3 child mapping classes, then for each row selected by the root mapping class after the application of the root context criteria, each of the child mapping class queries will also be executed. Note By default, XML generated by XML documents is not checked for correctness against the relevant schema. It is possible that the mapping class queries or the usage of specific SELECT or WHERE clause values will generate a document that is not valid with respect to the schema.
Refer to Section 2.8.12, "Document Validation" for information to ensure correctness. Sibling or cousin elements defined by the same mapping class that do not have a common parent in that mapping class will be treated as independent mapping classes during planning and execution. This allows for a more document-centric approach when applying WHERE criteria and ORDER BY clauses to mapping classes. 2.8.12. Document Validation If the execution property XMLValidation is set to 'true', generated documents will be checked for correctness. However, correctness checking will not prevent invalid documents from being generated, since correctness is checked after generation.
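Putting the clauses together, a hedged end-to-end example in the same idiom as the samples above; the document, element, and attribute names are illustrative: select element, element1.@attribute from model.doc where context(element1, element1.@attribute) = 1 and rowlimit(element1) = 10 order by element Here the first conjunct applies criteria at the element1 context, the second caps the rows processed for that context at 10, and the ORDER BY item orders output by the mapping class value tied to element.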
[ "SELECT ... FROM ... [WHERE ...] [ORDER BY ...]", "\"model\".\"document name\".[path to element].\"element name\"", "\"model\".\"document name\".[path to element].\"element name\".[@]\"attribute name\"" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/sect-XML_Document_Generation
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation If you have a suggestion to improve this documentation, or find an error, you can contact technical support at https://access.redhat.com to open a request. Important Disclaimer : Links contained in this document to external websites are provided for convenience only. Red Hat has not reviewed the links and is not responsible for the content or its availability. The inclusion of any link to an external website does not imply endorsement by Red Hat of the website or its entities, products, or services. You agree that Red Hat is not responsible or liable for any loss or expenses that may result from your use of (or reliance on) the external site or content.
null
https://docs.redhat.com/en/documentation/ansible_on_clouds/2.x/html/red_hat_ansible_automation_platform_service_on_aws/providing-feedback
Chapter 5. Installing a cluster with RHEL KVM on IBM Z and IBM(R) LinuxONE in a restricted network
Chapter 5. Installing a cluster with RHEL KVM on IBM Z and IBM(R) LinuxONE in a restricted network In OpenShift Container Platform version 4.12, you can install a cluster on IBM Z or IBM(R) LinuxONE infrastructure that you provision in a restricted network. Note While this document refers to only IBM Z, all information in it also applies to IBM(R) LinuxONE. Important Additional considerations exist for non-bare metal platforms. Review the information in the guidelines for deploying OpenShift Container Platform on non-tested platforms before you install an OpenShift Container Platform cluster. 5.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You created a registry on your mirror host and obtained the imageContentSources data for your version of OpenShift Container Platform. You must move or remove any existing installation files before you begin the installation process. This ensures that the required installation files are created and updated during the installation process. Important Ensure that installation steps are done from a machine with access to the installation media. You provisioned persistent storage using OpenShift Data Foundation or other supported storage protocols for your cluster. To deploy a private image registry, you must set up persistent storage with ReadWriteMany access. If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. You provisioned a RHEL Kernel Virtual Machine (KVM) system that is hosted on the logical partition (LPAR) and based on RHEL 8.4 or later. See Red Hat Enterprise Linux 8 and 9 Life Cycle . 5.2. About installations in restricted networks In OpenShift Container Platform 4.12, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Services' Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. Important Because of the complexity of the configuration for user-provisioned installations, consider completing a standard user-provisioned infrastructure installation before you attempt a restricted network installation using user-provisioned infrastructure. Completing this test installation might make it easier to isolate and troubleshoot any issues that might arise during your installation in a restricted network. 5.2.1.
Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 5.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.12, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 5.4. Machine requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. One or more KVM host machines based on RHEL 8.4 or later. Each RHEL KVM host machine must have libvirt installed and running. The virtual machines are provisioned under each RHEL KVM host machine. 5.4.1. Required machines The smallest OpenShift Container Platform clusters require the following hosts: Table 5.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To improve high availability of your cluster, distribute the control plane machines over different RHEL instances on at least two physical machines. The bootstrap, control plane, and compute machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. See Red Hat Enterprise Linux technology capabilities and limits . 5.4.2. Network connectivity requirements The OpenShift Container Platform installer creates the Ignition files, which are necessary for all the Red Hat Enterprise Linux CoreOS (RHCOS) virtual machines. The automated installation of OpenShift Container Platform is performed by the bootstrap machine. It starts the installation of OpenShift Container Platform on each node, starts the Kubernetes cluster, and then finishes. During this bootstrap, the virtual machine must have an established network connection either through a Dynamic Host Configuration Protocol (DHCP) server or static IP address. 5.4.3. 
IBM Z network connectivity requirements To install on IBM Z under RHEL KVM, you need: A RHEL KVM host configured with an OSA or RoCE network adapter. Either a RHEL KVM host that is configured to use bridged networking in libvirt or MacVTap to connect the network to the guests. See Types of virtual network connections . 5.4.4. Host machine resource requirements The RHEL KVM host in your environment must meet the following requirements to host the virtual machines that you plan for the OpenShift Container Platform environment. See Getting started with virtualization . You can install OpenShift Container Platform version 4.12 on the following IBM hardware: IBM z16 (all models), IBM z15 (all models), IBM z14 (all models), IBM z13, and IBM z13s IBM(R) LinuxONE Emperor 4, IBM(R) LinuxONE III (all models), IBM(R) LinuxONE Emperor II, IBM(R) LinuxONE Rockhopper II, IBM(R) LinuxONE Emperor, and IBM(R) LinuxONE Rockhopper Note Support for RHCOS functionality for IBM z13 all models, IBM(R) LinuxONE Emperor, and IBM(R) LinuxONE Rockhopper is deprecated. These hardware models remain fully supported in OpenShift Container Platform 4.12. However, Red Hat recommends that you use later hardware models. 5.4.5. Minimum IBM Z system environment Hardware requirements The equivalent of six Integrated Facilities for Linux (IFL), which are SMT2 enabled, for each cluster. At least one network connection to both connect to the LoadBalancer service and to serve data for traffic outside the cluster. Note You can use dedicated or shared IFLs to assign sufficient compute resources. Resource sharing is one of the key strengths of IBM Z. However, you must adjust capacity correctly on each hypervisor layer and ensure sufficient resources for every OpenShift Container Platform cluster. Important Since the overall performance of the cluster can be impacted, the LPARs that are used to set up the OpenShift Container Platform clusters must provide sufficient compute capacity. In this context, LPAR weight management, entitlements, and CPU shares on the hypervisor level play an important role. Operating system requirements One LPAR running on RHEL 8.4 or later with KVM, which is managed by libvirt On your RHEL KVM host, set up: Three guest virtual machines for OpenShift Container Platform control plane machines Two guest virtual machines for OpenShift Container Platform compute machines One guest virtual machine for the temporary OpenShift Container Platform bootstrap machine 5.4.6. Minimum resource requirements Each cluster virtual machine must meet the following minimum requirements: Virtual Machine Operating System vCPU [1] Virtual RAM Storage IOPS Bootstrap RHCOS 4 16 GB 100 GB N/A Control plane RHCOS 4 16 GB 100 GB N/A Compute RHCOS 2 8 GB 100 GB N/A One physical core (IFL) provides two logical cores (threads) when SMT-2 is enabled. The hypervisor can provide two or more vCPUs. 5.4.7. Preferred IBM Z system environment Hardware requirements Three LPARs that each have the equivalent of six IFLs, which are SMT2 enabled, for each cluster. Two network connections to both connect to the LoadBalancer service and to serve data for traffic outside the cluster. Operating system requirements For high availability, two or three LPARs running on RHEL 8.4 or later with KVM, which are managed by libvirt. On your RHEL KVM host, set up: Three guest virtual machines for OpenShift Container Platform control plane machines, distributed across the RHEL KVM host machines.
At least six guest virtual machines for OpenShift Container Platform compute machines, distributed across the RHEL KVM host machines. One guest virtual machine for the temporary OpenShift Container Platform bootstrap machine. To ensure the availability of integral components in an overcommitted environment, increase the priority of the control plane by using cpu_shares . Do the same for infrastructure nodes, if they exist. See schedinfo in IBM Documentation. 5.4.8. Preferred resource requirements The preferred requirements for each cluster virtual machine are: Virtual Machine Operating System vCPU Virtual RAM Storage Bootstrap RHCOS 4 16 GB 120 GB Control plane RHCOS 8 16 GB 120 GB Compute RHCOS 6 8 GB 120 GB 5.4.9. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. Additional resources Recommended host practices for IBM Z & IBM(R) LinuxONE environments 5.4.10. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines. Note If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 5.4.10.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. 
If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 5.4.10.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Table 5.2. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 5.3. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 5.4. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines reads the information and can sync the clock with the NTP servers. Additional resources Configuring chrony time service 5.4.11. User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate.
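Because approving those CSRs is a manual task on user-provisioned infrastructure (see the certificate signing requests management section above), it helps to know the shape of that workflow; a hedged sketch once the cluster API is reachable, where <csr_name> is a placeholder: $ oc get csr $ oc adm certificate approve <csr_name> The first command lists pending requests and the second approves a specific one.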
The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 5.5. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Control plane machines <control_plane><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <compute><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 5.4.11.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 5.1. Sample DNS zone database $TTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com.
IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 5.2. Sample DNS zone database for reverse records $TTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. 5.4.12. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application ingress load balancing infrastructure. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation.
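A quick sanity check of zone files like the two examples above can catch syntax errors before you point the cluster at them. A sketch using the named-checkzone utility that ships with BIND; the file path is an assumption about where you saved the forward zone: $ named-checkzone ocp4.example.com /var/named/ocp4.example.com.db The command prints OK when the zone loads cleanly.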
Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 5.6. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 5.7. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 5.4.12.1. 
Example load balancer configuration for user-provisioned clusters This section provides an example API and application ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 5.3. Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 3 Port 22623 handles the machine config server traffic and points to the control plane machines. 5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 
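Whenever you adapt a configuration like the sample above, you can ask HAProxy to validate it before reloading the service; for example: $ haproxy -c -f /etc/haproxy/haproxy.cfg The -c flag runs a configuration check only, reporting whether the file is valid without starting the load balancer.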
Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 5.5. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. This includes configuring IP networking and network connectivity for your cluster nodes, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure If you are using DHCP to provide the IP networking configuration to your cluster nodes, configure your DHCP service. Add persistent IP addresses for the nodes to your DHCP server configuration. In your configuration, match the MAC address of the relevant network interface to the intended IP address for each node. When you use DHCP to configure IP addressing for the cluster machines, the machines also obtain the DNS server information through DHCP. Define the persistent DNS server address that is used by the cluster nodes through your DHCP server configuration. Note If you are not using a DHCP service, you must provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. Define the hostnames of your cluster nodes in your DHCP server configuration. See the Setting the cluster node hostnames through DHCP section for details about hostname considerations. Note If you are not using a DHCP service, the cluster nodes obtain their hostname through a reverse DNS lookup. Choose to perform either a fast track installation of Red Hat Enterprise Linux CoreOS (RHCOS) or a full installation of Red Hat Enterprise Linux CoreOS (RHCOS). For the full installation, you must set up an HTTP or HTTPS server to provide Ignition files and install images to the cluster nodes. For the fast track installation an HTTP or HTTPS server is not required, however, a DHCP server is required. See sections "Fast-track installation: Creating Red Hat Enterprise Linux CoreOS (RHCOS) machines" and "Full installation: Creating Red Hat Enterprise Linux CoreOS (RHCOS) machines". Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See Networking requirements for user-provisioned infrastructure section for details about the ports that are required. 
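If your hosts run firewalld, a hedged sketch of opening some of the cluster-facing ports described in the requirements tables; adjust the port list and zone to your topology: $ sudo firewall-cmd --permanent --add-port=6443/tcp --add-port=22623/tcp --add-port=443/tcp --add-port=80/tcp $ sudo firewall-cmd --reload The first command persists the rules and the second applies them to the running firewall.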
Important By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers. Set up the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. 5.6. Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure. Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer: $ dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 604800 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer: $ dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com. 604800 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: $ dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic.
In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console: $ dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name. Check that the result points to the IP address of the bootstrap node: $ dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain> Example output bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: $ dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node: $ dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output 96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. 5.7. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one.
For example, on a computer that uses a Linux operating system, run the following command: $ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: $ cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: $ cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: $ eval "$(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : $ ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 5.8. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: $ mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now. Additional resources Installation configuration parameters for IBM Z 5.8.1.
Sample install-config.yaml file for IBM Z You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 additionalTrustBundle: | 17 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 18 - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Specifies whether to enable or disable simultaneous multithreading (SMT), or hyperthreading. By default, SMT is enabled to increase the performance of the cores in your machines. You can disable it by setting the parameter value to Disabled . If you disable SMT, you must disable it in all cluster machines; this includes both control plane and compute machines. Note Simultaneous multithreading (SMT) is enabled by default. If SMT is not available on your OpenShift Container Platform nodes, the hyperthreading parameter has no effect. Important If you disable hyperthreading , whether on your OpenShift Container Platform nodes or in the install-config.yaml file, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned infrastructure. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. In user-provisioned installations, you must manually deploy the compute machines before you finish installing the cluster. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 7 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note Class E CIDR range is reserved for a future use. 
To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 10 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 11 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 12 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 13 You must set the platform to none . You cannot provide additional platform configuration variables for IBM Z infrastructure. Important Clusters that are installed with the platform type none are unable to use some features, such as managing compute machines with the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that would normally support the feature. This parameter cannot be changed after installation. 14 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 , ppc64le , and s390x architectures. 15 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. 16 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 17 Add the additionalTrustBundle parameter and value. The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority or the self-signed certificate that you generated for the mirror registry. 18 Provide the imageContentSources section from the output of the command to mirror the repository. 5.8.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. 
You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 5.8.3. Configuring a three-node cluster Optionally, you can deploy zero compute machines in a minimal three node cluster that consists of three control plane machines only. 
This provides smaller, more resource efficient clusters for cluster administrators and developers to use for testing, development, and production. In three-node OpenShift Container Platform environments, the three control plane machines are schedulable, which means that your application workloads are scheduled to run on them. Prerequisites You have an existing install-config.yaml file. Procedure Ensure that the number of compute replicas is set to 0 in your install-config.yaml file, as shown in the following compute stanza: compute: - name: worker platform: {} replicas: 0 Note You must set the value of the replicas parameter for the compute machines to 0 when you install OpenShift Container Platform on user-provisioned infrastructure, regardless of the number of compute machines you are deploying. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. This does not apply to user-provisioned installations, where the compute machines are deployed manually. Note The preferred resource for control plane nodes is six vCPUs and 21 GB. For three control plane nodes this is the memory + vCPU equivalent of a minimum five-node cluster. You should back the three nodes, each installed on a 120 GB disk, with three IFLs that are SMT2 enabled. The minimum tested setup is three vCPUs and 10 GB on a 120 GB disk for each control plane node. For three-node cluster installations, follow these steps: If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. See the Load balancing requirements for user-provisioned infrastructure section for more information. When you create the Kubernetes manifest files in the following procedure, ensure that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml file is set to true . This enables your application workloads to run on the control plane nodes. Do not deploy any compute nodes when you create the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 5.9. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group and these fields cannot be changed: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin, such as OpenShift SDN or OVN-Kubernetes. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 5.9.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 5.8. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . 
spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 5.9. defaultNetwork object Field Type Description type string Either OpenShiftSDN or OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. openshiftSDNConfig object This object is only valid for the OpenShift SDN network plugin. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OpenShift SDN network plugin The following table describes the configuration fields for the OpenShift SDN network plugin: Table 5.10. openshiftSDNConfig object Field Type Description mode string Configures the network isolation mode for OpenShift SDN. The default value is NetworkPolicy . The values Multitenant and Subnet are available for backwards compatibility with OpenShift Container Platform 3.x but are not recommended. This value cannot be changed after cluster installation. mtu integer The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1450 . This value cannot be changed after cluster installation. vxlanPort integer The port to use for all VXLAN packets. The default value is 4789 . 
This value cannot be changed after cluster installation. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number. On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999 . Example OpenShift SDN configuration defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789 Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 5.11. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify an empty object to enable IPsec encryption. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the default audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. v4InternalSubnet If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . This field cannot be changed after installation. The default value is 100.64.0.0/16 . v6InternalSubnet If your existing network infrastructure overlaps with the fd98::/48 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. This field cannot be changed after installation. The default value is fd98::/48 .
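If you need non-default values for the fields in this table, you typically set them by adding a manifest before you generate the Ignition config files rather than by editing the object on a running cluster. A minimal sketch, assuming a manifest file named cluster-network-03-config.yml created in the <installation_directory>/manifests directory after you run openshift-install create manifests ; the routingViaHost value shown is illustrative only: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: gatewayConfig: routingViaHost: false The installation program consumes this manifest when it creates the Ignition config files, and the settings are merged into the CNO object named cluster . Table 5.12.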
policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 5.13. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. Note In OpenShift Container Platform 4.12, egress IP is only assigned to the primary interface. Consequently, setting routingViaHost to true will not work for egress IP in OpenShift Container Platform 4.12. For highly specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. Example OVN-Kubernetes configuration with IPsec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {} kubeProxyConfig object configuration The values for the kubeProxyConfig object are defined in the following table: Table 5.14. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 5.10. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time.
If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Note The installation program that generates the manifest and Ignition files is architecture-specific and can be obtained from the client image mirror . The Linux version of the installation program runs on s390x only. The installation program is also available as a macOS version. Prerequisites You obtained the OpenShift Container Platform installation program. For a restricted network installation, these files are on your mirror host. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory. 5.11. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on IBM Z infrastructure that you provision, you must install Red Hat Enterprise Linux CoreOS (RHCOS) as Red Hat Enterprise Linux (RHEL) guest virtual machines. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS machines have rebooted.
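Because a damaged or hand-edited Ignition file is a common cause of a failed first boot, it can help to confirm that each generated file still parses as valid JSON before you provide it to a machine. A minimal sketch, assuming the jq utility is available on the installation node; the file path is illustrative: USD jq -e . <installation_directory>/bootstrap.ign > /dev/null && echo valid Repeat the check for master.ign and worker.ign .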
You can perform a fast-track installation of RHCOS that uses a prepackaged QEMU copy-on-write (QCOW2) disk image. Alternatively, you can perform a full installation on a new QCOW2 disk image. To add further security to your system, you can optionally install RHCOS using IBM Secure Execution before proceeding to the fast-track installation. 5.11.1. Installing RHCOS using IBM Secure Execution Before you install RHCOS using IBM Secure Execution, you must prepare the underlying infrastructure. Important Installing RHCOS using IBM Secure Execution is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites IBM z15 or later, or IBM(R) LinuxONE III or later. Red Hat Enterprise Linux (RHEL) 8 or later. You have a bootstrap Ignition file. The file is not protected, enabling others to view and edit it. You have verified that the boot image has not been altered after installation. You must run all your nodes as IBM Secure Execution guests. Procedure Prepare your RHEL KVM host to support IBM Secure Execution. By default, KVM hosts do not support guests in IBM Secure Execution mode. To support guests in IBM Secure Execution mode, KVM hosts must boot in LPAR mode with the kernel parameter specification prot_virt=1 . To enable prot_virt=1 on RHEL 8, follow these steps: Navigate to /boot/loader/entries/ to modify your bootloader configuration file *.conf . Add the kernel command line parameter prot_virt=1 . Run the zipl command and reboot your system. KVM hosts that successfully start with support for IBM Secure Execution for Linux issue the following kernel message: prot_virt: Reserving <amount>MB as ultravisor base storage. To verify that the KVM host now supports IBM Secure Execution, run the following command: # cat /sys/firmware/uv/prot_virt_host Example output 1 The value of this attribute is 1 for Linux instances that detect their environment as consistent with that of a secure host. For other instances, the value is 0. Add your host keys to the KVM guest via Ignition. During the first boot, RHCOS looks for your host keys to re-encrypt itself with them. RHCOS searches for files starting with ibm-z-hostkey- in the /etc/se-hostkeys directory. All host keys, for each machine the cluster is running on, must be loaded into the directory by the administrator. After first boot, you cannot run the VM on any other machines. Note You need to prepare your Ignition file on a safe system, such as another IBM Secure Execution guest. For example: { "ignition": { "version": "3.0.0" }, "storage": { "files": [ { "path": "/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt", "contents": { "source": "data:;base64,<base64 encoded hostkey document>" }, "mode": 420 }, { "path": "/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt", "contents": { "source": "data:;base64,<base64 encoded hostkey document>" }, "mode": 420 } ] } } Note You can add as many host keys as required if you want your node to be able to run on multiple IBM Z machines.
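Each contents.source value in the example is a data URL that embeds the Base64-encoded host key document directly in the Ignition file. A minimal sketch that builds such a URL from a certificate file, assuming the GNU coreutils base64 utility; the file name is illustrative: USD echo "data:;base64,USD(base64 -w0 ibm-z-hostkey-example.crt)" The -w0 option disables line wrapping, which would otherwise break the data URL.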
Alternatively, to generate only the Base64 encoded string, run the following command: base64 <your-hostkey>.crt Compared to guests not running IBM Secure Execution, the first boot of the machine is longer because the entire image is encrypted with a randomly generated LUKS passphrase before the Ignition phase. Follow the fast-track installation procedure to install nodes using the IBM Secure Execution QCOW image. Additional resources Introducing IBM Secure Execution for Linux Linux as an IBM Secure Execution host or guest 5.11.2. Fast-track installation by using a prepackaged QCOW2 disk image Complete the following steps to create the machines in a fast-track installation of Red Hat Enterprise Linux CoreOS (RHCOS), importing a prepackaged Red Hat Enterprise Linux CoreOS (RHCOS) QEMU copy-on-write (QCOW2) disk image. Prerequisites At least one LPAR running on RHEL 8.4 or later with KVM, referred to as RHEL KVM host in this procedure. The KVM/QEMU hypervisor is installed on the RHEL KVM host. A domain name server (DNS) that can perform hostname and reverse lookup for the nodes. A DHCP server that provides IP addresses. Procedure Obtain the RHCOS QEMU copy-on-write (QCOW2) disk image file from the Product Downloads page on the Red Hat Customer Portal or from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate RHCOS QCOW2 image described in the following procedure. Download the QCOW2 disk image and Ignition files to a common directory on the RHEL KVM host. For example: /var/lib/libvirt/images Note The Ignition files are generated by the OpenShift Container Platform installer. Create a new disk image with the QCOW2 disk image backing file for each KVM guest node. USD qemu-img create -f qcow2 -F qcow2 -b /var/lib/libvirt/images/{source_rhcos_qemu} /var/lib/libvirt/images/{vmname}.qcow2 {size} Create the new KVM guest nodes using the Ignition file and the new disk image. USD virt-install --noautoconsole \ --connect qemu:///system \ --name {vn_name} \ --memory {memory} \ --vcpus {vcpus} \ --disk {disk} \ --import \ --network network={network},mac={mac} \ --disk path={ign_file},format=raw,readonly=on,serial=ignition,startup_policy=optional 5.11.3. Full installation on a new QCOW2 disk image Complete the following steps to create the machines in a full installation on a new QEMU copy-on-write (QCOW2) disk image. Prerequisites At least one LPAR running on RHEL 8.4 or later with KVM, referred to as RHEL KVM host in this procedure. The KVM/QEMU hypervisor is installed on the RHEL KVM host. A domain name server (DNS) that can perform hostname and reverse lookup for the nodes. An HTTP or HTTPS server is set up. Procedure Obtain the RHCOS kernel, initramfs, and rootfs files from the Product Downloads page on the Red Hat Customer Portal or from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate RHCOS artifacts described in the following procedure. The file names contain the OpenShift Container Platform version number.
They resemble the following examples: kernel: rhcos-<version>-live-kernel-<architecture> initramfs: rhcos-<version>-live-initramfs.<architecture>.img rootfs: rhcos-<version>-live-rootfs.<architecture>.img Move the downloaded RHCOS live kernel, initramfs, and rootfs as well as the Ignition files to an HTTP or HTTPS server before you launch virt-install . Note The Ignition files are generated by the OpenShift Container Platform installer. Create the new KVM guest nodes using the RHCOS kernel, initramfs, and Ignition files, the new disk image, and adjusted parm line arguments. For --location , specify the location of the kernel/initrd on the HTTP or HTTPS server. For coreos.inst.ignition_url= , specify the Ignition file for the machine role. Use bootstrap.ign , master.ign , or worker.ign . Only HTTP and HTTPS protocols are supported. For coreos.live.rootfs_url= , specify the matching rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. USD virt-install \ --connect qemu:///system \ --name {vn_name} \ --vcpus {vcpus} \ --memory {memory_mb} \ --disk {vn_name}.qcow2,size={image_size| default(10,true)} \ --network network={virt_network_parm} \ --boot hd \ --location {media_location},kernel={rhcos_kernel},initrd={rhcos_initrd} \ --extra-args "rd.neednet=1 coreos.inst=yes coreos.inst.install_dev=vda coreos.live.rootfs_url={rhcos_liveos} ip={ip}::{default_gateway}:{subnet_mask_length}:{vn_name}:enc1:none:{MTU} nameserver={dns} coreos.inst.ignition_url={rhcos_ign}" \ --noautoconsole \ --wait 5.11.4. Advanced RHCOS installation reference This section illustrates the networking configuration and other advanced options that allow you to modify the Red Hat Enterprise Linux CoreOS (RHCOS) manual installation process. The following tables describe the kernel arguments and command-line options you can use with the RHCOS live installer and the coreos-installer command. 5.11.4.1. Networking options for ISO installations If you install RHCOS from an ISO image, you can add kernel arguments manually when you boot the image to configure networking for a node. If no networking arguments are specified, DHCP is activated in the initramfs when RHCOS detects that networking is required to fetch the Ignition config file. Important When adding networking arguments manually, you must also add the rd.neednet=1 kernel argument to bring the network up in the initramfs. The following information provides examples for configuring networking on your RHCOS nodes for ISO installations. The examples describe how to use the ip= and nameserver= kernel arguments. Note Ordering is important when adding the kernel arguments: ip= and nameserver= . The networking options are passed to the dracut tool during system boot. For more information about the networking options supported by dracut , see the dracut.cmdline manual page. The following examples are the networking options for ISO installation. Configuring DHCP or static IP addresses To configure an IP address, either use DHCP ( ip=dhcp ) or set an individual static IP address ( ip=<host_ip> ). If setting a static IP, you must then identify the DNS server IP address ( nameserver=<dns_ip> ) on each node. The following example sets: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The hostname to core0.example.com The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically.
ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41 Note When you use DHCP to configure IP addressing for the RHCOS machines, the machines also obtain the DNS server information through DHCP. For DHCP-based deployments, you can define the DNS server address that is used by the RHCOS nodes through your DHCP server configuration. Configuring an IP address without a static hostname You can configure an IP address without assigning a static hostname. If a static hostname is not set by the user, it will be picked up and automatically set by a reverse DNS lookup. To configure an IP address without a static hostname, refer to the following example: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41 Specifying multiple network interfaces You can specify multiple network interfaces by setting multiple ip= entries. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring default gateway and route Optional: You can configure routes to additional networks by setting an rd.route= value. Note When you configure one or multiple networks, one default gateway is required. If the additional network gateway is different from the primary network gateway, the default gateway must be the primary network gateway. Run the following command to configure the default gateway: ip=::10.10.10.254:::: Enter the following command to configure the route for the additional network: rd.route=20.20.20.0/24:20.20.20.254:enp2s0 Disabling DHCP on a single interface You can disable DHCP on a single interface, such as when there are two or more network interfaces and only one interface is being used. In the example, the enp1s0 interface has a static networking configuration and DHCP is disabled for enp2s0 , which is not used: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none Combining DHCP and static IP configurations You can combine DHCP and static IP configurations on systems with multiple network interfaces, for example: ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring VLANs on individual interfaces Optional: You can configure VLANs on individual interfaces by using the vlan= parameter. To configure a VLAN on a network interface and use a static IP address, run the following command: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0 To configure a VLAN on a network interface and to use DHCP, run the following command: ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0 Providing multiple DNS servers You can provide multiple DNS servers by adding a nameserver= entry for each server, for example: nameserver=1.1.1.1 nameserver=8.8.8.8 5.12. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete.
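Note If the bootstrap process appears to stall, you can watch it directly on the bootstrap machine while you wait. A minimal sketch, assuming the bootstrap host name from the DNS examples earlier in this document and an SSH key that is loaded in your agent: USD ssh core@bootstrap.ocp4.example.com journalctl -b -f -u release-image.service -u bootkube.service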
Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.25.0 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. 5.13. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 5.14. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.25.0 master-1 Ready master 63m v1.25.0 master-2 Ready master 64m v1.25.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. 
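On clusters with many nodes, it can be easier to list only the requests that still need attention. A minimal sketch using standard shell filtering: USD oc get csr | grep Pending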
If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.25.0 master-1 Ready master 73m v1.25.0 master-2 Ready master 74m v1.25.0 worker-0 Ready worker 11m v1.25.0 worker-1 Ready worker 11m v1.25.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 5.15. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. 
Prerequisites Your control plane has initialized. Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m Configure the Operators that are not available. 5.15.1. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration → Cluster Settings → Configuration → OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 5.15.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 5.15.2.1. Configuring registry storage for IBM Z As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a cluster on IBM Z. You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Data Foundation.
Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. The registry storage must have 100Gi capacity. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.12 True False False 6h50m Ensure that your registry is set to managed to enable building and pushing of images. Run: USD oc edit configs.imageregistry/cluster Then, change the line managementState: Removed to managementState: Managed 5.15.2.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 5.16. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration.
Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m Alternatively, the following command notifies you when the cluster is available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from the Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods.
Confirm that the Kubernetes API server is communicating with the pods. To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Post-installation machine configuration tasks documentation for more information. Register your cluster on the Cluster registration page. Additional resources How to generate SOSREPORT within OpenShift Container Platform version 4 nodes without SSH . 5.17. Next steps Customize your cluster . If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores . If necessary, you can opt out of remote health reporting . If necessary, see Registering your disconnected cluster .
[ "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 
604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 additionalTrustBundle: | 17 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 18 - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "compute: - name: worker platform: {} replicas: 0", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create manifests --dir <installation_directory> 1", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "prot_virt: Reserving <amount>MB as ultravisor base storage.", "cat /sys/firmware/uv/prot_virt_host", "1", "{ \"ignition\": { \"version\": \"3.0.0\" }, \"storage\": { \"files\": [ { \"path\": \"/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt\", \"contents\": { \"source\": \"data:;base64,<base64 encoded hostkey document>\" }, \"mode\": 420 }, { \"path\": \"/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt\", \"contents\": { \"source\": \"data:;base64,<base64 encoded hostkey document>\" }, \"mode\": 420 } ] } } ```", "base64 <your-hostkey>.crt", "qemu-img create -f qcow2 -F qcow2 -b /var/lib/libvirt/images/{source_rhcos_qemu} /var/lib/libvirt/images/{vmname}.qcow2 {size}", "virt-install --noautoconsole --connect qemu:///system --name {vn_name} --memory {memory} --vcpus {vcpus} --disk {disk} --import --network network={network},mac={mac} --disk path={ign_file},format=raw,readonly=on,serial=ignition,startup_policy=optional", "virt-install --connect qemu:///system --name {vn_name} --vcpus {vcpus} --memory {memory_mb} --disk {vn_name}.qcow2,size={image_size| default(10,true)} --network network={virt_network_parm} --boot hd --location {media_location},kernel={rhcos_kernel},initrd={rhcos_initrd} --extra-args \"rd.neednet=1 coreos.inst=yes coreos.inst.install_dev=vda coreos.live.rootfs_url={rhcos_liveos} ip={ip}::{default_gateway}:{subnet_mask_length}:{vn_name}:enc1:none:{MTU} nameserver={dns} coreos.inst.ignition_url={rhcos_ign}\" --noautoconsole --wait", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=::10.10.10.254::::", "rd.route=20.20.20.0/24:20.20.20.254:enp2s0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none", "ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0", "ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0", "nameserver=1.1.1.1 nameserver=8.8.8.8", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.25.0 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.25.0 master-1 Ready master 63m v1.25.0 master-2 Ready master 64m v1.25.0", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s 
system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.25.0 master-1 Ready master 73m v1.25.0 master-2 Ready master 74m v1.25.0 worker-0 Ready worker 11m v1.25.0 worker-1 Ready worker 11m v1.25.0", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resources found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim:", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.12 True False False 6h50m", "oc edit configs.imageregistry/cluster", "managementState: Removed", "managementState: Managed", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 
29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1" ]
Chapter 4. Post-installation cluster tasks After installing OpenShift Container Platform, you can further expand and customize your cluster to your requirements. 4.1. Available cluster customizations You complete most of the cluster configuration and customization after you deploy your OpenShift Container Platform cluster. A number of configuration resources are available. Note If you install your cluster on IBM Z, not all features and functions are available. You modify the configuration resources to configure the major features of the cluster, such as the image registry, networking configuration, image build behavior, and the identity provider. For current documentation of the settings that you control by using these resources, use the oc explain command, for example oc explain builds --api-version=config.openshift.io/v1 . 4.1.1. Cluster configuration resources All cluster configuration resources are globally scoped (not namespaced) and named cluster . Resource name Description apiserver.config.openshift.io Provides API server configuration such as certificates and certificate authorities . authentication.config.openshift.io Controls the identity provider and authentication configuration for the cluster. build.config.openshift.io Controls default and enforced configuration for all builds on the cluster. console.config.openshift.io Configures the behavior of the web console interface, including the logout behavior . featuregate.config.openshift.io Enables FeatureGates so that you can use Tech Preview features. image.config.openshift.io Configures how specific image registries should be treated (allowed, disallowed, insecure, CA details). ingress.config.openshift.io Configuration details related to routing such as the default domain for routes. oauth.config.openshift.io Configures identity providers and other behavior related to internal OAuth server flows. project.config.openshift.io Configures how projects are created, including the project template. proxy.config.openshift.io Defines proxies to be used by components needing external network access. Note: not all components currently consume this value. scheduler.config.openshift.io Configures scheduler behavior such as policies and default node selectors. 4.1.2. Operator configuration resources These configuration resources are cluster-scoped instances, named cluster , which control the behavior of a specific component as owned by a particular Operator. Resource name Description consoles.operator.openshift.io Controls console appearance such as branding customizations. config.imageregistry.operator.openshift.io Configures internal image registry settings such as public routing, log levels, proxy settings, resource constraints, replica counts, and storage type. config.samples.operator.openshift.io Configures the Samples Operator to control which example image streams and templates are installed on the cluster.
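Because each of these resources is cluster-scoped and named cluster , you can inspect any of them with standard oc commands. The following read-only command, shown only as an illustration, retrieves the internal image registry configuration: USD oc get configs.imageregistry.operator.openshift.io cluster -o yaml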
4.1.3. Additional configuration resources These configuration resources represent a single instance of a particular component. In some cases, you can request multiple instances by creating multiple instances of the resource. In other cases, the Operator can use only a specific resource instance name in a specific namespace. Reference the component-specific documentation for details on how and when you can create additional resource instances. Resource name Instance name Namespace Description alertmanager.monitoring.coreos.com main openshift-monitoring Controls the Alertmanager deployment parameters. ingresscontroller.operator.openshift.io default openshift-ingress-operator Configures Ingress Operator behavior such as domain, number of replicas, certificates, and controller placement. 4.1.4. Informational resources You use these resources to retrieve information about the cluster. Some configurations might require you to edit these resources directly. Resource name Instance name Description clusterversion.config.openshift.io version In OpenShift Container Platform 4.7, you must not customize the ClusterVersion resource for production clusters. Instead, follow the process to update a cluster . dns.config.openshift.io cluster You cannot modify the DNS settings for your cluster. You can view the DNS Operator status . infrastructure.config.openshift.io cluster Configuration details allowing the cluster to interact with its cloud provider. network.config.openshift.io cluster You cannot modify your cluster networking after installation. To customize your network, follow the process to customize networking during installation . 4.2. Updating the global cluster pull secret You can update the global pull secret for your cluster by either replacing the current pull secret or appending a new pull secret. The procedure is required when users use a registry to store images that is different from the registry that was used during installation. Warning Cluster resources must adjust to the new pull secret, which can temporarily limit the usability of the cluster. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Optional: To append a new pull secret to the existing pull secret, complete the following steps: Enter the following command to download the pull secret: USD oc get secret/pull-secret -n openshift-config --template='{{index .data ".dockerconfigjson" | base64decode}}' ><pull_secret_location> 1 1 Provide the path to the pull secret file. Enter the following command to add the new pull secret: USD oc registry login --registry="<registry>" \ 1 --auth-basic="<username>:<password>" \ 2 --to=<pull_secret_location> 3 1 Provide the new registry. You can include multiple repositories within the same registry, for example: --registry="<registry/my-namespace/my-repository>" . 2 Provide the credentials of the new registry. 3 Provide the path to the pull secret file. Alternatively, you can perform a manual update to the pull secret file. Enter the following command to update the global pull secret for your cluster: USD oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1 1 Provide the path to the new pull secret file. This update is rolled out to all nodes, which can take some time depending on the size of your cluster. Note As of OpenShift Container Platform 4.7.4, changes to the global pull secret no longer trigger a node drain or reboot. 4.3. Adjusting worker nodes If you incorrectly sized the worker nodes during deployment, adjust them by creating one or more new machine sets, scaling them up, and then scaling the original machine set down before removing it. 4.3.1. Understanding the difference between machine sets and the machine config pool MachineSet objects describe OpenShift Container Platform nodes with respect to the cloud or machine provider. The MachineConfigPool object allows MachineConfigController components to define and provide the status of machines in the context of upgrades. The MachineConfigPool object allows users to configure how upgrades are rolled out to the OpenShift Container Platform nodes in the machine config pool. The NodeSelector object can be replaced with a reference to the MachineSet object.
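To see both views side by side in a running cluster, you can list the machine sets and the machine config pools. These read-only commands are shown for orientation only: USD oc get machinesets -n openshift-machine-api USD oc get mcp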
4.3.2. Scaling a machine set manually To add or remove an instance of a machine in a machine set, you can manually scale the machine set. This guidance is relevant to fully automated, installer-provisioned infrastructure installations. Customized, user-provisioned infrastructure installations do not have machine sets. Prerequisites Install an OpenShift Container Platform cluster and the oc command-line tool. Log in to oc as a user with cluster-admin permission. Procedure View the machine sets that are in the cluster: USD oc get machinesets -n openshift-machine-api The machine sets are listed in the form of <clusterid>-worker-<aws-region-az> . View the machines that are in the cluster: USD oc get machine -n openshift-machine-api Set the annotation on the machine that you want to delete: USD oc annotate machine/<machine_name> -n openshift-machine-api machine.openshift.io/cluster-api-delete-machine="true" Cordon and drain the node that you want to delete: USD oc adm cordon <node_name> USD oc adm drain <node_name> Scale the machine set: USD oc scale --replicas=2 machineset <machineset> -n openshift-machine-api Or: USD oc edit machineset <machineset> -n openshift-machine-api You can scale the machine set up or down. It takes several minutes for the new machines to be available. Verification Verify the deletion of the intended machine: USD oc get machines 4.3.3. The machine set deletion policy Random , Newest , and Oldest are the three supported deletion options. The default is Random , meaning that random machines are chosen and deleted when scaling machine sets down. The deletion policy can be set according to the use case by modifying the particular machine set: spec: deletePolicy: <delete_policy> replicas: <desired_replica_count> Specific machines can also be prioritized for deletion by adding the annotation machine.openshift.io/cluster-api-delete-machine=true to the machine of interest, regardless of the deletion policy. Important By default, the OpenShift Container Platform router pods are deployed on workers. Because the router is required to access some cluster resources, including the web console, do not scale the worker machine set to 0 unless you first relocate the router pods. Note Custom machine sets can be used for use cases requiring that services run on specific nodes and that those services are ignored by the controller when the worker machine sets are scaling down. This prevents service disruption.
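You can also set the deletion policy without opening an editor by patching the machine set. This is a minimal sketch; the machine set name is a placeholder, and deletePolicy takes one of the three values described above: USD oc patch machineset <machineset> -n openshift-machine-api --type merge --patch '{"spec":{"deletePolicy":"Oldest"}}'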
4.3.4. Creating default cluster-wide node selectors You can use default cluster-wide node selectors on pods together with labels on nodes to constrain all pods created in a cluster to specific nodes. With cluster-wide node selectors, when you create a pod in that cluster, OpenShift Container Platform adds the default node selectors to the pod and schedules the pod on nodes with matching labels. You configure cluster-wide node selectors by editing the Scheduler Operator custom resource (CR). You add labels to a node, a machine set, or a machine config. Adding the label to the machine set ensures that if the node or machine goes down, new nodes have the label. Labels added to a node or machine config do not persist if the node or machine goes down. Note You can add additional key/value pairs to a pod. But you cannot add a different value for a default key. Procedure To add a default cluster-wide node selector: Edit the Scheduler Operator CR to add the default cluster-wide node selectors: USD oc edit scheduler cluster Example Scheduler Operator CR with a node selector apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster ... spec: defaultNodeSelector: type=user-node,region=east 1 mastersSchedulable: false policy: name: "" 1 Add a node selector with the appropriate <key>:<value> pairs. After making this change, wait for the pods in the openshift-kube-apiserver project to redeploy. This can take several minutes. The default cluster-wide node selector does not take effect until the pods redeploy. Add labels to a node by using a machine set or editing the node directly: Use a machine set to add labels to nodes managed by the machine set when a node is created: Run the following command to add labels to a MachineSet object: USD oc patch MachineSet <name> --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels", "value":{"<key>":"<value>","<key>":"<value>"}}]' -n openshift-machine-api 1 1 Add a <key>/<value> pair for each label. For example: USD oc patch MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels", "value":{"type":"user-node","region":"east"}}]' -n openshift-machine-api Verify that the labels are added to the MachineSet object by using the oc edit command: For example: USD oc edit MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: ... spec: ... template: metadata: ... spec: metadata: labels: region: east type: user-node Redeploy the nodes associated with that machine set by scaling down to 0 and scaling up the nodes: For example: USD oc scale --replicas=0 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api USD oc scale --replicas=1 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api When the nodes are ready and available, verify that the label is added to the nodes by using the oc get command: USD oc get nodes -l <key>=<value> For example: USD oc get nodes -l type=user-node Example output NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp Ready worker 61s v1.18.3+002a51f Add labels directly to a node: Edit the Node object for the node: USD oc label nodes <name> <key>=<value> For example, to label a node: USD oc label nodes ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 type=user-node region=east Verify that the labels are added to the node using the oc get command: USD oc get nodes -l <key>=<value>,<key>=<value> For example: USD oc get nodes -l type=user-node,region=east Example output NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 Ready worker 17m v1.18.3+002a51f
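After the pods in the openshift-kube-apiserver project redeploy, you can confirm the setting by reading the Scheduler resource back. This is a supplementary, read-only check: USD oc get scheduler cluster -o yaml The defaultNodeSelector field in the output shows the key/value pairs that you configured.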
4.4. Creating infrastructure machine sets for production environments You can create a machine set to create machines that host only infrastructure components, such as the default router, the integrated container image registry, and components for cluster metrics and monitoring. These infrastructure machines are not counted toward the total number of subscriptions that are required to run the environment. In a production deployment, it is recommended that you deploy at least three machine sets to hold infrastructure components. Both OpenShift Logging and Red Hat OpenShift Service Mesh deploy Elasticsearch, which requires three instances to be installed on different nodes. Each of these nodes can be deployed to different availability zones for high availability. A configuration like this requires three different machine sets, one for each availability zone. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. For information on infrastructure nodes and which components can run on infrastructure nodes, see Creating infrastructure machine sets . For sample machine sets that you can use with these procedures, see Creating machine sets for different clouds . 4.4.1. Creating a machine set In addition to the ones created by the installation program, you can create your own machine sets to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. If you are not sure which value to set for a specific field, you can check an existing machine set from your cluster: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m Check values of a specific machine set: USD oc get machineset <machineset_name> -n \ openshift-machine-api -o yaml Example output ... template: metadata: labels: machine.openshift.io/cluster-api-cluster: agl030519-vplxk 1 machine.openshift.io/cluster-api-machine-role: worker 2 machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: agl030519-vplxk-worker-us-east-1a 1 The cluster ID. 2 A default node label. Create the new MachineSet CR: USD oc create -f <file_name>.yaml View the list of machine sets: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new machine set is available, the DESIRED and CURRENT values match. If the machine set is not available, wait a few minutes and run the command again.
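While you wait for the DESIRED and CURRENT values to match, you can optionally watch the individual machines that the new machine set provisions by using the standard watch flag: USD oc get machines -n openshift-machine-api -w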
4.4.2. Creating an infrastructure node Important See Creating infrastructure machine sets for installer-provisioned infrastructure environments or for any cluster where the control plane nodes (also known as the master nodes) are managed by the machine API. Requirements of the cluster dictate that infrastructure, also called infra nodes, be provisioned. The installation program provisions only control plane and worker nodes. Worker nodes can be designated as infrastructure (infra) nodes or application (app) nodes through labeling. Procedure Add a label to the worker node that you want to act as an application node: USD oc label node <node-name> node-role.kubernetes.io/app="" Add a label to the worker nodes that you want to act as infrastructure nodes: USD oc label node <node-name> node-role.kubernetes.io/infra="" Check whether the applicable nodes now have the infra and app roles: USD oc get nodes Create a default cluster-wide node selector. The default node selector is applied to pods created in all namespaces. This creates an intersection with any existing node selectors on a pod, which additionally constrains the pod's selector. Important If the default node selector key conflicts with the key of a pod's label, then the default node selector is not applied. However, do not set a default node selector that might cause a pod to become unschedulable. For example, setting the default node selector to a specific node role, such as node-role.kubernetes.io/infra="" , when a pod's label is set to a different node role, such as node-role.kubernetes.io/master="" , can cause the pod to become unschedulable. For this reason, use caution when setting the default node selector to specific node roles. You can alternatively use a project node selector to avoid cluster-wide node selector key conflicts. Edit the Scheduler object: USD oc edit scheduler cluster Add the defaultNodeSelector field with the appropriate node selector: apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster ... spec: defaultNodeSelector: topology.kubernetes.io/region=us-east-1 1 ... 1 This example node selector deploys pods on nodes in the us-east-1 region by default. Save the file to apply the changes. You can now move infrastructure resources to the newly labeled infra nodes. Additional resources For information on how to configure project node selectors to avoid cluster-wide node selector key conflicts, see Project node selectors . 4.4.3. Creating a machine config pool for infrastructure machines If you need infrastructure machines to have dedicated configurations, you must create an infra pool. Procedure Add a label to the node you want to assign as the infra node with a specific label: USD oc label node <node_name> <label> USD oc label node ci-ln-n8mqwr2-f76d1-xscn2-worker-c-6fmtx node-role.kubernetes.io/infra= Create a machine config pool that contains both the worker role and your custom role as machine config selector: USD cat infra.mcp.yaml Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: infra spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]} 1 nodeSelector: matchLabels: node-role.kubernetes.io/infra: "" 2 1 Add the worker role and your custom role. 2 Add the label you added to the node as a nodeSelector . Note Custom machine config pools inherit machine configs from the worker pool. Custom pools use any machine config targeted for the worker pool, but add the ability to also deploy changes that are targeted at only the custom pool. Because a custom pool inherits resources from the worker pool, any change to the worker pool also affects the custom pool.
After you have the YAML file, you can create the machine config pool: USD oc create -f infra.mcp.yaml Check the machine configs to ensure that the infrastructure configuration rendered successfully: USD oc get machineconfig Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED 00-master 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 00-worker 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-1ae2a1e0-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-ssh 3.2.0 31d 99-worker-1ae64748-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-worker-ssh 3.2.0 31d rendered-infra-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 23m rendered-master-072d4b2da7f88162636902b074e9e28e 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-master-3e88ec72aed3886dec061df60d16d1af 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-master-419bee7de96134963a15fdf9dd473b25 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-master-53f5c91c7661708adce18739cc0f40fb 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d rendered-master-a6a357ec18e5bce7f5ac426fc7c5ffcd 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-master-dc7f874ec77fc4b969674204332da037 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-1a75960c52ad18ff5dfa6674eb7e533d 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-2640531be11ba43c61d72e82dc634ce6 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-worker-4f110718fe88e5f349987854a1147755 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-worker-afc758e194d6188677eb837842d3b379 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-worker-daa08cc1e8f5fcdeba24de60cd955cc3 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d You should see a new machine config, with the rendered-infra-* prefix. Optional: To deploy changes to a custom pool, create a machine config that uses the custom pool name as the label, such as infra . Note that this is not required and only shown for instructional purposes. In this manner, you can apply any custom configurations specific to only your infra nodes. Note After you create the new machine config pool, the MCO generates a new rendered config for that pool, and associated nodes of that pool reboot to apply the new configuration. Create a machine config: USD cat infra.mc.yaml Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 51-infra labels: machineconfiguration.openshift.io/role: infra 1 spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/infratest mode: 0644 contents: source: data:,infra 1 Add the custom role label so that the machineConfigSelector of the machine config pool that you created selects this machine config.
Apply the machine config to the infra-labeled nodes: USD oc create -f infra.mc.yaml Confirm that your new machine config pool is available: USD oc get mcp Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE infra rendered-infra-60e35c2e99f42d976e084fa94da4d0fc True False False 1 1 1 0 4m20s master rendered-master-9360fdb895d4c131c7c4bebbae099c90 True False False 3 3 3 0 91m worker rendered-worker-60e35c2e99f42d976e084fa94da4d0fc True False False 2 2 2 0 91m In this example, a worker node was changed to an infra node. Additional resources See Node configuration management with machine config pools for more information on grouping infra machines in a custom pool. 4.5. Assigning machine set resources to infrastructure nodes After creating an infrastructure machine set, the worker and infra roles are applied to new infra nodes. Nodes with the infra role are not counted toward the total number of subscriptions that are required to run the environment, even when the worker role is also applied. However, when an infra node is assigned the worker role, there is a chance that user workloads can get assigned inadvertently to the infra node. To avoid this, you can apply a taint to the infra node and tolerations for the pods that you want to control. 4.5.1. Binding infrastructure node workloads using taints and tolerations If you have an infra node that has the infra and worker roles assigned, you must configure the node so that user workloads are not assigned to it. Important It is recommended that you preserve the dual infra,worker label that is created for infra nodes and use taints and tolerations to manage nodes that user workloads are scheduled on. If you remove the worker label from the node, you must create a custom pool to manage it. A node with a label other than master or worker is not recognized by the MCO without a custom pool. Maintaining the worker label allows the node to be managed by the default worker machine config pool, if no custom pool that selects the custom label exists. The infra label communicates to the cluster that it does not count toward the total number of subscriptions. Prerequisites Configure additional MachineSet objects in your OpenShift Container Platform cluster. Procedure Add a taint to the infra node to prevent scheduling user workloads on it: Determine if the node has the taint: USD oc describe nodes <node_name> Example output oc describe node ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Name: ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Roles: worker ... Taints: node-role.kubernetes.io/infra:NoSchedule ... This example shows that the node has a taint. You can proceed with adding a toleration to your pod in the next step. If you have not configured a taint to prevent scheduling user workloads on it: USD oc adm taint nodes <node_name> <key>:<effect> For example: USD oc adm taint nodes node1 node-role.kubernetes.io/infra:NoSchedule This example places a taint on node1 that has key node-role.kubernetes.io/infra and taint effect NoSchedule . Nodes with the NoSchedule effect schedule only pods that tolerate the taint, but allow existing pods to remain scheduled on the node. Note If a descheduler is used, pods violating node taints could be evicted from the cluster. Add tolerations for the pod configurations you want to schedule on the infra node, like router, registry, and monitoring workloads.
Add the following code to the Pod object specification: tolerations: - effect: NoSchedule 1 key: node-role.kubernetes.io/infra 2 operator: Exists 3 1 Specify the effect that you added to the node. 2 Specify the key that you added to the node. 3 Specify the Exists Operator to require a taint with the key node-role.kubernetes.io/infra to be present on the node. This toleration matches the taint created by the oc adm taint command. A pod with this toleration can be scheduled onto the infra node. Note Moving pods for an Operator installed via OLM to an infra node is not always possible. The capability to move Operator pods depends on the configuration of each Operator. Schedule the pod to the infra node using a scheduler. See the documentation for Controlling pod placement onto nodes for details. Additional resources See Controlling pod placement using the scheduler for general information on scheduling a pod to a node. 4.6. Moving resources to infrastructure machine sets Some of the infrastructure resources are deployed in your cluster by default. You can move them to the infrastructure machine sets that you created. 4.6.1. Moving the router You can deploy the router pod to a different machine set. By default, the pod is deployed to a worker node. Prerequisites Configure additional machine sets in your OpenShift Container Platform cluster. Procedure View the IngressController custom resource for the router Operator: USD oc get ingresscontroller default -n openshift-ingress-operator -o yaml The command output resembles the following text: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: 2019-04-18T12:35:39Z finalizers: - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller generation: 1 name: default namespace: openshift-ingress-operator resourceVersion: "11341" selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default uid: 79509e05-61d6-11e9-bc55-02ce4781844a spec: {} status: availableReplicas: 2 conditions: - lastTransitionTime: 2019-04-18T12:36:15Z status: "True" type: Available domain: apps.<cluster>.example.com endpointPublishingStrategy: type: LoadBalancerService selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default Edit the ingresscontroller resource and change the nodeSelector to use the infra label: USD oc edit ingresscontroller default -n openshift-ingress-operator Add the nodeSelector stanza that references the infra label to the spec section, as shown: spec: nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/infra: "" Confirm that the router pod is running on the infra node. View the list of router pods and note the node name of the running pod: USD oc get pod -n openshift-ingress -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES router-default-86798b4b5d-bdlvd 1/1 Running 0 28s 10.130.2.4 ip-10-0-217-226.ec2.internal <none> <none> router-default-955d875f4-255g8 0/1 Terminating 0 19h 10.129.2.4 ip-10-0-148-172.ec2.internal <none> <none> In this example, the running pod is on the ip-10-0-217-226.ec2.internal node. View the node status of the running pod: USD oc get node <node_name> 1 1 Specify the <node_name> that you obtained from the pod list. Example output NAME STATUS ROLES AGE VERSION ip-10-0-217-226.ec2.internal Ready infra,worker 17h v1.20.0 Because the role list includes infra , the pod is running on the correct node.
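As an alternative to the interactive edit, you can apply the same node placement with a single patch. This is a minimal sketch of the change shown above: USD oc patch ingresscontroller default -n openshift-ingress-operator --type merge --patch '{"spec":{"nodePlacement":{"nodeSelector":{"matchLabels":{"node-role.kubernetes.io/infra":""}}}}}'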
4.6.2. Moving the default registry You configure the registry Operator to deploy its pods to different nodes. Prerequisites Configure additional machine sets in your OpenShift Container Platform cluster. Procedure View the config/instance object: USD oc get configs.imageregistry.operator.openshift.io/cluster -o yaml Example output apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: creationTimestamp: 2019-02-05T13:52:05Z finalizers: - imageregistry.operator.openshift.io/finalizer generation: 1 name: cluster resourceVersion: "56174" selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster uid: 36fd3724-294d-11e9-a524-12ffeee2931b spec: httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623 logging: 2 managementState: Managed proxy: {} replicas: 1 requests: read: {} write: {} storage: s3: bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c region: us-east-1 status: ... Edit the config/instance object: USD oc edit configs.imageregistry.operator.openshift.io/cluster Modify the spec section of the object to resemble the following YAML: spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: namespaces: - openshift-image-registry topologyKey: kubernetes.io/hostname weight: 100 logLevel: Normal managementState: Managed nodeSelector: node-role.kubernetes.io/infra: "" Verify that the registry pod has been moved to the infrastructure node. Run the following command to identify the node where the registry pod is located: USD oc get pods -o wide -n openshift-image-registry Confirm the node has the label you specified: USD oc describe node <node_name> Review the command output and confirm that node-role.kubernetes.io/infra is in the LABELS list. 4.6.3. Moving the monitoring solution By default, the Prometheus Cluster Monitoring stack, which contains Prometheus, Grafana, and AlertManager, is deployed to provide cluster monitoring. It is managed by the Cluster Monitoring Operator. To move its components to different machines, you create and apply a custom config map. Procedure Save the following ConfigMap definition as the cluster-monitoring-configmap.yaml file: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |+ alertmanagerMain: nodeSelector: node-role.kubernetes.io/infra: "" prometheusK8s: nodeSelector: node-role.kubernetes.io/infra: "" prometheusOperator: nodeSelector: node-role.kubernetes.io/infra: "" grafana: nodeSelector: node-role.kubernetes.io/infra: "" k8sPrometheusAdapter: nodeSelector: node-role.kubernetes.io/infra: "" kubeStateMetrics: nodeSelector: node-role.kubernetes.io/infra: "" telemeterClient: nodeSelector: node-role.kubernetes.io/infra: "" openshiftStateMetrics: nodeSelector: node-role.kubernetes.io/infra: "" thanosQuerier: nodeSelector: node-role.kubernetes.io/infra: "" Applying this config map forces the components of the monitoring stack to redeploy to infrastructure nodes. Apply the new config map: USD oc create -f cluster-monitoring-configmap.yaml Watch the monitoring pods move to the new machines: USD watch 'oc get pod -n openshift-monitoring -o wide' If a component has not moved to the infra node, delete the pod with this component: USD oc delete pod -n openshift-monitoring <pod> The component from the deleted pod is re-created on the infra node.
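You can also confirm that the config map was stored as you expect before or while the pods move. This is a supplementary, read-only check: USD oc get configmap cluster-monitoring-config -n openshift-monitoring -o yaml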
4.6.4. Moving OpenShift Logging resources You can configure the Cluster Logging Operator to deploy the pods for OpenShift Logging components, such as Elasticsearch and Kibana, to different nodes. You cannot move the Cluster Logging Operator pod from its installed location. For example, you can move the Elasticsearch pods to a separate node because of high CPU, memory, and disk requirements. Prerequisites OpenShift Logging and Elasticsearch must be installed. These features are not installed by default. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc edit ClusterLogging instance apiVersion: logging.openshift.io/v1 kind: ClusterLogging ... spec: collection: logs: fluentd: resources: null type: fluentd logStore: elasticsearch: nodeCount: 3 nodeSelector: 1 node-role.kubernetes.io/infra: '' redundancyPolicy: SingleRedundancy resources: limits: cpu: 500m memory: 16Gi requests: cpu: 500m memory: 16Gi storage: {} type: elasticsearch managementState: Managed visualization: kibana: nodeSelector: 2 node-role.kubernetes.io/infra: '' proxy: resources: null replicas: 1 resources: null type: kibana ... 1 2 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. Verification To verify that a component has moved, you can use the oc get pod -o wide command. For example: You want to move the Kibana pod from the ip-10-0-147-79.us-east-2.compute.internal node: USD oc get pod kibana-5b8bdf44f9-ccpq9 -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-5b8bdf44f9-ccpq9 2/2 Running 0 27s 10.129.2.18 ip-10-0-147-79.us-east-2.compute.internal <none> <none> You want to move the Kibana pod to the ip-10-0-139-48.us-east-2.compute.internal node, a dedicated infrastructure node: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ip-10-0-133-216.us-east-2.compute.internal Ready master 60m v1.20.0 ip-10-0-139-146.us-east-2.compute.internal Ready master 60m v1.20.0 ip-10-0-139-192.us-east-2.compute.internal Ready worker 51m v1.20.0 ip-10-0-139-241.us-east-2.compute.internal Ready worker 51m v1.20.0 ip-10-0-147-79.us-east-2.compute.internal Ready worker 51m v1.20.0 ip-10-0-152-241.us-east-2.compute.internal Ready master 60m v1.20.0 ip-10-0-139-48.us-east-2.compute.internal Ready infra 51m v1.20.0 Note that the node has a node-role.kubernetes.io/infra: '' label: USD oc get node ip-10-0-139-48.us-east-2.compute.internal -o yaml Example output kind: Node apiVersion: v1 metadata: name: ip-10-0-139-48.us-east-2.compute.internal selfLink: /api/v1/nodes/ip-10-0-139-48.us-east-2.compute.internal uid: 62038aa9-661f-41d7-ba93-b5f1b6ef8751 resourceVersion: '39083' creationTimestamp: '2020-04-13T19:07:55Z' labels: node-role.kubernetes.io/infra: '' ... To move the Kibana pod, edit the ClusterLogging CR to add a node selector: apiVersion: logging.openshift.io/v1 kind: ClusterLogging ... spec: ... visualization: kibana: nodeSelector: 1 node-role.kubernetes.io/infra: '' proxy: resources: null replicas: 1 resources: null type: kibana 1 Add a node selector to match the label in the node specification.
After you save the CR, the current Kibana pod is terminated and a new pod is deployed: USD oc get pods Example output NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 29m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 28m fluentd-42dzz 1/1 Running 0 28m fluentd-d74rq 1/1 Running 0 28m fluentd-m5vr9 1/1 Running 0 28m fluentd-nkxl7 1/1 Running 0 28m fluentd-pdvqb 1/1 Running 0 28m fluentd-tflh6 1/1 Running 0 28m kibana-5b8bdf44f9-ccpq9 2/2 Terminating 0 4m11s kibana-7d85dcffc8-bfpfp 2/2 Running 0 33s The new pod is on the ip-10-0-139-48.us-east-2.compute.internal node: USD oc get pod kibana-7d85dcffc8-bfpfp -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-7d85dcffc8-bfpfp 2/2 Running 0 43s 10.131.0.22 ip-10-0-139-48.us-east-2.compute.internal <none> <none> After a few moments, the original Kibana pod is removed. USD oc get pods Example output NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 30m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 29m fluentd-42dzz 1/1 Running 0 29m fluentd-d74rq 1/1 Running 0 29m fluentd-m5vr9 1/1 Running 0 29m fluentd-nkxl7 1/1 Running 0 29m fluentd-pdvqb 1/1 Running 0 29m fluentd-tflh6 1/1 Running 0 29m kibana-7d85dcffc8-bfpfp 2/2 Running 0 62s 4.7. About the cluster autoscaler The cluster autoscaler adjusts the size of an OpenShift Container Platform cluster to meet its current deployment needs. It uses declarative, Kubernetes-style arguments to provide infrastructure management that does not rely on objects of a specific cloud provider. The cluster autoscaler has a cluster scope, and is not associated with a particular namespace. The cluster autoscaler increases the size of the cluster when there are pods that fail to schedule on any of the current worker nodes due to insufficient resources or when another node is necessary to meet deployment needs. The cluster autoscaler does not increase the cluster resources beyond the limits that you specify. The cluster autoscaler computes the total memory, CPU, and GPU on all nodes in the cluster, even though it does not manage the control plane nodes. These values are not single-machine oriented. They are an aggregation of all the resources in the entire cluster. For example, if you set the maximum memory resource limit, the cluster autoscaler includes all the nodes in the cluster when calculating the current memory usage. That calculation is then used to determine if the cluster autoscaler has the capacity to add more worker resources. Important Ensure that the maxNodesTotal value in the ClusterAutoscaler resource definition that you create is large enough to account for the total possible number of machines in your cluster. This value must encompass the number of control plane machines and the possible number of compute machines that you might scale to. Every 10 seconds, the cluster autoscaler checks which nodes are unnecessary in the cluster and removes them. The cluster autoscaler considers a node for removal if the following conditions apply: The sum of CPU and memory requests of all pods running on the node is less than 50% of the allocated resources on the node.
The cluster autoscaler can move all pods running on the node to the other nodes. The node does not have the cluster autoscaler scale-down-disabled annotation. If the following types of pods are present on a node, the cluster autoscaler will not remove the node: Pods with restrictive pod disruption budgets (PDBs). Kube-system pods that do not run on the node by default. Kube-system pods that do not have a PDB or have a PDB that is too restrictive. Pods that are not backed by a controller object such as a deployment, replica set, or stateful set. Pods with local storage. Pods that cannot be moved elsewhere because of a lack of resources, incompatible node selectors or affinity, matching anti-affinity, and so on. Pods that have a "cluster-autoscaler.kubernetes.io/safe-to-evict": "false" annotation; only pods annotated with "cluster-autoscaler.kubernetes.io/safe-to-evict": "true" are considered safe to evict. For example, you set the maximum CPU limit to 64 cores and configure the cluster autoscaler to only create machines that have 8 cores each. If your cluster starts with 30 cores, the cluster autoscaler can add up to 4 more nodes with 32 cores combined, for a total of 62 cores. If you configure the cluster autoscaler, additional usage restrictions apply: Do not modify the nodes that are in autoscaled node groups directly. All nodes within the same node group have the same capacity and labels and run the same system pods. Specify requests for your pods. If you have to prevent pods from being deleted too quickly, configure appropriate PDBs. Confirm that your cloud provider quota is large enough to support the maximum node pools that you configure. Do not run additional node group autoscalers, especially the ones offered by your cloud provider. The horizontal pod autoscaler (HPA) and the cluster autoscaler modify cluster resources in different ways. The HPA changes the deployment's or replica set's number of replicas based on the current CPU load. If the load increases, the HPA creates new replicas, regardless of the amount of resources available to the cluster. If there are not enough resources, the cluster autoscaler adds resources so that the HPA-created pods can run. If the load decreases, the HPA stops some replicas. If this action causes some nodes to be underutilized or completely empty, the cluster autoscaler deletes the unnecessary nodes. The cluster autoscaler takes pod priorities into account. The Pod Priority and Preemption feature enables scheduling pods based on priorities if the cluster does not have enough resources, but the cluster autoscaler ensures that the cluster has resources to run all pods. To honor the intention of both features, the cluster autoscaler includes a priority cutoff function. You can use this cutoff to schedule "best-effort" pods, which do not cause the cluster autoscaler to increase resources but instead run only when spare resources are available. Pods with priority lower than the cutoff value do not cause the cluster to scale up or prevent the cluster from scaling down. No new nodes are added to run the pods, and nodes running these pods might be deleted to free resources.
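For example, a PriorityClass whose value falls below the cutoff marks its pods as best-effort from the autoscaler's point of view. A minimal sketch, assuming the default podPriorityThreshold of -10 shown in the resource definition that follows; the class name is illustrative:

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: best-effort-autoscaling  # hypothetical name
value: -20                       # below the podPriorityThreshold of -10, so these pods never trigger a scale-up
globalDefault: false
description: "Pods at this priority run on spare capacity only and do not cause the cluster autoscaler to add nodes."

Pods that reference this class in their spec.priorityClassName schedule normally when capacity exists but are ignored by the autoscaler's scale-up logic.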
4.7.1. ClusterAutoscaler resource definition This ClusterAutoscaler resource definition shows the parameters and sample values for the cluster autoscaler. apiVersion: "autoscaling.openshift.io/v1" kind: "ClusterAutoscaler" metadata: name: "default" spec: podPriorityThreshold: -10 1 resourceLimits: maxNodesTotal: 24 2 cores: min: 8 3 max: 128 4 memory: min: 4 5 max: 256 6 gpus: - type: nvidia.com/gpu 7 min: 0 8 max: 16 9 - type: amd.com/gpu min: 0 max: 4 scaleDown: 10 enabled: true 11 delayAfterAdd: 10m 12 delayAfterDelete: 5m 13 delayAfterFailure: 30s 14 unneededTime: 5m 15 1 Specify the priority that a pod must exceed to cause the cluster autoscaler to deploy additional nodes. Enter a 32-bit integer value. The podPriorityThreshold value is compared to the value of the PriorityClass that you assign to each pod. 2 Specify the maximum number of nodes to deploy. This value is the total number of machines that are deployed in your cluster, not just the ones that the autoscaler controls. Ensure that this value is large enough to account for all of your control plane and compute machines and the total number of replicas that you specify in your MachineAutoscaler resources. 3 Specify the minimum number of cores to deploy in the cluster. 4 Specify the maximum number of cores to deploy in the cluster. 5 Specify the minimum amount of memory, in GiB, in the cluster. 6 Specify the maximum amount of memory, in GiB, in the cluster. 7 Optionally, specify the type of GPU node to deploy. Only nvidia.com/gpu and amd.com/gpu are valid types. 8 Specify the minimum number of GPUs to deploy in the cluster. 9 Specify the maximum number of GPUs to deploy in the cluster. 10 In this section, you can specify the period to wait for each action by using any valid ParseDuration interval, including ns, us, ms, s, m, and h. 11 Specify whether the cluster autoscaler can remove unnecessary nodes. 12 Optionally, specify the period to wait before deleting a node after a node has recently been added. If you do not specify a value, the default value of 10m is used. 13 Specify the period to wait before deleting a node after a node has recently been deleted. If you do not specify a value, the default value of 10s is used. 14 Specify the period to wait before deleting a node after a scale down failure occurred. If you do not specify a value, the default value of 3m is used. 15 Specify the period before an unnecessary node is eligible for deletion. If you do not specify a value, the default value of 10m is used. Note When performing a scaling operation, the cluster autoscaler remains within the ranges set in the ClusterAutoscaler resource definition, such as the minimum and maximum number of cores to deploy or the amount of memory in the cluster. However, the cluster autoscaler does not correct the current values in your cluster to be within those ranges. The minimum and maximum CPUs, memory, and GPU values are determined by calculating those resources on all nodes in the cluster, even if the cluster autoscaler does not manage the nodes. For example, the control plane nodes are considered in the total memory in the cluster, even though the cluster autoscaler does not manage the control plane nodes. 4.7.2. Deploying the cluster autoscaler To deploy the cluster autoscaler, you create an instance of the ClusterAutoscaler resource. Procedure Create a YAML file for the ClusterAutoscaler resource that contains the customized resource definition. Create the resource in the cluster: $ oc create -f <filename>.yaml 1 1 <filename> is the name of the resource file that you customized.
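After the resource is created, you can confirm that the autoscaler is active. A minimal check, assuming the default name used in the example above; the autoscaler pod itself runs in the openshift-machine-api project:

$ oc get clusterautoscaler default
$ oc get pods -n openshift-machine-api | grep cluster-autoscaler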
4.8. About the machine autoscaler The machine autoscaler adjusts the number of machines in the machine sets that you deploy in an OpenShift Container Platform cluster. You can scale both the default worker machine set and any other machine sets that you create. The machine autoscaler creates more machines when the cluster runs out of resources to support more deployments. Any changes to the values in MachineAutoscaler resources, such as the minimum or maximum number of instances, are immediately applied to the machine set they target. Important You must deploy a machine autoscaler for the cluster autoscaler to scale your machines. The cluster autoscaler uses the annotations on machine sets that the machine autoscaler sets to determine the resources that it can scale. If you define a cluster autoscaler without also defining machine autoscalers, the cluster autoscaler will never scale your cluster. 4.8.1. MachineAutoscaler resource definition This MachineAutoscaler resource definition shows the parameters and sample values for the machine autoscaler. apiVersion: "autoscaling.openshift.io/v1beta1" kind: "MachineAutoscaler" metadata: name: "worker-us-east-1a" 1 namespace: "openshift-machine-api" spec: minReplicas: 1 2 maxReplicas: 12 3 scaleTargetRef: 4 apiVersion: machine.openshift.io/v1beta1 kind: MachineSet 5 name: worker-us-east-1a 6 1 Specify the machine autoscaler name. To make it easier to identify which machine set this machine autoscaler scales, specify or include the name of the machine set to scale. The machine set name takes the following form: <clusterid>-<machineset>-<region>. 2 Specify the minimum number of machines of the specified type that must remain in the specified zone after the cluster autoscaler initiates cluster scaling. If running in AWS, GCP, Azure, or RHOSP, this value can be set to 0. For other providers, do not set this value to 0. You can save on costs by setting this value to 0 for use cases such as running expensive or limited-usage hardware that is used for specialized workloads, or by scaling a machine set with extra large machines. The cluster autoscaler scales the machine set down to zero if the machines are not in use. Important Do not set the spec.minReplicas value to 0 for the three compute machine sets that are created during the OpenShift Container Platform installation process for an installer provisioned infrastructure. 3 Specify the maximum number of machines of the specified type that the cluster autoscaler can deploy in the specified zone after it initiates cluster scaling. Ensure that the maxNodesTotal value in the ClusterAutoscaler resource definition is large enough to allow the machine autoscaler to deploy this number of machines. 4 In this section, provide values that describe the existing machine set to scale. 5 The kind parameter value is always MachineSet. 6 The name value must match the name of an existing machine set, as shown in the metadata.name parameter value. 4.8.2. Deploying the machine autoscaler To deploy the machine autoscaler, you create an instance of the MachineAutoscaler resource. Procedure Create a YAML file for the MachineAutoscaler resource that contains the customized resource definition. Create the resource in the cluster: $ oc create -f <filename>.yaml 1 1 <filename> is the name of the resource file that you customized.
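You can list the machine autoscalers that you created and the machine sets they target to confirm the wiring. A minimal check:

$ oc get machineautoscaler -n openshift-machine-api

The cluster autoscaler discovers these targets through annotations that the machine autoscaler writes to the machine sets, so this listing is a quick way to verify that scaling is configured before testing a scale-up.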
4.9. Enabling Technology Preview features using FeatureGates You can turn on a subset of the current Technology Preview features for all nodes in the cluster by editing the FeatureGate custom resource (CR). 4.9.1. Understanding feature gates You can use the FeatureGate custom resource (CR) to enable specific feature sets in your cluster. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. You can activate the following feature set by using the FeatureGate CR: IPv6DualStackNoUpgrade. This feature gate enables the dual-stack networking mode in your cluster. Dual-stack networking supports the use of IPv4 and IPv6 simultaneously. Enabling this feature set is not supported, cannot be undone, and prevents updates. This feature set is not recommended on production clusters. 4.9.2. Enabling feature sets using the CLI You can use the OpenShift CLI (oc) to enable feature sets for all of the nodes in a cluster by editing the FeatureGate custom resource (CR). Prerequisites You have installed the OpenShift CLI (oc). Procedure To enable feature sets: Edit the FeatureGate CR named cluster: $ oc edit featuregate cluster Sample FeatureGate custom resource apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 spec: featureSet: IPv6DualStackNoUpgrade 2 1 The name of the FeatureGate CR must be cluster. 2 Add the IPv6DualStackNoUpgrade feature set to enable the dual-stack networking mode. After you save the changes, new machine configs are created, the machine config pools are updated, and scheduling on each node is disabled while the change is being applied. Note Enabling the IPv6DualStackNoUpgrade feature set cannot be undone and prevents updates. This feature set is not recommended on production clusters. Verification You can verify that the feature gates are enabled by looking at the kubelet.conf file on a node after the nodes return to the ready state. Start a debug session for a node: $ oc debug node/<node_name> Change your root directory to the host: sh-4.2# chroot /host View the kubelet.conf file: sh-4.2# cat /etc/kubernetes/kubelet.conf Sample output ... featureGates: InsightsOperatorPullingSCA: true LegacyNodeRoleBehavior: false ... The features that are listed as true are enabled on your cluster. Note The features listed vary depending upon the OpenShift Container Platform version. 4.10. etcd tasks Back up etcd, enable or disable etcd encryption, or defragment etcd data. 4.10.1. About etcd encryption By default, etcd data is not encrypted in OpenShift Container Platform. You can enable etcd encryption for your cluster to provide an additional layer of data security. For example, it can help protect against the loss of sensitive data if an etcd backup is exposed to the incorrect parties. When you enable etcd encryption, the following OpenShift API server and Kubernetes API server resources are encrypted: Secrets Config maps Routes OAuth access tokens OAuth authorize tokens When you enable etcd encryption, encryption keys are created. These keys are rotated on a weekly basis. You must have these keys to restore from an etcd backup. Note Keep in mind that etcd encryption only encrypts values, not keys. This means that resource types, namespaces, and object names are unencrypted. 4.10.2. Enabling etcd encryption You can enable etcd encryption to encrypt sensitive resources in your cluster. Warning It is not recommended to take a backup of etcd until the initial encryption process is complete. If the encryption process has not completed, the backup might be only partially encrypted. Prerequisites Access to the cluster as a user with the cluster-admin role.
Procedure Modify the APIServer object: $ oc edit apiserver Set the encryption field type to aescbc: spec: encryption: type: aescbc 1 1 The aescbc type means that AES-CBC with PKCS#7 padding and a 32-byte key is used to perform the encryption. Save the file to apply the changes. The encryption process starts. It can take 20 minutes or longer for this process to complete, depending on the size of your cluster. Verify that etcd encryption was successful. Review the Encrypted status condition for the OpenShift API server to verify that its resources were successfully encrypted: $ oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}' The output shows EncryptionCompleted upon successful encryption: EncryptionCompleted All resources encrypted: routes.route.openshift.io If the output shows EncryptionInProgress, encryption is still in progress. Wait a few minutes and try again. Review the Encrypted status condition for the Kubernetes API server to verify that its resources were successfully encrypted: $ oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}' The output shows EncryptionCompleted upon successful encryption: EncryptionCompleted All resources encrypted: secrets, configmaps If the output shows EncryptionInProgress, encryption is still in progress. Wait a few minutes and try again. Review the Encrypted status condition for the OpenShift OAuth API server to verify that its resources were successfully encrypted: $ oc get authentication.operator.openshift.io -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}' The output shows EncryptionCompleted upon successful encryption: EncryptionCompleted All resources encrypted: oauthaccesstokens.oauth.openshift.io, oauthauthorizetokens.oauth.openshift.io If the output shows EncryptionInProgress, encryption is still in progress. Wait a few minutes and try again. 4.10.3. Disabling etcd encryption You can disable encryption of etcd data in your cluster. Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Modify the APIServer object: $ oc edit apiserver Set the encryption field type to identity: spec: encryption: type: identity 1 1 The identity type is the default value and means that no encryption is performed. Save the file to apply the changes. The decryption process starts. It can take 20 minutes or longer for this process to complete, depending on the size of your cluster. Verify that etcd decryption was successful. Review the Encrypted status condition for the OpenShift API server to verify that its resources were successfully decrypted: $ oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}' The output shows DecryptionCompleted upon successful decryption: DecryptionCompleted Encryption mode set to identity and everything is decrypted If the output shows DecryptionInProgress, decryption is still in progress. Wait a few minutes and try again.
Review the Encrypted status condition for the Kubernetes API server to verify that its resources were successfully decrypted: $ oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}' The output shows DecryptionCompleted upon successful decryption: DecryptionCompleted Encryption mode set to identity and everything is decrypted If the output shows DecryptionInProgress, decryption is still in progress. Wait a few minutes and try again. Review the Encrypted status condition for the OpenShift OAuth API server to verify that its resources were successfully decrypted: $ oc get authentication.operator.openshift.io -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}' The output shows DecryptionCompleted upon successful decryption: DecryptionCompleted Encryption mode set to identity and everything is decrypted If the output shows DecryptionInProgress, decryption is still in progress. Wait a few minutes and try again. 4.10.4. Backing up etcd data Follow these steps to back up etcd data by creating an etcd snapshot and backing up the resources for the static pods. This backup can be saved and used at a later time if you need to restore etcd. Important Only save a backup from a single control plane host (also known as the master host). Do not take a backup from each control plane host in the cluster. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have checked whether the cluster-wide proxy is enabled. Tip You can check whether the proxy is enabled by reviewing the output of oc get proxy cluster -o yaml. The proxy is enabled if the httpProxy, httpsProxy, and noProxy fields have values set. Procedure Start a debug session for a control plane node: $ oc debug node/<node_name> Change your root directory to the host: sh-4.2# chroot /host If the cluster-wide proxy is enabled, be sure that you have exported the NO_PROXY, HTTP_PROXY, and HTTPS_PROXY environment variables. Run the cluster-backup.sh script and pass in the location to save the backup to. Tip The cluster-backup.sh script is maintained as a component of the etcd Cluster Operator and is a wrapper around the etcdctl snapshot save command.
sh-4.4# /usr/local/bin/cluster-backup.sh /home/core/assets/backup Example script output found latest kube-apiserver: /etc/kubernetes/static-pod-resources/kube-apiserver-pod-6 found latest kube-controller-manager: /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-7 found latest kube-scheduler: /etc/kubernetes/static-pod-resources/kube-scheduler-pod-6 found latest etcd: /etc/kubernetes/static-pod-resources/etcd-pod-3 ede95fe6b88b87ba86a03c15e669fb4aa5bf0991c180d3c6895ce72eaade54a1 etcdctl version: 3.4.14 API version: 3.4 {"level":"info","ts":1624647639.0188997,"caller":"snapshot/v3_snapshot.go:119","msg":"created temporary db file","path":"/home/core/assets/backup/snapshot_2021-06-25_190035.db.part"} {"level":"info","ts":"2021-06-25T19:00:39.030Z","caller":"clientv3/maintenance.go:200","msg":"opened snapshot stream; downloading"} {"level":"info","ts":1624647639.0301006,"caller":"snapshot/v3_snapshot.go:127","msg":"fetching snapshot","endpoint":"https://10.0.0.5:2379"} {"level":"info","ts":"2021-06-25T19:00:40.215Z","caller":"clientv3/maintenance.go:208","msg":"completed snapshot read; closing"} {"level":"info","ts":1624647640.6032252,"caller":"snapshot/v3_snapshot.go:142","msg":"fetched snapshot","endpoint":"https://10.0.0.5:2379","size":"114 MB","took":1.584090459} {"level":"info","ts":1624647640.6047094,"caller":"snapshot/v3_snapshot.go:152","msg":"saved","path":"/home/core/assets/backup/snapshot_2021-06-25_190035.db"} Snapshot saved at /home/core/assets/backup/snapshot_2021-06-25_190035.db {"hash":3866667823,"revision":31407,"totalKey":12828,"totalSize":114446336} snapshot db and kube resources are successfully saved to /home/core/assets/backup In this example, two files are created in the /home/core/assets/backup/ directory on the control plane host: snapshot_<datetimestamp>.db : This file is the etcd snapshot. The cluster-backup.sh script confirms its validity. static_kuberesources_<datetimestamp>.tar.gz : This file contains the resources for the static pods. If etcd encryption is enabled, it also contains the encryption keys for the etcd snapshot. Note If etcd encryption is enabled, it is recommended to store this second file separately from the etcd snapshot for security reasons. However, this file is required to restore from the etcd snapshot. Keep in mind that etcd encryption only encrypts values, not keys. This means that resource types, namespaces, and object names are unencrypted. 4.10.5. Defragmenting etcd data Manual defragmentation must be performed periodically to reclaim disk space after etcd history compaction and other events cause disk fragmentation. History compaction is performed automatically every five minutes and leaves gaps in the back-end database. This fragmented space is available for use by etcd, but is not available to the host file system. You must defragment etcd to make this space available to the host file system. Because etcd writes data to disk, its performance strongly depends on disk performance. Consider defragmenting etcd every month, twice a month, or as needed for your cluster. You can also monitor the etcd_db_total_size_in_bytes metric to determine whether defragmentation is necessary. Warning Defragmenting etcd is a blocking action. The etcd member will not respond until defragmentation is complete. For this reason, wait at least one minute between defragmentation actions on each of the pods to allow the cluster to recover. Follow this procedure to defragment etcd data on each etcd member.
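As a rough gauge before you begin, you can inspect that metric from the cluster monitoring stack. A minimal sketch using a PromQL query in the web console metrics UI; the interpretation threshold is a judgment call for your cluster:

# on-disk database size per etcd member, in bytes
etcd_db_total_size_in_bytes

If this value stays large after history compaction, defragmentation is likely to reclaim space.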
Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Determine which etcd member is the leader, because the leader should be defragmented last. Get the list of etcd pods: $ oc get pods -n openshift-etcd -o wide | grep -v quorum-guard | grep etcd Example output etcd-ip-10-0-159-225.example.redhat.com 3/3 Running 0 175m 10.0.159.225 ip-10-0-159-225.example.redhat.com <none> <none> etcd-ip-10-0-191-37.example.redhat.com 3/3 Running 0 173m 10.0.191.37 ip-10-0-191-37.example.redhat.com <none> <none> etcd-ip-10-0-199-170.example.redhat.com 3/3 Running 0 176m 10.0.199.170 ip-10-0-199-170.example.redhat.com <none> <none> Choose a pod and run the following command to determine which etcd member is the leader: $ oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com etcdctl endpoint status --cluster -w table Example output Defaulting container name to etcdctl. Use 'oc describe pod/etcd-ip-10-0-159-225.example.redhat.com -n openshift-etcd' to see all of the containers in this pod. +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.4.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.4.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.4.9 | 104 MB | true | false | 7 | 91624 | 91624 | | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ Based on the IS LEADER column of this output, the https://10.0.199.170:2379 endpoint is the leader. Matching this endpoint with the output of the previous step, the pod name of the leader is etcd-ip-10-0-199-170.example.redhat.com. Defragment an etcd member. Connect to the running etcd container, passing in the name of a pod that is not the leader: $ oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com Unset the ETCDCTL_ENDPOINTS environment variable: sh-4.4# unset ETCDCTL_ENDPOINTS Defragment the etcd member: sh-4.4# etcdctl --command-timeout=30s --endpoints=https://localhost:2379 defrag Example output Finished defragmenting etcd member[https://localhost:2379] If a timeout error occurs, increase the value for --command-timeout until the command succeeds.
Verify that the database size was reduced: sh-4.4# etcdctl endpoint status -w table --cluster Example output +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.4.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.4.9 | 41 MB | false | false | 7 | 91624 | 91624 | | 1 | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.4.9 | 104 MB | true | false | 7 | 91624 | 91624 | | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ This example shows that the database size for this etcd member is now 41 MB as opposed to the starting size of 104 MB. Repeat these steps to connect to each of the other etcd members and defragment them. Always defragment the leader last. Wait at least one minute between defragmentation actions to allow the etcd pod to recover. Until the etcd pod recovers, the etcd member will not respond. If any NOSPACE alarms were triggered due to the space quota being exceeded, clear them. Check if there are any NOSPACE alarms: sh-4.4# etcdctl alarm list Example output memberID:12345678912345678912 alarm:NOSPACE Clear the alarms: sh-4.4# etcdctl alarm disarm 4.10.6. Restoring to a cluster state You can use a saved etcd backup to restore a cluster state or restore a cluster that has lost the majority of control plane hosts (also known as the master hosts). Important When you restore your cluster, you must use an etcd backup that was taken from the same z-stream release. For example, an OpenShift Container Platform 4.7.2 cluster must use an etcd backup that was taken from 4.7.2. Prerequisites Access to the cluster as a user with the cluster-admin role. A healthy control plane host to use as the recovery host. SSH access to control plane hosts. A backup directory containing both the etcd snapshot and the resources for the static pods, which were from the same backup. The file names in the directory must be in the following formats: snapshot_<datetimestamp>.db and static_kuberesources_<datetimestamp>.tar.gz . Important For non-recovery control plane nodes, it is not required to establish SSH connectivity or to stop the static pods. You can delete and recreate other non-recovery, control plane machines, one by one. Procedure Select a control plane host to use as the recovery host. This is the host that you will run the restore operation on. Establish SSH connectivity to each of the control plane nodes, including the recovery host. The Kubernetes API server becomes inaccessible after the restore process starts, so you cannot access the control plane nodes. For this reason, it is recommended to establish SSH connectivity to each control plane host in a separate terminal. Important If you do not complete this step, you will not be able to access the control plane hosts to complete the restore procedure, and you will be unable to recover your cluster from this state. Copy the etcd backup directory to the recovery control plane host. 
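For example, you might copy the backup directory over SSH from wherever it is stored; the local path and host name here are illustrative:

$ scp -r ./backup core@<recovery_host>:/home/core/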
This procedure assumes that you copied the backup directory containing the etcd snapshot and the resources for the static pods to the /home/core/ directory of your recovery control plane host. Stop the static pods on any other control plane nodes. Note It is not required to manually stop the pods on the recovery host. The recovery script will stop the pods on the recovery host. Access a control plane host that is not the recovery host. Move the existing etcd pod file out of the kubelet manifest directory: $ sudo mv /etc/kubernetes/manifests/etcd-pod.yaml /tmp Verify that the etcd pods are stopped. $ sudo crictl ps | grep etcd | grep -v operator The output of this command should be empty. If it is not empty, wait a few minutes and check again. Move the existing Kubernetes API server pod file out of the kubelet manifest directory: $ sudo mv /etc/kubernetes/manifests/kube-apiserver-pod.yaml /tmp Verify that the Kubernetes API server pods are stopped. $ sudo crictl ps | grep kube-apiserver | grep -v operator The output of this command should be empty. If it is not empty, wait a few minutes and check again. Move the etcd data directory to a different location: $ sudo mv /var/lib/etcd/ /tmp Repeat this step on each of the other control plane hosts that are not the recovery host. Access the recovery control plane host. If the cluster-wide proxy is enabled, be sure that you have exported the NO_PROXY, HTTP_PROXY, and HTTPS_PROXY environment variables. Tip You can check whether the proxy is enabled by reviewing the output of oc get proxy cluster -o yaml. The proxy is enabled if the httpProxy, httpsProxy, and noProxy fields have values set. Run the restore script on the recovery control plane host and pass in the path to the etcd backup directory: $ sudo -E /usr/local/bin/cluster-restore.sh /home/core/backup Example script output ...stopping kube-scheduler-pod.yaml ...stopping kube-controller-manager-pod.yaml ...stopping etcd-pod.yaml ...stopping kube-apiserver-pod.yaml Waiting for container etcd to stop .complete Waiting for container etcdctl to stop .............................complete Waiting for container etcd-metrics to stop complete Waiting for container kube-controller-manager to stop complete Waiting for container kube-apiserver to stop ..........................................................................................complete Waiting for container kube-scheduler to stop complete Moving etcd data-dir /var/lib/etcd/member to /var/lib/etcd-backup starting restore-etcd static pod starting kube-apiserver-pod.yaml static-pod-resources/kube-apiserver-pod-7/kube-apiserver-pod.yaml starting kube-controller-manager-pod.yaml static-pod-resources/kube-controller-manager-pod-7/kube-controller-manager-pod.yaml starting kube-scheduler-pod.yaml static-pod-resources/kube-scheduler-pod-8/kube-scheduler-pod.yaml Note The restore process can cause nodes to enter the NotReady state if the node certificates were updated after the last etcd backup. Check the nodes to ensure they are in the Ready state.
Run the following command: $ oc get nodes -w Sample output NAME STATUS ROLES AGE VERSION host-172-25-75-28 Ready master 3d20h v1.23.3+e419edf host-172-25-75-38 Ready infra,worker 3d20h v1.23.3+e419edf host-172-25-75-40 Ready master 3d20h v1.23.3+e419edf host-172-25-75-65 Ready master 3d20h v1.23.3+e419edf host-172-25-75-74 Ready infra,worker 3d20h v1.23.3+e419edf host-172-25-75-79 Ready worker 3d20h v1.23.3+e419edf host-172-25-75-86 Ready worker 3d20h v1.23.3+e419edf host-172-25-75-98 Ready infra,worker 3d20h v1.23.3+e419edf It can take several minutes for all nodes to report their state. If any nodes are in the NotReady state, log in to the nodes and remove all of the PEM files from the /var/lib/kubelet/pki directory on each node. You can SSH into the nodes or use the terminal window in the web console. $ ssh -i <ssh-key-path> core@<master-hostname> Restart the kubelet service on all control plane hosts. From the recovery host, run the following command: $ sudo systemctl restart kubelet.service Repeat this step on all other control plane hosts. Approve the pending CSRs: Get the list of current CSRs: $ oc get csr The output lists the current CSRs, including pending node-bootstrapper CSRs and, for user-provisioned installations, pending kubelet service CSRs. Review the details of a CSR to verify that it is valid: $ oc describe csr <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. Approve each valid node-bootstrapper CSR: $ oc adm certificate approve <csr_name> For user-provisioned installations, approve each valid kubelet service CSR: $ oc adm certificate approve <csr_name> Verify that the single member control plane has started successfully. From the recovery host, verify that the etcd container is running. $ sudo crictl ps | grep etcd | grep -v operator Example output 3ad41b7908e32 36f86e2eeaaffe662df0d21041eb22b8198e0e58abeeae8c743c3e6e977e8009 About a minute ago Running etcd 0 7c05f8af362f0 From the recovery host, verify that the etcd pod is running. $ oc get pods -n openshift-etcd | grep -v etcd-quorum-guard | grep etcd Note If you attempt to run oc login prior to running this command and receive the following error, wait a few moments for the authentication controllers to start and try again. Unable to connect to the server: EOF Example output NAME READY STATUS RESTARTS AGE etcd-ip-10-0-143-125.ec2.internal 1/1 Running 1 2m47s If the status is Pending, or the output lists more than one running etcd pod, wait a few minutes and check again. Repeat this step for each lost control plane host that is not the recovery host. Delete and recreate other non-recovery, control plane machines, one by one. After these machines are recreated, a new revision is forced and etcd scales up automatically. If you are running installer-provisioned infrastructure, or you used the Machine API to create your machines, follow these steps. Otherwise, you must create the new control plane node using the same method that was used to originally create it. Warning Do not delete and recreate the machine for the recovery host. Obtain the machine for one of the lost control plane hosts.
In a terminal that has access to the cluster as a cluster-admin user, run the following command: $ oc get machines -n openshift-machine-api -o wide Example output: NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-0 Running m4.xlarge us-east-1 us-east-1a 3h37m ip-10-0-131-183.ec2.internal aws:///us-east-1a/i-0ec2782f8287dfb7e stopped 1 clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running 1 This is the control plane machine for the lost control plane host, ip-10-0-131-183.ec2.internal. Save the machine configuration to a file on your file system: $ oc get machine clustername-8qw5l-master-0 \ 1 -n openshift-machine-api \ -o yaml \ > new-master-machine.yaml 1 Specify the name of the control plane machine for the lost control plane host. Edit the new-master-machine.yaml file that was created in the previous step to assign a new name and remove unnecessary fields. Remove the entire status section: status: addresses: - address: 10.0.131.183 type: InternalIP - address: ip-10-0-131-183.ec2.internal type: InternalDNS - address: ip-10-0-131-183.ec2.internal type: Hostname lastUpdated: "2020-04-20T17:44:29Z" nodeRef: kind: Node name: ip-10-0-131-183.ec2.internal uid: acca4411-af0d-4387-b73e-52b2484295ad phase: Running providerStatus: apiVersion: awsproviderconfig.openshift.io/v1beta1 conditions: - lastProbeTime: "2020-04-20T16:53:50Z" lastTransitionTime: "2020-04-20T16:53:50Z" message: machine successfully created reason: MachineCreationSucceeded status: "True" type: MachineCreation instanceId: i-0fdb85790d76d0c3f instanceState: stopped kind: AWSMachineProviderStatus Change the metadata.name field to a new name. It is recommended to keep the same base name as the old machine and change the ending number to the available number. In this example, clustername-8qw5l-master-0 is changed to clustername-8qw5l-master-3: apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: ... name: clustername-8qw5l-master-3 ... Remove the spec.providerID field: providerID: aws:///us-east-1a/i-0fdb85790d76d0c3f Remove the metadata.annotations and metadata.generation fields: annotations: machine.openshift.io/instance-state: running ... generation: 2 Remove the metadata.resourceVersion and metadata.uid fields: resourceVersion: "13291" uid: a282eb70-40a2-4e89-8009-d05dd420d31a Delete the machine of the lost control plane host: $ oc delete machine -n openshift-machine-api clustername-8qw5l-master-0 1 1 Specify the name of the control plane machine for the lost control plane host.
Verify that the machine was deleted: $ oc get machines -n openshift-machine-api -o wide Example output: NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running Create the new machine using the new-master-machine.yaml file: $ oc apply -f new-master-machine.yaml Verify that the new machine has been created: $ oc get machines -n openshift-machine-api -o wide Example output: NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-master-3 Provisioning m4.xlarge us-east-1 us-east-1a 85s ip-10-0-173-171.ec2.internal aws:///us-east-1a/i-015b0888fe17bc2c8 running 1 clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running 1 The new machine, clustername-8qw5l-master-3, is being created and is ready after the phase changes from Provisioning to Running. It might take a few minutes for the new machine to be created. The etcd cluster Operator will automatically sync when the machine or node returns to a healthy state. Repeat these steps for each lost control plane host that is not the recovery host. In a separate terminal window, log in to the cluster as a user with the cluster-admin role by using the following command: $ oc login -u <cluster_admin> 1 1 For <cluster_admin>, specify a user name with the cluster-admin role. Force etcd redeployment. In a terminal that has access to the cluster as a cluster-admin user, run the following command: $ oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge 1 1 The forceRedeploymentReason value must be unique, which is why a timestamp is appended. When the etcd cluster Operator performs a redeployment, the existing nodes are started with new pods similar to the initial bootstrap scale up. Verify all nodes are updated to the latest revision.
In a terminal that has access to the cluster as a cluster-admin user, run the following command: $ oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}' Review the NodeInstallerProgressing status condition for etcd to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update: AllNodesAtLatestRevision 3 nodes are at revision 7 1 1 In this example, the latest revision number is 7. If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7, this means that the update is still in progress. Wait a few minutes and try again. After etcd is redeployed, force new rollouts for the control plane. The Kubernetes API server will reinstall itself on the other nodes because the kubelet is connected to API servers using an internal load balancer. In a terminal that has access to the cluster as a cluster-admin user, run the following commands. Force a new rollout for the Kubernetes API server: $ oc patch kubeapiserver cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge Verify all nodes are updated to the latest revision. $ oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}' Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update: AllNodesAtLatestRevision 3 nodes are at revision 7 1 1 In this example, the latest revision number is 7. If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7, this means that the update is still in progress. Wait a few minutes and try again. Force a new rollout for the Kubernetes controller manager: $ oc patch kubecontrollermanager cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge Verify all nodes are updated to the latest revision. $ oc get kubecontrollermanager -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}' Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update: AllNodesAtLatestRevision 3 nodes are at revision 7 1 1 In this example, the latest revision number is 7. If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7, this means that the update is still in progress. Wait a few minutes and try again. Force a new rollout for the Kubernetes scheduler: $ oc patch kubescheduler cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge Verify all nodes are updated to the latest revision. $ oc get kubescheduler -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}' Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update: AllNodesAtLatestRevision 3 nodes are at revision 7 1 1 In this example, the latest revision number is 7.
If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7, this means that the update is still in progress. Wait a few minutes and try again. Verify that all control plane hosts have started and joined the cluster. In a terminal that has access to the cluster as a cluster-admin user, run the following command: $ oc get pods -n openshift-etcd | grep -v etcd-quorum-guard | grep etcd Example output etcd-ip-10-0-143-125.ec2.internal 2/2 Running 0 9h etcd-ip-10-0-154-194.ec2.internal 2/2 Running 0 9h etcd-ip-10-0-173-171.ec2.internal 2/2 Running 0 9h To ensure that all workloads return to normal operation following a recovery procedure, restart each pod that stores Kubernetes API information. This includes OpenShift Container Platform components such as routers, Operators, and third-party components. Note that it might take several minutes after completing this procedure for all services to be restored. For example, authentication by using oc login might not immediately work until the OAuth server pods are restarted. 4.10.7. Issues and workarounds for restoring a persistent storage state If your OpenShift Container Platform cluster uses persistent storage of any form, a state of the cluster is typically stored outside etcd. It might be an Elasticsearch cluster running in a pod or a database running in a StatefulSet object. When you restore from an etcd backup, the status of the workloads in OpenShift Container Platform is also restored. However, if the etcd snapshot is old, the status might be invalid or outdated. Important The contents of persistent volumes (PVs) are never part of the etcd snapshot. When you restore an OpenShift Container Platform cluster from an etcd snapshot, non-critical workloads might gain access to critical data, or vice-versa. The following are some example scenarios that produce an out-of-date status: MySQL database is running in a pod backed up by a PV object. Restoring OpenShift Container Platform from an etcd snapshot does not bring back the volume on the storage provider, and does not produce a running MySQL pod, despite the pod repeatedly attempting to start. You must manually restore this pod by restoring the volume on the storage provider, and then editing the PV to point to the new volume. Pod P1 is using volume A, which is attached to node X. If the etcd snapshot is taken while another pod uses the same volume on node Y, then when the etcd restore is performed, pod P1 might not be able to start correctly due to the volume still being attached to node Y. OpenShift Container Platform is not aware of the attachment, and does not automatically detach it. When this occurs, the volume must be manually detached from node Y so that the volume can attach on node X, and then pod P1 can start. Cloud provider or storage provider credentials were updated after the etcd snapshot was taken. This causes any CSI drivers or Operators that depend on those credentials to not work. You might have to manually update the credentials required by those drivers or Operators. A device is removed or renamed from OpenShift Container Platform nodes after the etcd snapshot is taken. The Local Storage Operator creates symlinks for each PV that it manages from /dev/disk/by-id or /dev directories. This situation might cause the local PVs to refer to devices that no longer exist. To fix this problem, an administrator must: Manually remove the PVs with invalid devices. Remove symlinks from respective nodes.
Delete LocalVolume or LocalVolumeSet objects (see Deleting the Local Storage Operator in the Storage documentation, under Configuring persistent storage, Persistent storage using local volumes). 4.11. Pod disruption budgets Understand and configure pod disruption budgets. 4.11.1. Understanding how to use pod disruption budgets to specify the number of pods that must be up A pod disruption budget is part of the Kubernetes API, which can be managed with oc commands like other object types. They allow the specification of safety constraints on pods during operations, such as draining a node for maintenance. PodDisruptionBudget is an API object that specifies the minimum number or percentage of replicas that must be up at a time. Setting these in projects can be helpful during node maintenance (such as scaling a cluster down or a cluster upgrade) and is only honored on voluntary evictions (not on node failures). A PodDisruptionBudget object's configuration consists of the following key parts: A label selector, which is a label query over a set of pods. An availability level, which specifies the minimum number of pods that must be available simultaneously, either: minAvailable is the number of pods that must always be available, even during a disruption. maxUnavailable is the number of pods that can be unavailable during a disruption. Note A maxUnavailable of 0% or 0 or a minAvailable of 100% or equal to the number of replicas is permitted but can block nodes from being drained. You can check for pod disruption budgets across all projects with the following: $ oc get poddisruptionbudget --all-namespaces Example output NAMESPACE NAME MIN-AVAILABLE SELECTOR another-project another-pdb 4 bar=foo test-project my-pdb 2 foo=bar The PodDisruptionBudget is considered healthy when there are at least minAvailable pods running in the system. Every pod above that limit can be evicted. Note Depending on your pod priority and preemption settings, lower-priority pods might be removed despite their pod disruption budget requirements. 4.11.2. Specifying the number of pods that must be up with pod disruption budgets You can use a PodDisruptionBudget object to specify the minimum number or percentage of replicas that must be up at a time. Procedure To configure a pod disruption budget: Create a YAML file with an object definition similar to the following: apiVersion: policy/v1beta1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: minAvailable: 2 2 selector: 3 matchLabels: foo: bar 1 PodDisruptionBudget is part of the policy/v1beta1 API group. 2 The minimum number of pods that must be available simultaneously. This can be either an integer or a string specifying a percentage, for example, 20%. 3 A label query over a set of resources. The result of matchLabels and matchExpressions are logically conjoined. Or: apiVersion: policy/v1beta1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: maxUnavailable: 25% 2 selector: 3 matchLabels: foo: bar 1 PodDisruptionBudget is part of the policy/v1beta1 API group. 2 The maximum number of pods that can be unavailable simultaneously. This can be either an integer or a string specifying a percentage, for example, 20%. 3 A label query over a set of resources. The result of matchLabels and matchExpressions are logically conjoined. Run the following command to add the object to the project: $ oc create -f </path/to/file> -n <project_name>
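After the object is created, you can check how many voluntary disruptions the budget currently allows. A minimal check, assuming the my-pdb example above:

$ oc get poddisruptionbudget my-pdb -n <project_name> -o jsonpath='{.status.disruptionsAllowed}'

A value of 0 means that a node drain would currently be blocked for the selected pods.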
4.12. Rotating or removing cloud provider credentials After installing OpenShift Container Platform, some organizations require the rotation or removal of the cloud provider credentials that were used during the initial installation. To allow the cluster to use the new credentials, you must update the secrets that the Cloud Credential Operator (CCO) uses to manage cloud provider credentials. 4.12.1. Rotating cloud provider credentials manually If your cloud provider credentials are changed for any reason, you must manually update the secret that the Cloud Credential Operator (CCO) uses to manage cloud provider credentials. The process for rotating cloud credentials depends on the mode that the CCO is configured to use. After you rotate credentials for a cluster that is using mint mode, you must manually remove the component credentials that were created by the removed credential. Prerequisites Your cluster is installed on a platform that supports rotating cloud credentials manually with the CCO mode that you are using: For mint mode, Amazon Web Services (AWS) and Google Cloud Platform (GCP) are supported. For passthrough mode, Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), Red Hat OpenStack Platform (RHOSP), Red Hat Virtualization (RHV), and VMware vSphere are supported. You have changed the credentials that are used to interface with your cloud provider. The new credentials have sufficient permissions for the mode CCO is configured to use in your cluster. Note When rotating the credentials for an Azure cluster that is using mint mode, do not delete or replace the service principal that was used during installation. Instead, generate new Azure service principal client secrets and update the OpenShift Container Platform secrets accordingly. Procedure In the Administrator perspective of the web console, navigate to Workloads → Secrets. In the table on the Secrets page, find the root secret for your cloud provider. Platform Secret name AWS aws-creds Azure azure-credentials GCP gcp-credentials RHOSP openstack-credentials RHV ovirt-credentials vSphere vsphere-creds Click the Options menu in the same row as the secret and select Edit Secret. Record the contents of the Value field or fields. You can use this information to verify that the value is different after updating the credentials. Update the text in the Value field or fields with the new authentication information for your cloud provider, and then click Save. If the CCO for your cluster is configured to use mint mode, delete each component secret that is referenced by the individual CredentialsRequest objects. Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role. Get the names and namespaces of all referenced component secrets: $ oc -n openshift-cloud-credential-operator get CredentialsRequest \ -o json | jq -r '.items[] | select (.spec.providerSpec.kind=="<provider_spec>") | .spec.secretRef' where <provider_spec> is the corresponding value for your cloud provider: AWS: AWSProviderSpec Azure: AzureProviderSpec GCP: GCPProviderSpec RHOSP: OpenStackProviderSpec RHV: OvirtProviderSpec vSphere: VSphereProviderSpec Partial example output for AWS { "name": "ebs-cloud-credentials", "namespace": "openshift-cluster-csi-drivers" } { "name": "cloud-credential-operator-iam-ro-creds", "namespace": "openshift-cloud-credential-operator" } ... Delete each of the referenced component secrets: $ oc delete secret <secret_name> \ 1 -n <secret_namespace> 2 1 Specify the name of a secret.
2 Specify the namespace that contains the secret. Example deletion of an AWS secret $ oc delete secret ebs-cloud-credentials -n openshift-cluster-csi-drivers You do not need to manually delete the credentials from your provider console. Deleting the referenced component secrets will cause the CCO to delete the existing credentials from the platform and create new ones. To verify that the credentials have changed: In the Administrator perspective of the web console, navigate to Workloads → Secrets. Verify that the contents of the Value field or fields are different than the previously recorded information. 4.12.2. Removing cloud provider credentials After installing an OpenShift Container Platform cluster with the Cloud Credential Operator (CCO) in mint mode, you can remove the administrator-level credential secret from the kube-system namespace in the cluster. The administrator-level credential is required only during changes that require its elevated permissions, such as upgrades. Note Prior to a non z-stream upgrade, you must reinstate the credential secret with the administrator-level credential. If the credential is not present, the upgrade might be blocked. Prerequisites Your cluster is installed on a platform that supports removing cloud credentials from the CCO. Supported platforms are AWS and GCP. Procedure In the Administrator perspective of the web console, navigate to Workloads → Secrets. In the table on the Secrets page, find the root secret for your cloud provider. Platform Secret name AWS aws-creds GCP gcp-credentials Click the Options menu in the same row as the secret and select Delete Secret. Additional resources About the Cloud Credential Operator 4.13. Configuring image streams for a disconnected cluster After installing OpenShift Container Platform in a disconnected environment, configure the image streams for the Cluster Samples Operator and the must-gather image stream. 4.13.1. Cluster Samples Operator assistance for mirroring During installation, OpenShift Container Platform creates a config map named imagestreamtag-to-image in the openshift-cluster-samples-operator namespace. The imagestreamtag-to-image config map contains an entry, the populating image, for each image stream tag. The format of the key for each entry in the data field in the config map is <image_stream_name>_<image_stream_tag_name>. During a disconnected installation of OpenShift Container Platform, the status of the Cluster Samples Operator is set to Removed. If you choose to change it to Managed, it installs samples. Note The use of samples in a network-restricted or discontinued environment may require access to services external to your network. Some example services include: GitHub, Maven Central, npm, RubyGems, PyPI, and others. There might be additional steps to take that allow the Cluster Samples Operator's objects to reach the services they require. You can use this config map as a reference for which images need to be mirrored for your image streams to import. While the Cluster Samples Operator is set to Removed, you can create your mirrored registry, or determine which existing mirrored registry you want to use. Mirror the samples you want to the mirrored registry using the new config map as your guide. Add any of the image streams you did not mirror to the skippedImagestreams list of the Cluster Samples Operator configuration object. Set samplesRegistry of the Cluster Samples Operator configuration object to the mirrored registry.
4.13.2. Using Cluster Samples Operator image streams with alternate or mirrored registries Most image streams in the openshift namespace managed by the Cluster Samples Operator point to images located in the Red Hat registry at registry.redhat.io. Mirroring will not apply to these image streams. Important The jenkins, jenkins-agent-maven, and jenkins-agent-nodejs image streams come from the install payload and are managed by the Samples Operator, so no further mirroring procedures are needed for those image streams. Setting the samplesRegistry field in the Samples Operator configuration file to registry.redhat.io is redundant because it is already directed to registry.redhat.io for everything but Jenkins images and image streams. Note The cli, installer, must-gather, and tests image streams, while part of the install payload, are not managed by the Cluster Samples Operator. These are not addressed in this procedure. Prerequisites Access to the cluster as a user with the cluster-admin role. Create a pull secret for your mirror registry. Procedure Access the images of a specific image stream to mirror, for example: $ oc get is <imagestream> -n openshift -o json | jq .spec.tags[].from.name | grep registry.redhat.io Mirror images from registry.redhat.io associated with any image streams you need in the restricted network environment into one of the defined mirrors, for example: $ oc image mirror registry.redhat.io/rhscl/ruby-25-rhel7:latest ${MIRROR_ADDR}/rhscl/ruby-25-rhel7:latest Create the cluster's image configuration object: $ oc create configmap registry-config --from-file=${MIRROR_ADDR_HOSTNAME}..5000=$path/ca.crt -n openshift-config Add the required trusted CAs for the mirror in the cluster's image configuration object: $ oc patch image.config.openshift.io/cluster --patch '{"spec":{"additionalTrustedCA":{"name":"registry-config"}}}' --type=merge Update the samplesRegistry field in the Cluster Samples Operator configuration object to contain the hostname portion of the mirror location defined in the mirror configuration: $ oc edit configs.samples.operator.openshift.io -n openshift-cluster-samples-operator Note This is required because the image stream import process does not use the mirror or search mechanism at this time. Add any image streams that are not mirrored into the skippedImagestreams field of the Cluster Samples Operator configuration object. Or if you do not want to support any of the sample image streams, set the Cluster Samples Operator to Removed in the Cluster Samples Operator configuration object. Note The Cluster Samples Operator issues alerts if image stream imports are failing but the Cluster Samples Operator is either periodically retrying or does not appear to be retrying them. Many of the templates in the openshift namespace reference the image streams. So using Removed to purge both the image streams and templates will eliminate the possibility of attempts to use them if they are not functional because of any missing image streams.
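If you prefer a non-interactive alternative to oc edit for the last two steps, a single merge patch can set both fields at once. A minimal sketch, assuming a hypothetical mirror host mirror.example.com:5000 and an example list of skipped image streams; substitute your own values: $ oc patch configs.samples.operator.openshift.io cluster --type merge \ --patch '{"spec":{"samplesRegistry":"mirror.example.com:5000","skippedImagestreams":["ruby","python"]}}' The same patch style can instead set managementState to Removed if you decide not to support any sample image streams.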
4.13.3. Preparing your cluster to gather support data Clusters using a restricted network must import the default must-gather image to gather debugging data for Red Hat support. The must-gather image is not imported by default, and clusters on a restricted network do not have access to the internet to pull the latest image from a remote repository. Procedure If you have not added your mirror registry's trusted CA to your cluster's image configuration object as part of the Cluster Samples Operator configuration, perform the following steps: Create the cluster's image configuration object: $ oc create configmap registry-config --from-file=${MIRROR_ADDR_HOSTNAME}..5000=$path/ca.crt -n openshift-config Add the required trusted CAs for the mirror in the cluster's image configuration object: $ oc patch image.config.openshift.io/cluster --patch '{"spec":{"additionalTrustedCA":{"name":"registry-config"}}}' --type=merge Import the default must-gather image from your installation payload: $ oc import-image is/must-gather -n openshift When running the oc adm must-gather command, use the --image flag and point to the payload image, as in the following example: $ oc adm must-gather --image=$(oc adm release info --image-for must-gather)
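After the import, you can confirm that the image stream resolves to an image before you need it in an emergency. A quick check, assuming the import above succeeded: $ oc get imagestream must-gather -n openshift -o jsonpath='{.status.tags[0].items[0].dockerImageReference}{"\n"}' The output shows the image reference that was imported and that oc adm must-gather will use.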
[ "oc get secret/pull-secret -n openshift-config --template='{{index .data \".dockerconfigjson\" | base64decode}}' ><pull_secret_location> 1", "oc registry login --registry=\"<registry>\" \\ 1 --auth-basic=\"<username>:<password>\" \\ 2 --to=<pull_secret_location> 3", "oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1", "oc get machinesets -n openshift-machine-api", "oc get machine -n openshift-machine-api", "oc annotate machine/<machine_name> -n openshift-machine-api machine.openshift.io/cluster-api-delete-machine=\"true\"", "oc adm cordon <node_name> oc adm drain <node_name>", "oc scale --replicas=2 machineset <machineset> -n openshift-machine-api", "oc edit machineset <machineset> -n openshift-machine-api", "oc get machines", "spec: deletePolicy: <delete_policy> replicas: <desired_replica_count>", "oc edit scheduler cluster", "apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: type=user-node,region=east 1 mastersSchedulable: false policy: name: \"\"", "oc patch MachineSet <name> --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"<key>\"=\"<value>\",\"<key>\"=\"<value>\"}}]' -n openshift-machine-api 1", "oc patch MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"type\":\"user-node\",\"region\":\"east\"}}]' -n openshift-machine-api", "oc edit MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: spec: template: metadata: spec: metadata: labels: region: east type: user-node", "oc scale --replicas=0 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api", "oc scale --replicas=1 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api", "oc get nodes -l <key>=<value>", "oc get nodes -l type=user-node", "NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp Ready worker 61s v1.18.3+002a51f", "oc label nodes <name> <key>=<value>", "oc label nodes ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 type=user-node region=east", "oc get nodes -l <key>=<value>,<key>=<value>", "oc get nodes -l type=user-node,region=east", "NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 Ready worker 17m v1.18.3+002a51f", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "template: metadata: labels: machine.openshift.io/cluster-api-cluster: agl030519-vplxk 1 machine.openshift.io/cluster-api-machine-role: worker 2 machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: agl030519-vplxk-worker-us-east-1a", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 
0 0 55m", "oc label node <node-name> node-role.kubernetes.io/app=\"\"", "oc label node <node-name> node-role.kubernetes.io/infra=\"\"", "oc get nodes", "oc edit scheduler cluster", "apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: topology.kubernetes.io/region=us-east-1 1", "oc label node <node_name> <label>", "oc label node ci-ln-n8mqwr2-f76d1-xscn2-worker-c-6fmtx node-role.kubernetes.io/infra=", "cat infra.mcp.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: infra spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]} 1 nodeSelector: matchLabels: node-role.kubernetes.io/infra: \"\" 2", "oc create -f infra.mcp.yaml", "oc get machineconfig", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED 00-master 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 00-worker 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-1ae2a1e0-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-ssh 3.2.0 31d 99-worker-1ae64748-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-worker-ssh 3.2.0 31d rendered-infra-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 23m rendered-master-072d4b2da7f88162636902b074e9e28e 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-master-3e88ec72aed3886dec061df60d16d1af 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-master-419bee7de96134963a15fdf9dd473b25 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-master-53f5c91c7661708adce18739cc0f40fb 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d rendered-master-a6a357ec18e5bce7f5ac426fc7c5ffcd 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-master-dc7f874ec77fc4b969674204332da037 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-1a75960c52ad18ff5dfa6674eb7e533d 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-2640531be11ba43c61d72e82dc634ce6 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-worker-4f110718fe88e5f349987854a1147755 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-worker-afc758e194d6188677eb837842d3b379 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-worker-daa08cc1e8f5fcdeba24de60cd955cc3 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d", "cat infra.mc.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 51-infra labels: machineconfiguration.openshift.io/role: infra 1 spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/infratest mode: 0644 contents: source: data:,infra", "oc create -f infra.mc.yaml", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE infra rendered-infra-60e35c2e99f42d976e084fa94da4d0fc True False False 1 1 1 0 4m20s master rendered-master-9360fdb895d4c131c7c4bebbae099c90 True False False 3 3 3 0 91m worker rendered-worker-60e35c2e99f42d976e084fa94da4d0fc True 
False False 2 2 2 0 91m", "oc describe nodes <node_name>", "describe node ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Name: ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Roles: worker Taints: node-role.kubernetes.io/infra:NoSchedule", "oc adm taint nodes <node_name> <key>:<effect>", "oc adm taint nodes node1 node-role.kubernetes.io/infra:NoSchedule", "tolerations: - effect: NoSchedule 1 key: node-role.kubernetes.io/infra 2 operator: Exists 3", "oc get ingresscontroller default -n openshift-ingress-operator -o yaml", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: 2019-04-18T12:35:39Z finalizers: - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller generation: 1 name: default namespace: openshift-ingress-operator resourceVersion: \"11341\" selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default uid: 79509e05-61d6-11e9-bc55-02ce4781844a spec: {} status: availableReplicas: 2 conditions: - lastTransitionTime: 2019-04-18T12:36:15Z status: \"True\" type: Available domain: apps.<cluster>.example.com endpointPublishingStrategy: type: LoadBalancerService selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default", "oc edit ingresscontroller default -n openshift-ingress-operator", "spec: nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/infra: \"\"", "oc get pod -n openshift-ingress -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES router-default-86798b4b5d-bdlvd 1/1 Running 0 28s 10.130.2.4 ip-10-0-217-226.ec2.internal <none> <none> router-default-955d875f4-255g8 0/1 Terminating 0 19h 10.129.2.4 ip-10-0-148-172.ec2.internal <none> <none>", "oc get node <node_name> 1", "NAME STATUS ROLES AGE VERSION ip-10-0-217-226.ec2.internal Ready infra,worker 17h v1.20.0", "oc get configs.imageregistry.operator.openshift.io/cluster -o yaml", "apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: creationTimestamp: 2019-02-05T13:52:05Z finalizers: - imageregistry.operator.openshift.io/finalizer generation: 1 name: cluster resourceVersion: \"56174\" selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster uid: 36fd3724-294d-11e9-a524-12ffeee2931b spec: httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623 logging: 2 managementState: Managed proxy: {} replicas: 1 requests: read: {} write: {} storage: s3: bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c region: us-east-1 status:", "oc edit configs.imageregistry.operator.openshift.io/cluster", "spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: namespaces: - openshift-image-registry topologyKey: kubernetes.io/hostname weight: 100 logLevel: Normal managementState: Managed nodeSelector: node-role.kubernetes.io/infra: \"\"", "oc get pods -o wide -n openshift-image-registry", "oc describe node <node_name>", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |+ alertmanagerMain: nodeSelector: node-role.kubernetes.io/infra: \"\" prometheusK8s: nodeSelector: node-role.kubernetes.io/infra: \"\" prometheusOperator: nodeSelector: node-role.kubernetes.io/infra: \"\" grafana: nodeSelector: node-role.kubernetes.io/infra: \"\" k8sPrometheusAdapter: nodeSelector: node-role.kubernetes.io/infra: \"\" kubeStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" telemeterClient: nodeSelector: node-role.kubernetes.io/infra: 
\"\" openshiftStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" thanosQuerier: nodeSelector: node-role.kubernetes.io/infra: \"\"", "oc create -f cluster-monitoring-configmap.yaml", "watch 'oc get pod -n openshift-monitoring -o wide'", "oc delete pod -n openshift-monitoring <pod>", "oc edit ClusterLogging instance", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging spec: collection: logs: fluentd: resources: null type: fluentd logStore: elasticsearch: nodeCount: 3 nodeSelector: 1 node-role.kubernetes.io/infra: '' redundancyPolicy: SingleRedundancy resources: limits: cpu: 500m memory: 16Gi requests: cpu: 500m memory: 16Gi storage: {} type: elasticsearch managementState: Managed visualization: kibana: nodeSelector: 2 node-role.kubernetes.io/infra: '' proxy: resources: null replicas: 1 resources: null type: kibana", "oc get pod kibana-5b8bdf44f9-ccpq9 -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-5b8bdf44f9-ccpq9 2/2 Running 0 27s 10.129.2.18 ip-10-0-147-79.us-east-2.compute.internal <none> <none>", "oc get nodes", "NAME STATUS ROLES AGE VERSION ip-10-0-133-216.us-east-2.compute.internal Ready master 60m v1.20.0 ip-10-0-139-146.us-east-2.compute.internal Ready master 60m v1.20.0 ip-10-0-139-192.us-east-2.compute.internal Ready worker 51m v1.20.0 ip-10-0-139-241.us-east-2.compute.internal Ready worker 51m v1.20.0 ip-10-0-147-79.us-east-2.compute.internal Ready worker 51m v1.20.0 ip-10-0-152-241.us-east-2.compute.internal Ready master 60m v1.20.0 ip-10-0-139-48.us-east-2.compute.internal Ready infra 51m v1.20.0", "oc get node ip-10-0-139-48.us-east-2.compute.internal -o yaml", "kind: Node apiVersion: v1 metadata: name: ip-10-0-139-48.us-east-2.compute.internal selfLink: /api/v1/nodes/ip-10-0-139-48.us-east-2.compute.internal uid: 62038aa9-661f-41d7-ba93-b5f1b6ef8751 resourceVersion: '39083' creationTimestamp: '2020-04-13T19:07:55Z' labels: node-role.kubernetes.io/infra: ''", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging spec: visualization: kibana: nodeSelector: 1 node-role.kubernetes.io/infra: '' proxy: resources: null replicas: 1 resources: null type: kibana", "oc get pods", "NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 29m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 28m fluentd-42dzz 1/1 Running 0 28m fluentd-d74rq 1/1 Running 0 28m fluentd-m5vr9 1/1 Running 0 28m fluentd-nkxl7 1/1 Running 0 28m fluentd-pdvqb 1/1 Running 0 28m fluentd-tflh6 1/1 Running 0 28m kibana-5b8bdf44f9-ccpq9 2/2 Terminating 0 4m11s kibana-7d85dcffc8-bfpfp 2/2 Running 0 33s", "oc get pod kibana-7d85dcffc8-bfpfp -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-7d85dcffc8-bfpfp 2/2 Running 0 43s 10.131.0.22 ip-10-0-139-48.us-east-2.compute.internal <none> <none>", "oc get pods", "NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 30m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 29m fluentd-42dzz 1/1 Running 0 29m fluentd-d74rq 1/1 Running 0 29m fluentd-m5vr9 1/1 Running 0 29m fluentd-nkxl7 1/1 Running 0 29m fluentd-pdvqb 1/1 Running 0 29m fluentd-tflh6 1/1 Running 0 29m kibana-7d85dcffc8-bfpfp 2/2 Running 0 62s", "apiVersion: 
\"autoscaling.openshift.io/v1\" kind: \"ClusterAutoscaler\" metadata: name: \"default\" spec: podPriorityThreshold: -10 1 resourceLimits: maxNodesTotal: 24 2 cores: min: 8 3 max: 128 4 memory: min: 4 5 max: 256 6 gpus: - type: nvidia.com/gpu 7 min: 0 8 max: 16 9 - type: amd.com/gpu min: 0 max: 4 scaleDown: 10 enabled: true 11 delayAfterAdd: 10m 12 delayAfterDelete: 5m 13 delayAfterFailure: 30s 14 unneededTime: 5m 15", "oc create -f <filename>.yaml 1", "apiVersion: \"autoscaling.openshift.io/v1beta1\" kind: \"MachineAutoscaler\" metadata: name: \"worker-us-east-1a\" 1 namespace: \"openshift-machine-api\" spec: minReplicas: 1 2 maxReplicas: 12 3 scaleTargetRef: 4 apiVersion: machine.openshift.io/v1beta1 kind: MachineSet 5 name: worker-us-east-1a 6", "oc create -f <filename>.yaml 1", "oc edit featuregate cluster", "apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 spec: featureSet: IPv6DualStackNoUpgrade 2", "oc debug node/<node_name>", "sh-4.2# chroot /host", "sh-4.2# cat /etc/kubernetes/kubelet.conf", "featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false", "oc edit apiserver", "spec: encryption: type: aescbc 1", "oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "EncryptionCompleted All resources encrypted: routes.route.openshift.io", "oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "EncryptionCompleted All resources encrypted: secrets, configmaps", "oc get authentication.operator.openshift.io -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "EncryptionCompleted All resources encrypted: oauthaccesstokens.oauth.openshift.io, oauthauthorizetokens.oauth.openshift.io", "oc edit apiserver", "spec: encryption: type: identity 1", "oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "DecryptionCompleted Encryption mode set to identity and everything is decrypted", "oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "DecryptionCompleted Encryption mode set to identity and everything is decrypted", "oc get authentication.operator.openshift.io -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "DecryptionCompleted Encryption mode set to identity and everything is decrypted", "oc debug node/<node_name>", "sh-4.2# chroot /host", "sh-4.4# /usr/local/bin/cluster-backup.sh /home/core/assets/backup", "found latest kube-apiserver: /etc/kubernetes/static-pod-resources/kube-apiserver-pod-6 found latest kube-controller-manager: /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-7 found latest kube-scheduler: /etc/kubernetes/static-pod-resources/kube-scheduler-pod-6 found latest etcd: /etc/kubernetes/static-pod-resources/etcd-pod-3 ede95fe6b88b87ba86a03c15e669fb4aa5bf0991c180d3c6895ce72eaade54a1 etcdctl version: 3.4.14 API version: 3.4 {\"level\":\"info\",\"ts\":1624647639.0188997,\"caller\":\"snapshot/v3_snapshot.go:119\",\"msg\":\"created temporary db file\",\"path\":\"/home/core/assets/backup/snapshot_2021-06-25_190035.db.part\"} {\"level\":\"info\",\"ts\":\"2021-06-25T19:00:39.030Z\",\"caller\":\"clientv3/maintenance.go:200\",\"msg\":\"opened snapshot stream; 
downloading\"} {\"level\":\"info\",\"ts\":1624647639.0301006,\"caller\":\"snapshot/v3_snapshot.go:127\",\"msg\":\"fetching snapshot\",\"endpoint\":\"https://10.0.0.5:2379\"} {\"level\":\"info\",\"ts\":\"2021-06-25T19:00:40.215Z\",\"caller\":\"clientv3/maintenance.go:208\",\"msg\":\"completed snapshot read; closing\"} {\"level\":\"info\",\"ts\":1624647640.6032252,\"caller\":\"snapshot/v3_snapshot.go:142\",\"msg\":\"fetched snapshot\",\"endpoint\":\"https://10.0.0.5:2379\",\"size\":\"114 MB\",\"took\":1.584090459} {\"level\":\"info\",\"ts\":1624647640.6047094,\"caller\":\"snapshot/v3_snapshot.go:152\",\"msg\":\"saved\",\"path\":\"/home/core/assets/backup/snapshot_2021-06-25_190035.db\"} Snapshot saved at /home/core/assets/backup/snapshot_2021-06-25_190035.db {\"hash\":3866667823,\"revision\":31407,\"totalKey\":12828,\"totalSize\":114446336} snapshot db and kube resources are successfully saved to /home/core/assets/backup", "oc get pods -n openshift-etcd -o wide | grep -v quorum-guard | grep etcd", "etcd-ip-10-0-159-225.example.redhat.com 3/3 Running 0 175m 10.0.159.225 ip-10-0-159-225.example.redhat.com <none> <none> etcd-ip-10-0-191-37.example.redhat.com 3/3 Running 0 173m 10.0.191.37 ip-10-0-191-37.example.redhat.com <none> <none> etcd-ip-10-0-199-170.example.redhat.com 3/3 Running 0 176m 10.0.199.170 ip-10-0-199-170.example.redhat.com <none> <none>", "oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com etcdctl endpoint status --cluster -w table", "Defaulting container name to etcdctl. Use 'oc describe pod/etcd-ip-10-0-159-225.example.redhat.com -n openshift-etcd' to see all of the containers in this pod. +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.4.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.4.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.4.9 | 104 MB | true | false | 7 | 91624 | 91624 | | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+", "oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com", "sh-4.4# unset ETCDCTL_ENDPOINTS", "sh-4.4# etcdctl --command-timeout=30s --endpoints=https://localhost:2379 defrag", "Finished defragmenting etcd member[https://localhost:2379]", "sh-4.4# etcdctl endpoint status -w table --cluster", "+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.4.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.4.9 | 41 MB | false | false | 7 | 91624 | 91624 | | 1 | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.4.9 | 104 MB | true | false | 7 | 91624 | 91624 | | 
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+", "sh-4.4# etcdctl alarm list", "memberID:12345678912345678912 alarm:NOSPACE", "sh-4.4# etcdctl alarm disarm", "sudo mv /etc/kubernetes/manifests/etcd-pod.yaml /tmp", "sudo crictl ps | grep etcd | grep -v operator", "sudo mv /etc/kubernetes/manifests/kube-apiserver-pod.yaml /tmp", "sudo crictl ps | grep kube-apiserver | grep -v operator", "sudo mv /var/lib/etcd/ /tmp", "sudo -E /usr/local/bin/cluster-restore.sh /home/core/backup", "...stopping kube-scheduler-pod.yaml ...stopping kube-controller-manager-pod.yaml ...stopping etcd-pod.yaml ...stopping kube-apiserver-pod.yaml Waiting for container etcd to stop .complete Waiting for container etcdctl to stop .............................complete Waiting for container etcd-metrics to stop complete Waiting for container kube-controller-manager to stop complete Waiting for container kube-apiserver to stop ..........................................................................................complete Waiting for container kube-scheduler to stop complete Moving etcd data-dir /var/lib/etcd/member to /var/lib/etcd-backup starting restore-etcd static pod starting kube-apiserver-pod.yaml static-pod-resources/kube-apiserver-pod-7/kube-apiserver-pod.yaml starting kube-controller-manager-pod.yaml static-pod-resources/kube-controller-manager-pod-7/kube-controller-manager-pod.yaml starting kube-scheduler-pod.yaml static-pod-resources/kube-scheduler-pod-8/kube-scheduler-pod.yaml", "oc get nodes -w", "NAME STATUS ROLES AGE VERSION host-172-25-75-28 Ready master 3d20h v1.23.3+e419edf host-172-25-75-38 Ready infra,worker 3d20h v1.23.3+e419edf host-172-25-75-40 Ready master 3d20h v1.23.3+e419edf host-172-25-75-65 Ready master 3d20h v1.23.3+e419edf host-172-25-75-74 Ready infra,worker 3d20h v1.23.3+e419edf host-172-25-75-79 Ready worker 3d20h v1.23.3+e419edf host-172-25-75-86 Ready worker 3d20h v1.23.3+e419edf host-172-25-75-98 Ready infra,worker 3d20h v1.23.3+e419edf", "ssh -i <ssh-key-path> core@<master-hostname>", "sh-4.4# pwd /var/lib/kubelet/pki sh-4.4# ls kubelet-client-2022-04-28-11-24-09.pem kubelet-server-2022-04-28-11-24-15.pem kubelet-client-current.pem kubelet-server-current.pem", "sudo systemctl restart kubelet.service", "oc get csr", "NAME AGE SIGNERNAME REQUESTOR CONDITION csr-2s94x 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending 1 csr-4bd6t 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending 2 csr-4hl85 13m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 3 csr-zhhhp 3m8s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 4", "oc describe csr <csr_name> 1", "oc adm certificate approve <csr_name>", "oc adm certificate approve <csr_name>", "sudo crictl ps | grep etcd | grep -v operator", "3ad41b7908e32 36f86e2eeaaffe662df0d21041eb22b8198e0e58abeeae8c743c3e6e977e8009 About a minute ago Running etcd 0 7c05f8af362f0", "oc get pods -n openshift-etcd | grep -v etcd-quorum-guard | grep etcd", "Unable to connect to the server: EOF", "NAME READY STATUS RESTARTS AGE etcd-ip-10-0-143-125.ec2.internal 1/1 Running 1 2m47s", "oc get machines -n openshift-machine-api -o wide", "NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-0 Running m4.xlarge us-east-1 us-east-1a 3h37m ip-10-0-131-183.ec2.internal 
aws:///us-east-1a/i-0ec2782f8287dfb7e stopped 1 clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running", "oc get machine clustername-8qw5l-master-0 \\ 1 -n openshift-machine-api -o yaml > new-master-machine.yaml", "status: addresses: - address: 10.0.131.183 type: InternalIP - address: ip-10-0-131-183.ec2.internal type: InternalDNS - address: ip-10-0-131-183.ec2.internal type: Hostname lastUpdated: \"2020-04-20T17:44:29Z\" nodeRef: kind: Node name: ip-10-0-131-183.ec2.internal uid: acca4411-af0d-4387-b73e-52b2484295ad phase: Running providerStatus: apiVersion: awsproviderconfig.openshift.io/v1beta1 conditions: - lastProbeTime: \"2020-04-20T16:53:50Z\" lastTransitionTime: \"2020-04-20T16:53:50Z\" message: machine successfully created reason: MachineCreationSucceeded status: \"True\" type: MachineCreation instanceId: i-0fdb85790d76d0c3f instanceState: stopped kind: AWSMachineProviderStatus", "apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: name: clustername-8qw5l-master-3", "providerID: aws:///us-east-1a/i-0fdb85790d76d0c3f", "annotations: machine.openshift.io/instance-state: running generation: 2", "resourceVersion: \"13291\" uid: a282eb70-40a2-4e89-8009-d05dd420d31a", "oc delete machine -n openshift-machine-api clustername-8qw5l-master-0 1", "oc get machines -n openshift-machine-api -o wide", "NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running", "oc apply -f new-master-machine.yaml", "oc get machines -n openshift-machine-api -o wide", "NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-master-3 Provisioning m4.xlarge us-east-1 us-east-1a 85s ip-10-0-173-171.ec2.internal aws:///us-east-1a/i-015b0888fe17bc2c8 running 1 clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 
us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running", "oc login -u <cluster_admin> 1", "oc patch etcd cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge 1", "oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 7 1", "oc patch kubeapiserver cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge", "oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 7 1", "oc patch kubecontrollermanager cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge", "oc get kubecontrollermanager -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 7 1", "oc patch kubescheduler cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge", "oc get kubescheduler -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 7 1", "oc get pods -n openshift-etcd | grep -v etcd-quorum-guard | grep etcd", "etcd-ip-10-0-143-125.ec2.internal 2/2 Running 0 9h etcd-ip-10-0-154-194.ec2.internal 2/2 Running 0 9h etcd-ip-10-0-173-171.ec2.internal 2/2 Running 0 9h", "oc get poddisruptionbudget --all-namespaces", "NAMESPACE NAME MIN-AVAILABLE SELECTOR another-project another-pdb 4 bar=foo test-project my-pdb 2 foo=bar", "apiVersion: policy/v1beta1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: minAvailable: 2 2 selector: 3 matchLabels: foo: bar", "apiVersion: policy/v1beta1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: maxUnavailable: 25% 2 selector: 3 matchLabels: foo: bar", "oc create -f </path/to/file> -n <project_name>", "oc -n openshift-cloud-credential-operator get CredentialsRequest -o json | jq -r '.items[] | select (.spec.providerSpec.kind==\"<provider_spec>\") | .spec.secretRef'", "{ \"name\": \"ebs-cloud-credentials\", \"namespace\": \"openshift-cluster-csi-drivers\" } { \"name\": \"cloud-credential-operator-iam-ro-creds\", \"namespace\": \"openshift-cloud-credential-operator\" }", "oc delete secret <secret_name> \\ 1 -n <secret_namespace> 2", "oc delete secret ebs-cloud-credentials -n openshift-cluster-csi-drivers", "oc get is <imagestream> -n openshift -o json | jq .spec.tags[].from.name | grep registry.redhat.io", "oc image mirror registry.redhat.io/rhscl/ruby-25-rhel7:latest USD{MIRROR_ADDR}/rhscl/ruby-25-rhel7:latest", "oc create configmap registry-config --from-file=USD{MIRROR_ADDR_HOSTNAME}..5000=USDpath/ca.crt -n openshift-config", "oc patch image.config.openshift.io/cluster --patch '{\"spec\":{\"additionalTrustedCA\":{\"name\":\"registry-config\"}}}' --type=merge", "oc edit 
configs.samples.operator.openshift.io -n openshift-cluster-samples-operator", "oc create configmap registry-config --from-file=USD{MIRROR_ADDR_HOSTNAME}..5000=USDpath/ca.crt -n openshift-config", "oc patch image.config.openshift.io/cluster --patch '{\"spec\":{\"additionalTrustedCA\":{\"name\":\"registry-config\"}}}' --type=merge", "oc import-image is/must-gather -n openshift", "oc adm must-gather --image=USD(oc adm release info --image-for must-gather)" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/post-installation_configuration/post-install-cluster-tasks
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Use the Create Issue form in Red Hat Jira to provide your feedback. The Jira issue is created in the Red Hat Satellite Jira project, where you can track its progress. Prerequisites Ensure you have registered a Red Hat account. Procedure Click the following link: Create Issue. If Jira displays a login error, log in and proceed after you are redirected to the form. Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create.
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/installing_satellite_server_in_a_connected_network_environment/providing-feedback-on-red-hat-documentation_satellite
Chapter 7. Updating OpenShift Virtualization
Chapter 7. Updating OpenShift Virtualization Learn how Operator Lifecycle Manager (OLM) delivers z-stream and minor version updates for OpenShift Virtualization. Note The Node Maintenance Operator (NMO) is no longer shipped with OpenShift Virtualization. You can install the NMO from the OperatorHub in the OpenShift Container Platform web console, or by using the OpenShift CLI (oc). For more information on remediation, fencing, and maintaining nodes, see the Workload Availability for Red Hat OpenShift documentation. You must perform one of the following tasks before updating to OpenShift Virtualization 4.11 from OpenShift Virtualization 4.10.2 and later releases: Move all nodes out of maintenance mode. Install the standalone NMO and replace the nodemaintenances.nodemaintenance.kubevirt.io custom resource (CR) with a nodemaintenances.nodemaintenance.medik8s.io CR. 7.1. About updating OpenShift Virtualization Operator Lifecycle Manager (OLM) manages the lifecycle of the OpenShift Virtualization Operator. The Marketplace Operator, which is deployed during OpenShift Container Platform installation, makes external Operators available to your cluster. OLM provides z-stream and minor version updates for OpenShift Virtualization. Minor version updates become available when you update OpenShift Container Platform to the next minor version. You cannot update OpenShift Virtualization to the next minor version without first updating OpenShift Container Platform. OpenShift Virtualization subscriptions use a single update channel that is named stable. The stable channel ensures that your OpenShift Virtualization and OpenShift Container Platform versions are compatible. If your subscription's approval strategy is set to Automatic, the update process starts as soon as a new version of the Operator is available in the stable channel. It is highly recommended to use the Automatic approval strategy to maintain a supportable environment. Each minor version of OpenShift Virtualization is only supported if you run the corresponding OpenShift Container Platform version. For example, you must run OpenShift Virtualization 4.12 on OpenShift Container Platform 4.12. Though it is possible to select the Manual approval strategy, this is not recommended because it risks the supportability and functionality of your cluster. With the Manual approval strategy, you must manually approve every pending update. If OpenShift Container Platform and OpenShift Virtualization updates are out of sync, your cluster becomes unsupported. The amount of time an update takes to complete depends on your network connection. Most automatic updates complete within fifteen minutes. Updating OpenShift Virtualization does not interrupt network connections. Data volumes and their associated persistent volume claims are preserved during update. Important If you have virtual machines running that use hostpath provisioner storage, they cannot be live migrated and might block an OpenShift Container Platform cluster update. As a workaround, you can reconfigure the virtual machines so that they can be powered off automatically during a cluster update. Remove the evictionStrategy: LiveMigrate field and set the runStrategy field to Always. 7.1.1. About workload updates When you update OpenShift Virtualization, virtual machine workloads, including libvirt, virt-launcher, and qemu, update automatically if they support live migration. Note Each virtual machine has a virt-launcher pod that runs the virtual machine instance (VMI). The virt-launcher pod runs an instance of libvirt, which is used to manage the virtual machine (VM) process.
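Before changing how workloads are updated, it can be useful to see the strategy currently in effect. A quick read-only check, assuming the default HyperConverged resource name used throughout this chapter: $ oc get hco kubevirt-hyperconverged -n openshift-cnv -o jsonpath='{.spec.workloadUpdateStrategy}{"\n"}'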
You can configure how workloads are updated by editing the spec.workloadUpdateStrategy stanza of the HyperConverged custom resource (CR). There are two available workload update methods: LiveMigrate and Evict. Because the Evict method shuts down VMI pods, only the LiveMigrate update strategy is enabled by default. When LiveMigrate is the only update strategy enabled: VMIs that support live migration are migrated during the update process. The VM guest moves into a new pod with the updated components enabled. VMIs that do not support live migration are not disrupted or updated. If a VMI has the LiveMigrate eviction strategy but does not support live migration, it is not updated. If you enable both LiveMigrate and Evict: VMIs that support live migration use the LiveMigrate update strategy. VMIs that do not support live migration use the Evict update strategy. If a VMI is controlled by a VirtualMachine object that has a runStrategy value of Always, a new VMI is created in a new pod with updated components. Migration attempts and timeouts When updating workloads, live migration fails if a pod is in the Pending state for the following periods: 5 minutes If the pod is pending because it is Unschedulable. 15 minutes If the pod is stuck in the pending state for any reason. When a VMI fails to migrate, the virt-controller tries to migrate it again. It repeats this process until all migratable VMIs are running on new virt-launcher pods. If a VMI is improperly configured, however, these attempts can repeat indefinitely. Note Each attempt corresponds to a migration object. Only the five most recent attempts are held in a buffer. This prevents migration objects from accumulating on the system while retaining information for debugging. 7.1.2. About EUS-to-EUS updates Every even-numbered minor version of OpenShift Container Platform, including 4.10 and 4.12, is an Extended Update Support (EUS) version. However, because Kubernetes design mandates serial minor version updates, you cannot directly update from one EUS version to the next EUS version. After you update from the source EUS version to the next odd-numbered minor version, you must sequentially update OpenShift Virtualization to all z-stream releases of that minor version that are on your update path. When you have upgraded to the latest applicable z-stream version, you can then update OpenShift Container Platform to the target EUS minor version. When the OpenShift Container Platform update succeeds, the corresponding update for OpenShift Virtualization becomes available. You can now update OpenShift Virtualization to the target EUS version. 7.1.2.1. Preparing to update Before beginning an EUS-to-EUS update, you must: Pause worker nodes' machine config pools before you start an EUS-to-EUS update so that the workers are not rebooted twice. Disable automatic workload updates before you begin the update process. This is to prevent OpenShift Virtualization from migrating or evicting your virtual machines (VMs) until you update to your target EUS version. Note By default, OpenShift Virtualization automatically updates workloads, such as the virt-launcher pod, when you update the OpenShift Virtualization Operator. You can configure this behavior in the spec.workloadUpdateStrategy stanza of the HyperConverged custom resource. Learn more about preparing to perform an EUS-to-EUS update.
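The pause step called out above can be performed with a single patch. A minimal sketch, assuming the standard worker machine config pool name: $ oc patch machineconfigpool/worker --type merge --patch '{"spec":{"paused":true}}' Setting "paused" back to false after the update completes resumes the deferred machine config rollouts and reboots.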
7.2. Preventing workload updates during an EUS-to-EUS update When you update from one Extended Update Support (EUS) version to the next, you must manually disable automatic workload updates to prevent OpenShift Virtualization from migrating or evicting workloads during the update process. Prerequisites You are running an EUS version of OpenShift Container Platform and want to update to the next EUS version. You have not yet updated to the odd-numbered version in between. You read "Preparing to perform an EUS-to-EUS update" and learned the caveats and requirements that pertain to your OpenShift Container Platform cluster. You paused the worker nodes' machine config pools as directed by the OpenShift Container Platform documentation. It is recommended that you use the default Automatic approval strategy. If you use the Manual approval strategy, you must approve all pending updates in the web console. For more details, refer to the "Manually approving a pending Operator update" section. Procedure Back up the current workloadUpdateMethods configuration by running the following command: $ WORKLOAD_UPDATE_METHODS=$(oc get kv kubevirt-kubevirt-hyperconverged -n openshift-cnv -o jsonpath='{.spec.workloadUpdateStrategy.workloadUpdateMethods}') Turn off all workload update methods by running the following command: $ oc patch hco kubevirt-hyperconverged -n openshift-cnv --type json -p '[{"op":"replace","path":"/spec/workloadUpdateStrategy/workloadUpdateMethods", "value":[]}]' Example output hyperconverged.hco.kubevirt.io/kubevirt-hyperconverged patched Ensure that the HyperConverged Operator is Upgradeable before you continue. Enter the following command and monitor the output: $ oc get hco kubevirt-hyperconverged -n openshift-cnv -o json | jq ".status.conditions" Example 7.1. Example output [ { "lastTransitionTime": "2022-12-09T16:29:11Z", "message": "Reconcile completed successfully", "observedGeneration": 3, "reason": "ReconcileCompleted", "status": "True", "type": "ReconcileComplete" }, { "lastTransitionTime": "2022-12-09T20:30:10Z", "message": "Reconcile completed successfully", "observedGeneration": 3, "reason": "ReconcileCompleted", "status": "True", "type": "Available" }, { "lastTransitionTime": "2022-12-09T20:30:10Z", "message": "Reconcile completed successfully", "observedGeneration": 3, "reason": "ReconcileCompleted", "status": "False", "type": "Progressing" }, { "lastTransitionTime": "2022-12-09T16:39:11Z", "message": "Reconcile completed successfully", "observedGeneration": 3, "reason": "ReconcileCompleted", "status": "False", "type": "Degraded" }, { "lastTransitionTime": "2022-12-09T20:30:10Z", "message": "Reconcile completed successfully", "observedGeneration": 3, "reason": "ReconcileCompleted", "status": "True", "type": "Upgradeable" 1 } ] 1 The OpenShift Virtualization Operator has the Upgradeable status. Manually update your cluster from the source EUS version to the next minor version of OpenShift Container Platform: $ oc adm upgrade Verification Check the current version by running the following command: $ oc get clusterversion Note Updating OpenShift Container Platform to the next version is a prerequisite for updating OpenShift Virtualization. For more details, refer to the "Updating clusters" section of the OpenShift Container Platform documentation. Update OpenShift Virtualization. With the default Automatic approval strategy, OpenShift Virtualization automatically updates to the corresponding version after you update OpenShift Container Platform.
If you use the Manual approval strategy, approve the pending updates by using the web console. Monitor the OpenShift Virtualization update by running the following command: $ oc get csv -n openshift-cnv Update OpenShift Virtualization to every z-stream version that is available for the non-EUS minor version, monitoring each update by running the command shown in the previous step. Confirm that OpenShift Virtualization successfully updated to the latest z-stream release of the non-EUS version by running the following command: $ oc get hco kubevirt-hyperconverged -n openshift-cnv -o json | jq ".status.versions" Example output [ { "name": "operator", "version": "4.12.16" } ] Wait until the HyperConverged Operator has the Upgradeable status before you perform the next update. Enter the following command and monitor the output: $ oc get hco kubevirt-hyperconverged -n openshift-cnv -o json | jq ".status.conditions" Update OpenShift Container Platform to the target EUS version. Confirm that the update succeeded by checking the cluster version: $ oc get clusterversion Update OpenShift Virtualization to the target EUS version. With the default Automatic approval strategy, OpenShift Virtualization automatically updates to the corresponding version after you update OpenShift Container Platform. If you use the Manual approval strategy, approve the pending updates by using the web console. Monitor the OpenShift Virtualization update by running the following command: $ oc get csv -n openshift-cnv The update completes when the VERSION field matches the target EUS version and the PHASE field reads Succeeded. Restore the workload update methods configuration that you backed up: $ oc patch hco kubevirt-hyperconverged -n openshift-cnv --type json -p "[{\"op\":\"add\",\"path\":\"/spec/workloadUpdateStrategy/workloadUpdateMethods\", \"value\":$WORKLOAD_UPDATE_METHODS}]" Example output hyperconverged.hco.kubevirt.io/kubevirt-hyperconverged patched Verification Check the status of VM migration by running the following command: $ oc get vmim -A Next steps You can now unpause the worker nodes' machine config pools. 7.3. Configuring workload update methods You can configure workload update methods by editing the HyperConverged custom resource (CR). Prerequisites To use live migration as an update method, you must first enable live migration in the cluster. Note If a VirtualMachineInstance CR contains evictionStrategy: LiveMigrate and the virtual machine instance (VMI) does not support live migration, the VMI will not update. Procedure To open the HyperConverged CR in your default editor, run the following command: $ oc edit hco -n openshift-cnv kubevirt-hyperconverged Edit the workloadUpdateStrategy stanza of the HyperConverged CR. For example: apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: workloadUpdateStrategy: workloadUpdateMethods: 1 - LiveMigrate 2 - Evict 3 batchEvictionSize: 10 4 batchEvictionInterval: "1m0s" 5 ... 1 The methods that can be used to perform automated workload updates. The available values are LiveMigrate and Evict. If you enable both options as shown in this example, updates use LiveMigrate for VMIs that support live migration and Evict for any VMIs that do not support live migration. To disable automatic workload updates, you can either remove the workloadUpdateStrategy stanza or set workloadUpdateMethods: [] to leave the array empty. 2 The least disruptive update method. VMIs that support live migration are updated by migrating the virtual machine (VM) guest into a new pod with the updated components enabled. If LiveMigrate is the only workload update method listed, VMIs that do not support live migration are not disrupted or updated. 3 A disruptive method that shuts down VMI pods during upgrade. Evict is the only update method available if live migration is not enabled in the cluster. If a VMI is controlled by a VirtualMachine object that has runStrategy: Always configured, a new VMI is created in a new pod with updated components. 4 The number of VMIs that can be forced to be updated at a time by using the Evict method. This does not apply to the LiveMigrate method. 5 The interval to wait before evicting the next batch of workloads. This does not apply to the LiveMigrate method. Note You can configure live migration limits and timeouts by editing the spec.liveMigrationConfig stanza of the HyperConverged CR. To apply your changes, save and exit the editor.
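As an alternative to editing the CR interactively, the same stanza can be applied non-interactively with a merge patch. A minimal sketch, assuming the example values shown above: $ oc patch hco kubevirt-hyperconverged -n openshift-cnv --type merge --patch '{"spec":{"workloadUpdateStrategy":{"workloadUpdateMethods":["LiveMigrate","Evict"],"batchEvictionSize":10,"batchEvictionInterval":"1m0s"}}}'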
7.4. Approving pending Operator updates 7.4.1. Manually approving a pending Operator update If an installed Operator has the approval strategy in its subscription set to Manual, when new updates are released in its current update channel, the update must be manually approved before installation can begin. Prerequisites An Operator previously installed using Operator Lifecycle Manager (OLM). Procedure In the Administrator perspective of the OpenShift Container Platform web console, navigate to Operators → Installed Operators. Operators that have a pending update display a status with Upgrade available. Click the name of the Operator you want to update. Click the Subscription tab. Any updates requiring approval are displayed next to Upgrade Status. For example, it might display 1 requires approval. Click 1 requires approval, then click Preview Install Plan. Review the resources that are listed as available for update. When satisfied, click Approve. Navigate back to the Operators → Installed Operators page to monitor the progress of the update. When complete, the status changes to Succeeded and Up to date. 7.5. Monitoring update status 7.5.1. Monitoring OpenShift Virtualization upgrade status To monitor the status of an OpenShift Virtualization Operator upgrade, watch the cluster service version (CSV) PHASE. You can also monitor the CSV conditions in the web console or by running the command provided here. Note The PHASE and conditions values are approximations that are based on available information. Prerequisites Log in to the cluster as a user with the cluster-admin role. Install the OpenShift CLI (oc). Procedure Run the following command: $ oc get csv -n openshift-cnv Review the output, checking the PHASE field. For example: Example output VERSION REPLACES PHASE 4.9.0 kubevirt-hyperconverged-operator.v4.8.2 Installing 4.9.0 kubevirt-hyperconverged-operator.v4.9.0 Replacing Optional: Monitor the aggregated status of all OpenShift Virtualization component conditions by running the following command: $ oc get hco -n openshift-cnv kubevirt-hyperconverged \ -o=jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.message}{"\n"}{end}' A successful upgrade results in the following output: Example output ReconcileComplete True Reconcile completed successfully Available True Reconcile completed successfully Progressing False Reconcile completed successfully Degraded False Reconcile completed successfully Upgradeable True Reconcile completed successfully
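If you want a script to block until the upgrade settles, you can poll the CSV phase. A rough sketch, with the caveat that during the Replacing phase two CSVs can exist at once, so this assumes the list has converged to a single entry: $ until [ "$(oc get csv -n openshift-cnv -o jsonpath='{.items[0].status.phase}')" = "Succeeded" ]; do sleep 30; done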
7.5.2. Viewing outdated OpenShift Virtualization workloads You can view a list of outdated workloads by using the CLI. Note If there are outdated virtualization pods in your cluster, the OutdatedVirtualMachineInstanceWorkloads alert fires. Procedure To view a list of outdated virtual machine instances (VMIs), run the following command: $ oc get vmi -l kubevirt.io/outdatedLauncherImage --all-namespaces Note Configure workload updates to ensure that VMIs update automatically. 7.6. Additional resources Preparing to perform an EUS-to-EUS update What are Operators? Operator Lifecycle Manager concepts and resources Cluster service versions (CSVs) Virtual machine live migration Configuring virtual machine eviction strategy Configuring live migration limits and timeouts
[ "WORKLOAD_UPDATE_METHODS=USD(oc get kv kubevirt-kubevirt-hyperconverged -n openshift-cnv -o jsonpath='{.spec.workloadUpdateStrategy.workloadUpdateMethods}')", "oc patch hco kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\":\"replace\",\"path\":\"/spec/workloadUpdateStrategy/workloadUpdateMethods\", \"value\":[]}]'", "hyperconverged.hco.kubevirt.io/kubevirt-hyperconverged patched", "oc get hco kubevirt-hyperconverged -n openshift-cnv -o json | jq \".status.conditions\"", "[ { \"lastTransitionTime\": \"2022-12-09T16:29:11Z\", \"message\": \"Reconcile completed successfully\", \"observedGeneration\": 3, \"reason\": \"ReconcileCompleted\", \"status\": \"True\", \"type\": \"ReconcileComplete\" }, { \"lastTransitionTime\": \"2022-12-09T20:30:10Z\", \"message\": \"Reconcile completed successfully\", \"observedGeneration\": 3, \"reason\": \"ReconcileCompleted\", \"status\": \"True\", \"type\": \"Available\" }, { \"lastTransitionTime\": \"2022-12-09T20:30:10Z\", \"message\": \"Reconcile completed successfully\", \"observedGeneration\": 3, \"reason\": \"ReconcileCompleted\", \"status\": \"False\", \"type\": \"Progressing\" }, { \"lastTransitionTime\": \"2022-12-09T16:39:11Z\", \"message\": \"Reconcile completed successfully\", \"observedGeneration\": 3, \"reason\": \"ReconcileCompleted\", \"status\": \"False\", \"type\": \"Degraded\" }, { \"lastTransitionTime\": \"2022-12-09T20:30:10Z\", \"message\": \"Reconcile completed successfully\", \"observedGeneration\": 3, \"reason\": \"ReconcileCompleted\", \"status\": \"True\", \"type\": \"Upgradeable\" 1 } ]", "oc adm upgrade", "oc get clusterversion", "oc get csv -n openshift-cnv", "oc get hco kubevirt-hyperconverged -n openshift-cnv -o json | jq \".status.versions\"", "[ { \"name\": \"operator\", \"version\": \"4.12.16\" } ]", "oc get hco kubevirt-hyperconverged -n openshift-cnv -o json | jq \".status.conditions\"", "oc get clusterversion", "oc get csv -n openshift-cnv", "oc patch hco kubevirt-hyperconverged -n openshift-cnv --type json -p \"[{\\\"op\\\":\\\"add\\\",\\\"path\\\":\\\"/spec/workloadUpdateStrategy/workloadUpdateMethods\\\", \\\"value\\\":USDWORKLOAD_UPDATE_METHODS}]\"", "hyperconverged.hco.kubevirt.io/kubevirt-hyperconverged patched", "oc get vmim -A", "oc edit hco -n openshift-cnv kubevirt-hyperconverged", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: workloadUpdateStrategy: workloadUpdateMethods: 1 - LiveMigrate 2 - Evict 3 batchEvictionSize: 10 4 batchEvictionInterval: \"1m0s\" 5", "oc get csv -n openshift-cnv", "VERSION REPLACES PHASE 4.9.0 kubevirt-hyperconverged-operator.v4.8.2 Installing 4.9.0 kubevirt-hyperconverged-operator.v4.9.0 Replacing", "oc get hco -n openshift-cnv kubevirt-hyperconverged -o=jsonpath='{range .status.conditions[*]}{.type}{\"\\t\"}{.status}{\"\\t\"}{.message}{\"\\n\"}{end}'", "ReconcileComplete True Reconcile completed successfully Available True Reconcile completed successfully Progressing False Reconcile completed successfully Degraded False Reconcile completed successfully Upgradeable True Reconcile completed successfully", "oc get vmi -l kubevirt.io/outdatedLauncherImage --all-namespaces" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/virtualization/upgrading-virt
Chapter 14. CertAndKeySecretSource schema reference
Chapter 14. CertAndKeySecretSource schema reference Used in: GenericKafkaListenerConfiguration , KafkaClientAuthenticationTls

Property | Property type | Description
secretName | string | The name of the Secret containing the certificate.
certificate | string | The name of the file containing the certificate in the Secret.
key | string | The name of the file containing the private key in the Secret.
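To show how these properties fit together, the following is a hedged sketch of a CertAndKeySecretSource used inside a KafkaClientAuthenticationTls block; the Secret name and file names are placeholders, and the referenced Secret must already exist in the namespace:

authentication:
  type: tls
  certificateAndKey:
    secretName: my-client-secret   # Secret holding both files
    certificate: client.crt        # file in the Secret with the public certificate
    key: client.key                # file in the Secret with the private key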
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-CertAndKeySecretSource-reference
Chapter 2. Understanding disconnected installation mirroring
Chapter 2. Understanding disconnected installation mirroring

You can use a mirror registry for disconnected installations and to ensure that your clusters only use container images that satisfy your organization's controls on external content. Before you install a cluster on infrastructure that you provision in a disconnected environment, you must mirror the required container images into that environment. To mirror container images, you must have a registry for mirroring.

2.1. Mirroring images for a disconnected installation through the Agent-based Installer

You can use one of the following procedures to mirror your OpenShift Container Platform image repository to your mirror registry:

Mirroring images for a disconnected installation
Mirroring images for a disconnected installation using the oc-mirror plugin

2.2. About mirroring the OpenShift Container Platform image repository for a disconnected registry

To use mirror images for a disconnected installation with the Agent-based Installer, you must modify the install-config.yaml file. You can mirror the release image by using the output of either the oc adm release mirror command or the oc-mirror plugin. Which output you use depends on which command you used to set up the mirror registry. The following example shows the output of the oc adm release mirror command.

$ oc adm release mirror

Example output

To use the new mirrored repository to install, add the following section to the install-config.yaml:

imageContentSources:
- mirrors:
  - virthost.ostest.test.metalkube.org:5000/localimages/local-release-image
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
- mirrors:
  - virthost.ostest.test.metalkube.org:5000/localimages/local-release-image
  source: registry.ci.openshift.org/ocp/release

The following example shows part of the imageContentSourcePolicy.yaml file generated by the oc-mirror plugin. The file can be found in the results directory, for example oc-mirror-workspace/results-1682697932/ .

Example imageContentSourcePolicy.yaml file

spec:
  repositoryDigestMirrors:
  - mirrors:
    - virthost.ostest.test.metalkube.org:5000/openshift/release
    source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
  - mirrors:
    - virthost.ostest.test.metalkube.org:5000/openshift/release-images
    source: quay.io/openshift-release-dev/ocp-release

2.2.1. Configuring the Agent-based Installer to use mirrored images

You must use the output of either the oc adm release mirror command or the oc-mirror plugin to configure the Agent-based Installer to use mirrored images.

Procedure

If you used the oc-mirror plugin to mirror your release images: Open the imageContentSourcePolicy.yaml file located in the results directory, for example oc-mirror-workspace/results-1682697932/ . Copy the text in the repositoryDigestMirrors section of the YAML file.

If you used the oc adm release mirror command to mirror your release images: Copy the text in the imageContentSources section of the command output.

Paste the copied text into the imageContentSources field of the install-config.yaml file.

Add the certificate file used for the mirror registry to the additionalTrustBundle field of the YAML file.

Important The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry.
Example install-config.yaml file

additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
  -----END CERTIFICATE-----

If you are using GitOps ZTP manifests, add the registries.conf and ca-bundle.crt files to the mirror path to add the mirror configuration in the agent ISO image.

Note You can create the registries.conf file from the output of either the oc adm release mirror command or the oc-mirror plugin. The format of the /etc/containers/registries.conf file has changed. It is now version 2 and in TOML format.

Example registries.conf file

[[registry]]
location = "registry.ci.openshift.org/ocp/release"
mirror-by-digest-only = true

[[registry.mirror]]
location = "virthost.ostest.test.metalkube.org:5000/localimages/local-release-image"

[[registry]]
location = "quay.io/openshift-release-dev/ocp-v4.0-art-dev"
mirror-by-digest-only = true

[[registry.mirror]]
location = "virthost.ostest.test.metalkube.org:5000/localimages/local-release-image"
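Putting the two pieces together, a hedged sketch of the relevant portion of install-config.yaml might look as follows. It simply combines the imageContentSources output and the additionalTrustBundle example shown earlier; the certificate body is elided:

additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  ...certificate content...
  -----END CERTIFICATE-----
imageContentSources:
- mirrors:
  - virthost.ostest.test.metalkube.org:5000/localimages/local-release-image
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
- mirrors:
  - virthost.ostest.test.metalkube.org:5000/localimages/local-release-image
  source: registry.ci.openshift.org/ocp/release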
[ "oc adm release mirror", "To use the new mirrored repository to install, add the following section to the install-config.yaml: imageContentSources: mirrors: virthost.ostest.test.metalkube.org:5000/localimages/local-release-image source: quay.io/openshift-release-dev/ocp-v4.0-art-dev mirrors: virthost.ostest.test.metalkube.org:5000/localimages/local-release-image source: registry.ci.openshift.org/ocp/release", "spec: repositoryDigestMirrors: - mirrors: - virthost.ostest.test.metalkube.org:5000/openshift/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev - mirrors: - virthost.ostest.test.metalkube.org:5000/openshift/release-images source: quay.io/openshift-release-dev/ocp-release", "additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----", "[[registry]] location = \"registry.ci.openshift.org/ocp/release\" mirror-by-digest-only = true [[registry.mirror]] location = \"virthost.ostest.test.metalkube.org:5000/localimages/local-release-image\" [[registry]] location = \"quay.io/openshift-release-dev/ocp-v4.0-art-dev\" mirror-by-digest-only = true [[registry.mirror]] location = \"virthost.ostest.test.metalkube.org:5000/localimages/local-release-image\"" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_an_on-premise_cluster_with_the_agent-based_installer/understanding-disconnected-installation-mirroring
Appendix D. Configuring a Host for PCI Passthrough
Appendix D. Configuring a Host for PCI Passthrough

Note This is one in a series of topics that show how to set up and configure SR-IOV on Red Hat Virtualization. For more information, see Setting Up and Configuring SR-IOV.

Enabling PCI passthrough allows a virtual machine to use a host device as if the device were directly attached to the virtual machine. To enable the PCI passthrough function, you must enable virtualization extensions and the IOMMU function. The following procedure requires you to reboot the host. If the host is already attached to the Manager, ensure you place the host into maintenance mode first.

Prerequisites

Ensure that the host hardware meets the requirements for PCI device passthrough and assignment. See PCI Device Requirements for more information.

Configuring a Host for PCI Passthrough

Enable the virtualization extension and IOMMU extension in the BIOS. See Enabling Intel VT-x and AMD-V virtualization hardware extensions in BIOS in the Red Hat Enterprise Linux Virtualization Deployment and Administration Guide for more information.

Enable the IOMMU flag in the kernel by selecting the Hostdev Passthrough & SR-IOV check box when adding the host to the Manager or by editing the grub configuration file manually. To enable the IOMMU flag from the Administration Portal, see Adding Standard Hosts to the Red Hat Virtualization Manager and Kernel Settings Explained . To edit the grub configuration file manually, see Enabling IOMMU Manually .

For GPU passthrough, you need to run additional configuration steps on both the host and the guest system. See GPU device passthrough: Assigning a host GPU to a single virtual machine in Setting up an NVIDIA GPU for a virtual machine in Red Hat Virtualization for more information.

Enabling IOMMU Manually

Enable IOMMU by editing the grub configuration file. Note If you are using IBM POWER8 hardware, skip this step as IOMMU is enabled by default.

For Intel, boot the machine, and append intel_iommu=on to the end of the GRUB_CMDLINE_LINUX line in the grub configuration file. For AMD, boot the machine, and append amd_iommu=on to the end of the GRUB_CMDLINE_LINUX line in the grub configuration file.

# vi /etc/default/grub
...
GRUB_CMDLINE_LINUX="nofb splash=quiet console=tty0 ... amd_iommu=on ..."
...

Note If intel_iommu=on or an AMD IOMMU is detected, you can try adding iommu=pt . The pt option only enables IOMMU for devices used in passthrough and provides better host performance. However, the option might not be supported on all hardware. Revert to the previous option if the pt option does not work for your host.

If the passthrough fails because the hardware does not support interrupt remapping, you can consider enabling the allow_unsafe_interrupts option if the virtual machines are trusted. The allow_unsafe_interrupts option is not enabled by default because enabling it potentially exposes the host to MSI attacks from virtual machines. To enable the option, add the line options vfio_iommu_type1 allow_unsafe_interrupts=1 to a file in the /etc/modprobe.d/ directory.

Refresh the grub.cfg file and reboot the host for these changes to take effect:

# grub2-mkconfig -o /boot/grub2/grub.cfg

# reboot
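After the reboot, it is worth confirming that the kernel actually enabled the IOMMU before attaching devices. One possible sanity check is shown below; the exact messages vary by hardware vendor and kernel version, so treat the grep patterns as a starting point rather than a definitive test:

# cat /proc/cmdline                    # verify intel_iommu=on or amd_iommu=on is present
# dmesg | grep -i -e DMAR -e IOMMU     # on Intel hosts, look for a line such as "DMAR: IOMMU enabled"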
[ "vi /etc/default/grub GRUB_CMDLINE_LINUX=\"nofb splash=quiet console=tty0 ... intel_iommu=on", "vi /etc/default/grub ... GRUB_CMDLINE_LINUX=\"nofb splash=quiet console=tty0 ... amd_iommu=on ...", "vi /etc/modprobe.d options vfio_iommu_type1 allow_unsafe_interrupts=1", "grub2-mkconfig -o /boot/grub2/grub.cfg", "reboot" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/installing_red_hat_virtualization_as_a_self-hosted_engine_using_the_command_line/Configuring_a_Host_for_PCI_Passthrough_migrate_DWH_DB
Chapter 6. Cluster Operators reference
Chapter 6. Cluster Operators reference This reference guide indexes the cluster Operators shipped by Red Hat that serve as the architectural foundation for OpenShift Container Platform. Cluster Operators are installed by default, unless otherwise noted, and are managed by the Cluster Version Operator (CVO). For more details on the control plane architecture, see Operators in OpenShift Container Platform . Cluster administrators can view cluster Operators in the OpenShift Container Platform web console from the Administration Cluster Settings page. Note Cluster Operators are not managed by Operator Lifecycle Manager (OLM) and OperatorHub. OLM and OperatorHub are part of the Operator Framework used in OpenShift Container Platform for installing and running optional add-on Operators . Some of the following cluster Operators can be disabled prior to installation. For more information see cluster capabilities . 6.1. Cluster Baremetal Operator Note The Cluster Baremetal Operator is an optional cluster capability that can be disabled by cluster administrators during installation. For more information about optional cluster capabilities, see "Cluster capabilities" in Installing . Purpose The Cluster Baremetal Operator (CBO) deploys all the components necessary to take a bare-metal server to a fully functioning worker node ready to run OpenShift Container Platform compute nodes. The CBO ensures that the metal3 deployment, which consists of the Bare Metal Operator (BMO) and Ironic containers, runs on one of the control plane nodes within the OpenShift Container Platform cluster. The CBO also listens for OpenShift Container Platform updates to resources that it watches and takes appropriate action. Project cluster-baremetal-operator Additional resources Bare-metal capability 6.2. Bare Metal Event Relay Purpose The OpenShift Bare Metal Event Relay manages the life-cycle of the Bare Metal Event Relay. The Bare Metal Event Relay enables you to configure the types of cluster event that are monitored using Redfish hardware events. Configuration objects You can use this command to edit the configuration after installation: for example, the webhook port. You can edit configuration objects with: USD oc -n [namespace] edit cm hw-event-proxy-operator-manager-config apiVersion: controller-runtime.sigs.k8s.io/v1alpha1 kind: ControllerManagerConfig health: healthProbeBindAddress: :8081 metrics: bindAddress: 127.0.0.1:8080 webhook: port: 9443 leaderElection: leaderElect: true resourceName: 6e7a703c.redhat-cne.org Project hw-event-proxy-operator CRD The proxy enables applications running on bare-metal clusters to respond quickly to Redfish hardware changes and failures such as breaches of temperature thresholds, fan failure, disk loss, power outages, and memory failure, reported using the HardwareEvent CR. hardwareevents.event.redhat-cne.org : Scope: Namespaced CR: HardwareEvent Validation: Yes 6.3. Cloud Credential Operator Purpose The Cloud Credential Operator (CCO) manages cloud provider credentials as Kubernetes custom resource definitions (CRDs). The CCO syncs on CredentialsRequest custom resources (CRs) to allow OpenShift Container Platform components to request cloud provider credentials with the specific permissions that are required for the cluster to run. By setting different values for the credentialsMode parameter in the install-config.yaml file, the CCO can be configured to operate in several different modes. 
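For illustration only, a hedged sketch of how credentialsMode is set in the install-config.yaml file follows; Mint is used here as one of the supported values (alongside Passthrough and Manual), and the surrounding fields are placeholders:

apiVersion: v1
baseDomain: example.com
metadata:
  name: my-cluster
credentialsMode: Mint   # omit this field, or set it to "", to use the default mode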
If no mode is specified, or the credentialsMode parameter is set to an empty string ( "" ), the CCO operates in its default mode. Project openshift-cloud-credential-operator CRDs credentialsrequests.cloudcredential.openshift.io Scope: Namespaced CR: CredentialsRequest Validation: Yes Configuration objects No configuration required. Additional resources About the Cloud Credential Operator CredentialsRequest custom resource 6.4. Cluster Authentication Operator Purpose The Cluster Authentication Operator installs and maintains the Authentication custom resource in a cluster and can be viewed with: USD oc get clusteroperator authentication -o yaml Project cluster-authentication-operator 6.5. Cluster Autoscaler Operator Purpose The Cluster Autoscaler Operator manages deployments of the OpenShift Cluster Autoscaler using the cluster-api provider. Project cluster-autoscaler-operator CRDs ClusterAutoscaler : This is a singleton resource, which controls the configuration autoscaler instance for the cluster. The Operator only responds to the ClusterAutoscaler resource named default in the managed namespace, the value of the WATCH_NAMESPACE environment variable. MachineAutoscaler : This resource targets a node group and manages the annotations to enable and configure autoscaling for that group, the min and max size. Currently only MachineSet objects can be targeted. 6.6. Cluster Cloud Controller Manager Operator Purpose Note The status of this Operator is General Availability for Amazon Web Services (AWS), IBM Cloud(R), global Microsoft Azure, Microsoft Azure Stack Hub, Nutanix, Red Hat OpenStack Platform (RHOSP), and VMware vSphere. The Operator is available as a Technology Preview for Alibaba Cloud, Google Cloud Platform (GCP), and IBM Cloud(R) Power VS. The Cluster Cloud Controller Manager Operator manages and updates the cloud controller managers deployed on top of OpenShift Container Platform. The Operator is based on the Kubebuilder framework and controller-runtime libraries. It is installed via the Cluster Version Operator (CVO). It contains the following components: Operator Cloud configuration observer By default, the Operator exposes Prometheus metrics through the metrics service. Project cluster-cloud-controller-manager-operator 6.7. Cluster CAPI Operator Note This Operator is available as a Technology Preview for Amazon Web Services (AWS) and Google Cloud Platform (GCP) clusters. Purpose The Cluster CAPI Operator maintains the lifecycle of Cluster API resources. This Operator is responsible for all administrative tasks related to deploying the Cluster API project within an OpenShift Container Platform cluster. Project cluster-capi-operator CRDs awsmachines.infrastructure.cluster.x-k8s.io Scope: Namespaced CR: awsmachine Validation: No gcpmachines.infrastructure.cluster.x-k8s.io Scope: Namespaced CR: gcpmachine Validation: No awsmachinetemplates.infrastructure.cluster.x-k8s.io Scope: Namespaced CR: awsmachinetemplate Validation: No gcpmachinetemplates.infrastructure.cluster.x-k8s.io Scope: Namespaced CR: gcpmachinetemplate Validation: No 6.8. Cluster Config Operator Purpose The Cluster Config Operator performs the following tasks related to config.openshift.io : Creates CRDs. Renders the initial custom resources. Handles migrations. Project cluster-config-operator 6.9. Cluster CSI Snapshot Controller Operator Note The Cluster CSI Snapshot Controller Operator is an optional cluster capability that can be disabled by cluster administrators during installation. 
For more information about optional cluster capabilities, see "Cluster capabilities" in Installing . Purpose The Cluster CSI Snapshot Controller Operator installs and maintains the CSI Snapshot Controller. The CSI Snapshot Controller is responsible for watching the VolumeSnapshot CRD objects and manages the creation and deletion lifecycle of volume snapshots. Project cluster-csi-snapshot-controller-operator Additional resources CSI snapshot controller capability 6.10. Cluster Image Registry Operator Purpose The Cluster Image Registry Operator manages a singleton instance of the OpenShift image registry. It manages all configuration of the registry, including creating storage. On initial start up, the Operator creates a default image-registry resource instance based on the configuration detected in the cluster. This indicates what cloud storage type to use based on the cloud provider. If insufficient information is available to define a complete image-registry resource, then an incomplete resource is defined and the Operator updates the resource status with information about what is missing. The Cluster Image Registry Operator runs in the openshift-image-registry namespace and it also manages the registry instance in that location. All configuration and workload resources for the registry reside in that namespace. Project cluster-image-registry-operator 6.11. Cluster Machine Approver Operator Purpose The Cluster Machine Approver Operator automatically approves the CSRs requested for a new worker node after cluster installation. Note For the control plane node, the approve-csr service on the bootstrap node automatically approves all CSRs during the cluster bootstrapping phase. Project cluster-machine-approver-operator 6.12. Cluster Monitoring Operator Purpose The Cluster Monitoring Operator (CMO) manages and updates the Prometheus-based cluster monitoring stack deployed on top of OpenShift Container Platform. Project openshift-monitoring CRDs alertmanagers.monitoring.coreos.com Scope: Namespaced CR: alertmanager Validation: Yes prometheuses.monitoring.coreos.com Scope: Namespaced CR: prometheus Validation: Yes prometheusrules.monitoring.coreos.com Scope: Namespaced CR: prometheusrule Validation: Yes servicemonitors.monitoring.coreos.com Scope: Namespaced CR: servicemonitor Validation: Yes Configuration objects USD oc -n openshift-monitoring edit cm cluster-monitoring-config 6.13. Cluster Network Operator Purpose The Cluster Network Operator installs and upgrades the networking components on an OpenShift Container Platform cluster. 6.14. Cluster Samples Operator Note The Cluster Samples Operator is an optional cluster capability that can be disabled by cluster administrators during installation. For more information about optional cluster capabilities, see "Cluster capabilities" in Installing . Purpose The Cluster Samples Operator manages the sample image streams and templates stored in the openshift namespace. On initial start up, the Operator creates the default samples configuration resource to initiate the creation of the image streams and templates. The configuration object is a cluster scoped object with the key cluster and type configs.samples . The image streams are the Red Hat Enterprise Linux CoreOS (RHCOS)-based OpenShift Container Platform image streams pointing to images on registry.redhat.io . Similarly, the templates are those categorized as OpenShift Container Platform templates. 
The Cluster Samples Operator deployment is contained within the openshift-cluster-samples-operator namespace. On start up, the install pull secret is used by the image stream import logic in the OpenShift image registry and API server to authenticate with registry.redhat.io . An administrator can create any additional secrets in the openshift namespace if they change the registry used for the sample image streams. If created, those secrets contain the content of a config.json for docker needed to facilitate image import. The image for the Cluster Samples Operator contains image stream and template definitions for the associated OpenShift Container Platform release. After the Cluster Samples Operator creates a sample, it adds an annotation that denotes the OpenShift Container Platform version that it is compatible with. The Operator uses this annotation to ensure that each sample matches the compatible release version. Samples outside of its inventory are ignored, as are skipped samples. Modifications to any samples that are managed by the Operator are allowed as long as the version annotation is not modified or deleted. However, on an upgrade, as the version annotation will change, those modifications can get replaced as the sample will be updated with the newer version. The Jenkins images are part of the image payload from the installation and are tagged into the image streams directly. The samples resource includes a finalizer, which cleans up the following upon its deletion: Operator-managed image streams Operator-managed templates Operator-generated configuration resources Cluster status resources Upon deletion of the samples resource, the Cluster Samples Operator recreates the resource using the default configuration. Project cluster-samples-operator Additional resources OpenShift samples capability 6.15. Cluster Storage Operator Note The Cluster Storage Operator is an optional cluster capability that can be disabled by cluster administrators during installation. For more information about optional cluster capabilities, see "Cluster capabilities" in Installing . Purpose The Cluster Storage Operator sets OpenShift Container Platform cluster-wide storage defaults. It ensures a default storageclass exists for OpenShift Container Platform clusters. It also installs Container Storage Interface (CSI) drivers which enable your cluster to use various storage backends. Project cluster-storage-operator Configuration No configuration is required. Notes The storage class that the Operator creates can be made non-default by editing its annotation, but this storage class cannot be deleted as long as the Operator runs. Additional resources Storage capability 6.16. Cluster Version Operator Purpose Cluster Operators manage specific areas of cluster functionality. The Cluster Version Operator (CVO) manages the lifecycle of cluster Operators, many of which are installed in OpenShift Container Platform by default. The CVO also checks with the OpenShift Update Service to see the valid updates and update paths based on current component versions and information in the graph by collecting the status of both the cluster version and its cluster Operators. This status includes the condition type, which informs you of the health and current state of the OpenShift Container Platform cluster. For more information regarding cluster version condition types, see "Understanding cluster version condition types". Project cluster-version-operator Additional resources Understanding cluster version condition types 6.17. 
Console Operator Note The Console Operator is an optional cluster capability that can be disabled by cluster administrators during installation. If you disable the Console Operator at installation, your cluster is still supported and upgradable. For more information about optional cluster capabilities, see "Cluster capabilities" in Installing . Purpose The Console Operator installs and maintains the OpenShift Container Platform web console on a cluster. The Console Operator is installed by default and automatically maintains a console. Project console-operator Additional resources Web console capability 6.18. Control Plane Machine Set Operator Note This Operator is available for Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, Nutanix, and VMware vSphere. Purpose The Control Plane Machine Set Operator automates the management of control plane machine resources within an OpenShift Container Platform cluster. Project cluster-control-plane-machine-set-operator CRDs controlplanemachineset.machine.openshift.io Scope: Namespaced CR: ControlPlaneMachineSet Validation: Yes Additional resources About control plane machine sets ControlPlaneMachineSet custom resource 6.19. DNS Operator Purpose The DNS Operator deploys and manages CoreDNS to provide a name resolution service to pods that enables DNS-based Kubernetes Service discovery in OpenShift Container Platform. The Operator creates a working default deployment based on the cluster's configuration. The default cluster domain is cluster.local . Configuration of the CoreDNS Corefile or Kubernetes plugin is not yet supported. The DNS Operator manages CoreDNS as a Kubernetes daemon set exposed as a service with a static IP. CoreDNS runs on all nodes in the cluster. Project cluster-dns-operator 6.20. etcd cluster Operator Purpose The etcd cluster Operator automates etcd cluster scaling, enables etcd monitoring and metrics, and simplifies disaster recovery procedures. Project cluster-etcd-operator CRDs etcds.operator.openshift.io Scope: Cluster CR: etcd Validation: Yes Configuration objects USD oc edit etcd cluster 6.21. Ingress Operator Purpose The Ingress Operator configures and manages the OpenShift Container Platform router. Project openshift-ingress-operator CRDs clusteringresses.ingress.openshift.io Scope: Namespaced CR: clusteringresses Validation: No Configuration objects Cluster config Type Name: clusteringresses.ingress.openshift.io Instance Name: default View Command: USD oc get clusteringresses.ingress.openshift.io -n openshift-ingress-operator default -o yaml Notes The Ingress Operator sets up the router in the openshift-ingress project and creates the deployment for the router: USD oc get deployment -n openshift-ingress The Ingress Operator uses the clusterNetwork[].cidr from the network/cluster status to determine what mode (IPv4, IPv6, or dual stack) the managed Ingress Controller (router) should operate in. For example, if clusterNetwork contains only a v6 cidr , then the Ingress Controller operates in IPv6-only mode. In the following example, Ingress Controllers managed by the Ingress Operator will run in IPv4-only mode because only one cluster network exists and the network is an IPv4 cidr : USD oc get network/cluster -o jsonpath='{.status.clusterNetwork[*]}' Example output map[cidr:10.128.0.0/14 hostPrefix:23] 6.22. Insights Operator Note The Insights Operator is an optional cluster capability that can be disabled by cluster administrators during installation. 
For more information about optional cluster capabilities, see "Cluster capabilities" in Installing . Purpose The Insights Operator gathers OpenShift Container Platform configuration data and sends it to Red Hat. The data is used to produce proactive insights recommendations about potential issues that a cluster might be exposed to. These insights are communicated to cluster administrators through Insights Advisor on console.redhat.com . Project insights-operator Configuration No configuration is required. Notes Insights Operator complements OpenShift Container Platform Telemetry. Additional resources Insights capability See About remote health monitoring for details about Insights Operator and Telemetry. 6.23. Kubernetes API Server Operator Purpose The Kubernetes API Server Operator manages and updates the Kubernetes API server deployed on top of OpenShift Container Platform. The Operator is based on the OpenShift Container Platform library-go framework and it is installed using the Cluster Version Operator (CVO). Project openshift-kube-apiserver-operator CRDs kubeapiservers.operator.openshift.io Scope: Cluster CR: kubeapiserver Validation: Yes Configuration objects USD oc edit kubeapiserver 6.24. Kubernetes Controller Manager Operator Purpose The Kubernetes Controller Manager Operator manages and updates the Kubernetes Controller Manager deployed on top of OpenShift Container Platform. The Operator is based on OpenShift Container Platform library-go framework and it is installed via the Cluster Version Operator (CVO). It contains the following components: Operator Bootstrap manifest renderer Installer based on static pods Configuration observer By default, the Operator exposes Prometheus metrics through the metrics service. Project cluster-kube-controller-manager-operator 6.25. Kubernetes Scheduler Operator Purpose The Kubernetes Scheduler Operator manages and updates the Kubernetes Scheduler deployed on top of OpenShift Container Platform. The Operator is based on the OpenShift Container Platform library-go framework and it is installed with the Cluster Version Operator (CVO). The Kubernetes Scheduler Operator contains the following components: Operator Bootstrap manifest renderer Installer based on static pods Configuration observer By default, the Operator exposes Prometheus metrics through the metrics service. Project cluster-kube-scheduler-operator Configuration The configuration for the Kubernetes Scheduler is the result of merging: a default configuration. an observed configuration from the spec schedulers.config.openshift.io . All of these are sparse configurations, invalidated JSON snippets which are merged to form a valid configuration at the end. 6.26. Kubernetes Storage Version Migrator Operator Purpose The Kubernetes Storage Version Migrator Operator detects changes of the default storage version, creates migration requests for resource types when the storage version changes, and processes migration requests. Project cluster-kube-storage-version-migrator-operator 6.27. Machine API Operator Purpose The Machine API Operator manages the lifecycle of specific purpose custom resource definitions (CRD), controllers, and RBAC objects that extend the Kubernetes API. This declares the desired state of machines in a cluster. Project machine-api-operator CRDs MachineSet Machine MachineHealthCheck 6.28. 
Machine Config Operator Purpose The Machine Config Operator manages and applies configuration and updates of the base operating system and container runtime, including everything between the kernel and kubelet. There are four components: machine-config-server : Provides Ignition configuration to new machines joining the cluster. machine-config-controller : Coordinates the upgrade of machines to the desired configurations defined by a MachineConfig object. Options are provided to control the upgrade for sets of machines individually. machine-config-daemon : Applies new machine configuration during update. Validates and verifies the state of the machine to the requested machine configuration. machine-config : Provides a complete source of machine configuration at installation, first start up, and updates for a machine. Important Currently, there is no supported way to block or restrict the machine config server endpoint. The machine config server must be exposed to the network so that newly-provisioned machines, which have no existing configuration or state, are able to fetch their configuration. In this model, the root of trust is the certificate signing requests (CSR) endpoint, which is where the kubelet sends its certificate signing request for approval to join the cluster. Because of this, machine configs should not be used to distribute sensitive information, such as secrets and certificates. To ensure that the machine config server endpoints, ports 22623 and 22624, are secured in bare metal scenarios, customers must configure proper network policies. Additional resources About the OpenShift SDN network plugin . Project openshift-machine-config-operator 6.29. Marketplace Operator Note The Marketplace Operator is an optional cluster capability that can be disabled by cluster administrators if it is not needed. For more information about optional cluster capabilities, see "Cluster capabilities" in Installing . Purpose The Marketplace Operator simplifies the process for bringing off-cluster Operators to your cluster by using a set of default Operator Lifecycle Manager (OLM) catalogs on the cluster. When the Marketplace Operator is installed, it creates the openshift-marketplace namespace. OLM ensures catalog sources installed in the openshift-marketplace namespace are available for all namespaces on the cluster. Project operator-marketplace Additional resources Marketplace capability 6.30. Node Tuning Operator Purpose The Node Tuning Operator helps you manage node-level tuning by orchestrating the TuneD daemon and achieves low latency performance by using the Performance Profile controller. The majority of high-performance applications require some level of kernel tuning. The Node Tuning Operator provides a unified management interface to users of node-level sysctls and more flexibility to add custom tuning specified by user needs. The Operator manages the containerized TuneD daemon for OpenShift Container Platform as a Kubernetes daemon set. It ensures the custom tuning specification is passed to all containerized TuneD daemons running in the cluster in the format that the daemons understand. The daemons run on all nodes in the cluster, one per node. Node-level settings applied by the containerized TuneD daemon are rolled back on an event that triggers a profile change or when the containerized TuneD daemon is terminated gracefully by receiving and handling a termination signal. 
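To make the idea of a custom tuning specification concrete, the following is a minimal hedged sketch of a Tuned custom resource; the profile name, sysctl setting, and node label are illustrative assumptions, not values from this document:

apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: example-sysctl
  namespace: openshift-cluster-node-tuning-operator
spec:
  profile:
  - name: example-profile
    data: |
      [main]
      summary=Example custom sysctl tuning
      include=openshift-node
      [sysctl]
      vm.dirty_ratio=10
  recommend:
  - match:
    - label: example.com/tuned      # apply to nodes carrying this (hypothetical) label
    priority: 20
    profile: example-profile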
The Node Tuning Operator uses the Performance Profile controller to implement automatic tuning to achieve low latency performance for OpenShift Container Platform applications. The cluster administrator configures a performance profile to define node-level settings such as the following: Updating the kernel to kernel-rt. Choosing CPUs for housekeeping. Choosing CPUs for running workloads. Note Currently, disabling CPU load balancing is not supported by cgroup v2. As a result, you might not get the desired behavior from performance profiles if you have cgroup v2 enabled. Enabling cgroup v2 is not recommended if you are using performance profiles. The Node Tuning Operator is part of a standard OpenShift Container Platform installation in version 4.1 and later. Note In earlier versions of OpenShift Container Platform, the Performance Addon Operator was used to implement automatic tuning to achieve low latency performance for OpenShift applications. In OpenShift Container Platform 4.11 and later, this functionality is part of the Node Tuning Operator. Project cluster-node-tuning-operator Additional resources About low latency 6.31. OpenShift API Server Operator Purpose The OpenShift API Server Operator installs and maintains the openshift-apiserver on a cluster. Project openshift-apiserver-operator CRDs openshiftapiservers.operator.openshift.io Scope: Cluster CR: openshiftapiserver Validation: Yes 6.32. OpenShift Controller Manager Operator Purpose The OpenShift Controller Manager Operator installs and maintains the OpenShiftControllerManager custom resource in a cluster and can be viewed with: USD oc get clusteroperator openshift-controller-manager -o yaml The custom resource definition (CRD) openshiftcontrollermanagers.operator.openshift.io can be viewed in a cluster with: USD oc get crd openshiftcontrollermanagers.operator.openshift.io -o yaml Project cluster-openshift-controller-manager-operator 6.33. Operator Lifecycle Manager Operators Purpose Operator Lifecycle Manager (OLM) helps users install, update, and manage the lifecycle of Kubernetes native applications (Operators) and their associated services running across their OpenShift Container Platform clusters. It is part of the Operator Framework , an open source toolkit designed to manage Operators in an effective, automated, and scalable way. Figure 6.1. Operator Lifecycle Manager workflow OLM runs by default in OpenShift Container Platform 4.14, which aids cluster administrators in installing, upgrading, and granting access to Operators running on their cluster. The OpenShift Container Platform web console provides management screens for cluster administrators to install Operators, as well as grant specific projects access to use the catalog of Operators available on the cluster. For developers, a self-service experience allows provisioning and configuring instances of databases, monitoring, and big data services without having to be subject matter experts, because the Operator has that knowledge baked into it. CRDs Operator Lifecycle Manager (OLM) is composed of two Operators: the OLM Operator and the Catalog Operator. Each of these Operators is responsible for managing the custom resource definitions (CRDs) that are the basis for the OLM framework: Table 6.1. CRDs managed by OLM and Catalog Operators Resource Short name Owner Description ClusterServiceVersion (CSV) csv OLM Application metadata: name, version, icon, required resources, installation, and so on. 
InstallPlan ip Catalog Calculated list of resources to be created to automatically install or upgrade a CSV. CatalogSource catsrc Catalog A repository of CSVs, CRDs, and packages that define an application. Subscription sub Catalog Used to keep CSVs up to date by tracking a channel in a package. OperatorGroup og OLM Configures all Operators deployed in the same namespace as the OperatorGroup object to watch for their custom resource (CR) in a list of namespaces or cluster-wide. Each of these Operators is also responsible for creating the following resources: Table 6.2. Resources created by OLM and Catalog Operators Resource Owner Deployments OLM ServiceAccounts (Cluster)Roles (Cluster)RoleBindings CustomResourceDefinitions (CRDs) Catalog ClusterServiceVersions OLM Operator The OLM Operator is responsible for deploying applications defined by CSV resources after the required resources specified in the CSV are present in the cluster. The OLM Operator is not concerned with the creation of the required resources; you can choose to manually create these resources using the CLI or using the Catalog Operator. This separation of concern allows users incremental buy-in in terms of how much of the OLM framework they choose to leverage for their application. The OLM Operator uses the following workflow: Watch for cluster service versions (CSVs) in a namespace and check that requirements are met. If requirements are met, run the install strategy for the CSV. Note A CSV must be an active member of an Operator group for the install strategy to run. Catalog Operator The Catalog Operator is responsible for resolving and installing cluster service versions (CSVs) and the required resources they specify. It is also responsible for watching catalog sources for updates to packages in channels and upgrading them, automatically if desired, to the latest available versions. To track a package in a channel, you can create a Subscription object configuring the desired package, channel, and the CatalogSource object you want to use for pulling updates. When updates are found, an appropriate InstallPlan object is written into the namespace on behalf of the user. The Catalog Operator uses the following workflow: Connect to each catalog source in the cluster. Watch for unresolved install plans created by a user, and if found: Find the CSV matching the name requested and add the CSV as a resolved resource. For each managed or required CRD, add the CRD as a resolved resource. For each required CRD, find the CSV that manages it. Watch for resolved install plans and create all of the discovered resources for it, if approved by a user or automatically. Watch for catalog sources and subscriptions and create install plans based on them. Catalog Registry The Catalog Registry stores CSVs and CRDs for creation in a cluster and stores metadata about packages and channels. A package manifest is an entry in the Catalog Registry that associates a package identity with sets of CSVs. Within a package, channels point to a particular CSV. Because CSVs explicitly reference the CSV that they replace, a package manifest provides the Catalog Operator with all of the information that is required to update a CSV to the latest version in a channel, stepping through each intermediate version. Additional resources For more information, see the sections on understanding Operator Lifecycle Manager (OLM) . 6.34. OpenShift Service CA Operator Purpose The OpenShift Service CA Operator mints and manages serving certificates for Kubernetes services. 
Project openshift-service-ca-operator 6.35. vSphere Problem Detector Operator Purpose The vSphere Problem Detector Operator checks clusters that are deployed on vSphere for common installation and misconfiguration issues that are related to storage. Note The vSphere Problem Detector Operator is only started by the Cluster Storage Operator when the Cluster Storage Operator detects that the cluster is deployed on vSphere. Configuration No configuration is required. Notes The Operator supports OpenShift Container Platform installations on vSphere. The Operator uses the vsphere-cloud-credentials to communicate with vSphere. The Operator performs checks that are related to storage. Additional resources For more details, see Using the vSphere Problem Detector Operator .
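When cross-referencing this chapter against a live cluster, listing the cluster Operators and their status conditions is often the quickest starting point. A typical invocation is shown below; the output is abbreviated and the values are hypothetical:

$ oc get clusteroperators
NAME                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication             4.14.0    True        False         False      24h
cloud-credential           4.14.0    True        False         False      25h
...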
[ "oc -n [namespace] edit cm hw-event-proxy-operator-manager-config", "apiVersion: controller-runtime.sigs.k8s.io/v1alpha1 kind: ControllerManagerConfig health: healthProbeBindAddress: :8081 metrics: bindAddress: 127.0.0.1:8080 webhook: port: 9443 leaderElection: leaderElect: true resourceName: 6e7a703c.redhat-cne.org", "oc get clusteroperator authentication -o yaml", "oc -n openshift-monitoring edit cm cluster-monitoring-config", "oc edit etcd cluster", "oc get clusteringresses.ingress.openshift.io -n openshift-ingress-operator default -o yaml", "oc get deployment -n openshift-ingress", "oc get network/cluster -o jsonpath='{.status.clusterNetwork[*]}'", "map[cidr:10.128.0.0/14 hostPrefix:23]", "oc edit kubeapiserver", "oc get clusteroperator openshift-controller-manager -o yaml", "oc get crd openshiftcontrollermanagers.operator.openshift.io -o yaml" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/operators/cluster-operators-ref
Chapter 1. RHEL image builder description
Chapter 1. RHEL image builder description

To deploy a system, create a system image. To create RHEL system images, use the RHEL image builder tool. You can use RHEL image builder to create customized system images of RHEL, including system images prepared for deployment on cloud platforms. RHEL image builder automatically handles the setup details for each output type and is therefore easier to use and faster to work with than manual methods of image creation. You can access the RHEL image builder functionalities by using the command line in the composer-cli tool, or the graphical user interface in the RHEL web console.

Note From RHEL 8.3 onward, the osbuild-composer back end replaces lorax-composer . The new service provides REST APIs for image building.

1.1. RHEL image builder terminology

RHEL image builder uses the following concepts:

Blueprint A blueprint is a description of a customized system image. It lists the packages and customizations that will be part of the system. You can edit blueprints with customizations and save them as a particular version. When you create a system image from a blueprint, the image is associated with the blueprint in the RHEL image builder interface. Create blueprints in the TOML format.

Compose Composes are individual builds of a system image, based on a specific version of a particular blueprint. Compose as a term refers to the system image, the logs from its creation, inputs, metadata, and the process itself.

Customizations Customizations are specifications for the image that are not packages. This includes users, groups, and SSH keys.

1.2. RHEL image builder output formats

RHEL image builder can create images in multiple output formats shown in the following table.

Table 1.1. RHEL image builder output formats

Description | CLI name | File extension
QEMU Image | qcow2 | .qcow2
Disk Archive | tar | .tar
Amazon Web Services | raw | .raw
Microsoft Azure | vhd | .vhd
Google Cloud Platform | gce | .tar.gz
VMware vSphere | vmdk | .vmdk
VMware vSphere | ova | .ova
OpenStack | openstack | .qcow2
RHEL for Edge Commit | edge-commit | .tar
RHEL for Edge Container | edge-container | .tar
RHEL for Edge Installer | edge-installer | .iso
RHEL for Edge Raw Image | edge-raw-image | .raw.xz
RHEL for Edge Simplified Installer | edge-simplified-installer | .iso
RHEL for Edge AMI | edge-ami | .ami
RHEL for Edge VMDK | edge-vsphere | .vmdk
RHEL Installer | image-installer | .iso
Oracle Cloud Infrastructure | .oci | .qcow2

To check the supported types, run the composer-cli compose types command.

1.3. Supported architectures for image builds

RHEL image builder supports building images for the following architectures: AMD and Intel 64-bit ( x86_64 ) ARM64 ( aarch64 ) IBM Z ( s390x ) IBM POWER systems

However, RHEL image builder does not support multi-architecture builds. It only builds images of the same system architecture that it is running on. For example, if RHEL image builder is running on an x86_64 system, it can only build images for the x86_64 architecture.

1.4. Additional resources RHEL image builder additional documentation index
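Because blueprints (section 1.1) are plain TOML, a small hedged example may help; the blueprint name, description, and package choice are placeholders:

name = "web-server"
description = "Minimal image with the Apache HTTP server"
version = "0.0.1"

[[packages]]
name = "httpd"
version = "*"

You would typically register such a blueprint with composer-cli blueprints push web-server.toml and then start a build with composer-cli compose start web-server qcow2, where qcow2 is one of the CLI names from Table 1.1.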
[ "composer-cli compose types" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/composing_a_customized_rhel_system_image/composer-description_composing-a-customized-rhel-system-image
Chapter 24. Using Ansible playbooks to manage self-service rules in IdM
Chapter 24. Using Ansible playbooks to manage self-service rules in IdM This section introduces self-service rules in Identity Management (IdM) and describes how to create and edit self-service access rules using Ansible playbooks. Self-service access control rules allow an IdM entity to perform specified operations on its IdM Directory Server entry. Self-service access control in IdM Using Ansible to ensure that a self-service rule is present Using Ansible to ensure that a self-service rule is absent Using Ansible to ensure that a self-service rule has specific attributes Using Ansible to ensure that a self-service rule does not have specific attributes 24.1. Self-service access control in IdM Self-service access control rules define which operations an Identity Management (IdM) entity can perform on its IdM Directory Server entry: for example, IdM users have the ability to update their own passwords. This method of control allows an authenticated IdM entity to edit specific attributes within its LDAP entry, but does not allow add or delete operations on the entire entry. Warning Be careful when working with self-service access control rules: configuring access control rules improperly can inadvertently elevate an entity's privileges. 24.2. Using Ansible to ensure that a self-service rule is present The following procedure describes how to use an Ansible playbook to define self-service rules and ensure their presence on an Identity Management (IdM) server. In this example, the new Users can manage their own name details rule grants users the ability to change their own givenname , displayname , title and initials attributes. This allows them to, for example, change their display name or initials if they want to. Prerequisites On the control node: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Navigate to the ~/ MyPlaybooks / directory: Make a copy of the selfservice-present.yml file located in the /usr/share/doc/ansible-freeipa/playbooks/selfservice/ directory: Open the selfservice-present-copy.yml Ansible playbook file for editing. Adapt the file by setting the following variables in the ipaselfservice task section: Set the ipaadmin_password variable to the password of the IdM administrator. Set the name variable to the name of the new self-service rule. Set the permission variable to a comma-separated list of permissions to grant: read and write . Set the attribute variable to a list of attributes that users can manage themselves: givenname , displayname , title , and initials . This is the modified Ansible playbook file for the current example: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Additional resources Self-service access control in IdM The README-selfservice.md file in the /usr/share/doc/ansible-freeipa/ directory The /usr/share/doc/ansible-freeipa/playbooks/selfservice directory 24.3. 
Using Ansible to ensure that a self-service rule is absent The following procedure describes how to use an Ansible playbook to ensure a specified self-service rule is absent from your IdM configuration. The example below describes how to make sure the Users can manage their own name details self-service rule does not exist in IdM. This will ensure that users cannot, for example, change their own display name or initials. Prerequisites On the control node: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Navigate to the ~/ MyPlaybooks / directory: Make a copy of the selfservice-absent.yml file located in the /usr/share/doc/ansible-freeipa/playbooks/selfservice/ directory: Open the selfservice-absent-copy.yml Ansible playbook file for editing. Adapt the file by setting the following variables in the ipaselfservice task section: Set the ipaadmin_password variable to the password of the IdM administrator. Set the name variable to the name of the self-service rule. Set the state variable to absent . This is the modified Ansible playbook file for the current example: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Additional resources Self-service access control in IdM The README-selfservice.md file in the /usr/share/doc/ansible-freeipa/ directory Sample playbooks in the /usr/share/doc/ansible-freeipa/playbooks/selfservice directory 24.4. Using Ansible to ensure that a self-service rule has specific attributes The following procedure describes how to use an Ansible playbook to ensure that an already existing self-service rule has specific settings. In the example, you ensure the Users can manage their own name details self-service rule also has the surname member attribute. Prerequisites On the control node: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. The Users can manage their own name details self-service rule exists in IdM. Procedure Navigate to the ~/ MyPlaybooks / directory: Make a copy of the selfservice-member-present.yml file located in the /usr/share/doc/ansible-freeipa/playbooks/selfservice/ directory: Open the selfservice-member-present-copy.yml Ansible playbook file for editing. Adapt the file by setting the following variables in the ipaselfservice task section: Set the ipaadmin_password variable to the password of the IdM administrator. Set the name variable to the name of the self-service rule to modify. Set the attribute variable to surname . Set the action variable to member . This is the modified Ansible playbook file for the current example: Save the file. Run the Ansible playbook. 
Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Additional resources Self-service access control in IdM The README-selfservice.md file available in the /usr/share/doc/ansible-freeipa/ directory The sample playbooks in the /usr/share/doc/ansible-freeipa/playbooks/selfservice directory 24.5. Using Ansible to ensure that a self-service rule does not have specific attributes The following procedure describes how to use an Ansible playbook to ensure that a self-service rule does not have specific settings. You can use this playbook to make sure a self-service rule does not grant undesired access. In the example, you ensure the Users can manage their own name details self-service rule does not have the givenname and surname member attributes. Prerequisites On the control node: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. The Users can manage their own name details self-service rule exists in IdM. Procedure Navigate to the ~/ MyPlaybooks / directory: Make a copy of the selfservice-member-absent.yml file located in the /usr/share/doc/ansible-freeipa/playbooks/selfservice/ directory: Open the selfservice-member-absent-copy.yml Ansible playbook file for editing. Adapt the file by setting the following variables in the ipaselfservice task section: Set the ipaadmin_password variable to the password of the IdM administrator. Set the name variable to the name of the self-service rule you want to modify. Set the attribute variable to givenname and surname . Set the action variable to member . Set the state variable to absent . This is the modified Ansible playbook file for the current example: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Additional resources Self-service access control in IdM The README-selfservice.md file in the /usr/share/doc/ansible-freeipa/ directory The sample playbooks in the /usr/share/doc/ansible-freeipa/playbooks/selfservice directory
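Independently of the playbooks, you can confirm the resulting state directly on an IdM host with the ipa command-line tool; for example, using the rule name from the examples above:

$ ipa selfservice-show "Users can manage their own name details"

The output lists the permissions and attributes currently granted by the rule, which makes it easy to verify that a playbook run had the intended effect.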
[ "cd ~/ MyPlaybooks /", "cp /usr/share/doc/ansible-freeipa/playbooks/selfservice/selfservice-present.yml selfservice-present-copy.yml", "--- - name: Self-service present hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure self-service rule \"Users can manage their own name details\" is present ipaselfservice: ipaadmin_password: \"{{ ipaadmin_password }}\" name: \"Users can manage their own name details\" permission: read, write attribute: - givenname - displayname - title - initials", "ansible-playbook --vault-password-file=password_file -v -i inventory selfservice-present-copy.yml", "cd ~/ MyPlaybooks /", "cp /usr/share/doc/ansible-freeipa/playbooks/selfservice/selfservice-absent.yml selfservice-absent-copy.yml", "--- - name: Self-service absent hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure self-service rule \"Users can manage their own name details\" is absent ipaselfservice: ipaadmin_password: \"{{ ipaadmin_password }}\" name: \"Users can manage their own name details\" state: absent", "ansible-playbook --vault-password-file=password_file -v -i inventory selfservice-absent-copy.yml", "cd ~/ MyPlaybooks /", "cp /usr/share/doc/ansible-freeipa/playbooks/selfservice/selfservice-member-present.yml selfservice-member-present-copy.yml", "--- - name: Self-service member present hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure selfservice \"Users can manage their own name details\" member attribute surname is present ipaselfservice: ipaadmin_password: \"{{ ipaadmin_password }}\" name: \"Users can manage their own name details\" attribute: - surname action: member", "ansible-playbook --vault-password-file=password_file -v -i inventory selfservice-member-present-copy.yml", "cd ~/ MyPlaybooks /", "cp /usr/share/doc/ansible-freeipa/playbooks/selfservice/selfservice-member-absent.yml selfservice-member-absent-copy.yml", "--- - name: Self-service member absent hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure selfservice \"Users can manage their own name details\" member attributes givenname and surname are absent ipaselfservice: ipaadmin_password: \"{{ ipaadmin_password }}\" name: \"Users can manage their own name details\" attribute: - givenname - surname action: member state: absent", "ansible-playbook --vault-password-file=password_file -v -i inventory selfservice-member-absent-copy.yml" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/using-ansible-playbooks-to-manage-self-service-rules-in-idm_configuring-and-managing-idm
Chapter 1. About networking
Chapter 1. About networking Red Hat OpenShift Networking is an ecosystem of features, plugins, and advanced networking capabilities that extend Kubernetes networking with the advanced networking-related features that your cluster needs to manage its network traffic for one or multiple hybrid clusters. This ecosystem of networking capabilities integrates ingress, egress, load balancing, high-performance throughput, security, and inter- and intra-cluster traffic management, and provides role-based observability tooling to reduce the natural complexity of networking. The following list highlights some of the most commonly used Red Hat OpenShift Networking features available on your cluster: Primary cluster network provided by either of the following Container Network Interface (CNI) plugins: OVN-Kubernetes network plugin, the default plugin (see About the OVN-Kubernetes network plugin) Certified 3rd-party alternative primary network plugins Cluster Network Operator for network plugin management Ingress Operator for TLS encrypted web traffic DNS Operator for name assignment MetalLB Operator for traffic load balancing on bare metal clusters IP failover support for high availability Additional hardware network support through multiple CNI plugins, including for macvlan, ipvlan, and SR-IOV hardware networks IPv4, IPv6, and dual stack addressing Hybrid Linux-Windows host clusters for Windows-based workloads Red Hat OpenShift Service Mesh for discovery, load balancing, service-to-service authentication, failure recovery, metrics, and monitoring of services Single-node OpenShift Network Observability Operator for network debugging and insights Submariner for inter-cluster networking Red Hat Service Interconnect for layer 7 inter-cluster networking
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/networking/about-networking
Chapter 54. Data sets authoring
Chapter 54. Data sets authoring A data set is a collection of related sets of information and can be stored in a database, in a Microsoft Excel file, or in memory. A data set definition instructs Business Central methods to access, read, and parse a data set. Business Central does not store data. It enables you to define access to a data set regardless of where the data is stored. For example, if data is stored in a database, a valid data set can contain the entire database or a subset of the database as a result of an SQL query. In both cases, the data is used as input for the reporting components of Business Central, which then display the information. To access a data set, you must create and register a data set definition. The data set definition specifies the location of the data set, options to access it, read it, and parse it, and the columns that it contains. Note The Data Sets page is visible only to users with the admin role. 54.1. Adding data sets You can create a data set to fetch data from an external data source and use that data for the reporting components. Procedure In Business Central, go to Admin Data Sets. The Data Sets page opens. Click New Data Set and select one of the following provider types: Bean: Generates a data set from a Java class CSV: Generates a data set from a remote or local CSV file SQL: Generates a data set from an ANSI-SQL compliant database Elastic Search: Generates a data set from Elastic Search nodes Prometheus: Generates a data set using the Prometheus query Kafka: Generates a data set using metrics from Kafka broker, consumer, or producer Note You must configure KIE Server for Prometheus, Kafka, and Execution Server options. Complete the Data Set Creation Wizard and click Test. Note The configuration steps differ based on the provider you choose. Click Save. 54.2. Editing data sets You can edit existing data sets to ensure that the data provided to the reporting components is up-to-date. Procedure In Business Central, go to Admin Data Sets. The Data Set Explorer page opens. In the Data Set Explorer pane, search for the data set you want to edit, select the data set, and click Edit. In the Data Set Editor pane, use the appropriate tab to edit the data as required. The tabs differ based on the data set provider type you chose. For example, the following changes are applicable for editing a CSV data provider: CSV Configuration: Enables you to change the name of the data set definition, the source file, the separator, and other properties. Preview: Enables you to preview the data. After you click Test in the CSV Configuration tab, the system executes the data set lookup call, and if the data is available, a preview appears. Note that the Preview tab has two sub-tabs: Data columns: Enables you to specify what columns are part of your data set definition. Filter: Enables you to add a new filter. Advanced: Enables you to manage the following configurations: Caching: See Caching data for more information. Cache life-cycle: Enables you to specify an interval of time after which a data set (or data) is refreshed. The Refresh on stale data feature refreshes the cached data when the back-end data changes. After making the required changes, click Validate. Click Save. 54.3. Data refresh The data refresh feature enables you to specify an interval of time after which a data set (or data) is refreshed. You can access the Data refresh every feature on the Advanced tab of the data set.
The Refresh on stale data feature refreshes the cached data when the back-end data changes. 54.4. Caching data Business Central provides caching mechanisms for storing data sets and performing data operations using in-memory data. Caching data reduces network traffic, remote system payload, and processing time. To avoid performance issues, configure the cache settings in Business Central. For any data lookup call that results in a data set, the caching method determines where the data lookup call is executed and where the resulting data set is stored. An example of a data lookup call would be all the mortgage applications whose locale parameter is set as "Urban". Business Central data set functionality provides two cache levels: Client level Back-end level You can set the Client Cache and Backend Cache settings on the Advanced tab of the data set. Client cache When the cache is turned on, the data set is cached in a web browser during the lookup operation, and further lookup operations do not send requests to the back end. Data set operations like grouping, aggregations, filtering, and sorting are processed in the web browser. Enable client caching only if the data set size is small, for example, for data sets with less than 10 MB of data. For large data sets, browser issues such as slow performance or intermittent freezing can occur. Client caching reduces the number of back-end requests, including requests to the storage system. Back-end cache When the cache is enabled, the decision engine caches the data set. This reduces the number of back-end requests to the remote storage system. All data set operations are performed in the decision engine using in-memory data. Enable back-end caching only if the data set is not updated frequently and it can be stored and processed in memory. Back-end caching is also useful when connectivity to the remote storage suffers from latency issues. Note Back-end cache settings are not always visible in the Advanced tab of the Data Set Editor because Java and CSV data providers rely on back-end caching (the data set must be in memory) in order to resolve any data lookup operation using the in-memory decision engine. 54.5. KIE Server data sets with Dashbuilder Runtime and Dashbuilder Standalone A data set is a collection of related information. If you have a KIE Server that contains imported data sets, you can use Dashbuilder Runtime or Dashbuilder Standalone and the KIE Server REST API to run queries on imported data sets. Because KIE Server uses Business Central as a controller, KIE Server containers are created in Business Central. Data sets are also created in Business Central. The KIE Server configuration is a template that you can refer to when you create data sets or install containers. Other services, such as Dashbuilder Runtime and Dashbuilder Standalone, use the KIE Server REST API to retrieve KIE Server information. Dashbuilder Runtime and Dashbuilder Standalone access the KIE Server REST API to run queries from data sets. When a KIE Server data set is created in Business Central, the server template information is provided, and it is used by Dashbuilder Runtime and Dashbuilder Standalone to look for the KIE Server information. For example: You can also set up KIE Server for each data set. For example: Note Token authentication is not used if credentials are provided. You might want to run the dashboard against another KIE Server installation.
When data sets are created on a KIE Server in a development environment, the data set queries are created on the development KIE Server, for example, DEV. If a dashboard is exported to a production environment, for example, PROD, which uses a different KIE Server, the queries that you created in DEV are not available, so an error is thrown. In this case, it is possible to port queries from a data set to another KIE Server by using the replace query functionality, either through a server template or a data set: Server template example: Data set example: The replace_query=true property only needs to be set once so that Dashbuilder Runtime or Dashbuilder Standalone creates the queries. After the queries are created, you can remove this system property. Additional resources Interacting with Red Hat Process Automation Manager using KIE APIs
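Because these settings are plain Java system properties, one way to supply them is as -D arguments to the Java process that hosts Dashbuilder Runtime. The following sketch is illustrative only: the server template name dev-template, the KIE Server URL, the credentials, and the dashbuilder-runtime.jar file name are hypothetical placeholders, and your installation may set these properties through its own configuration mechanism instead:
java -Ddashbuilder.kieserver.serverTemplate.dev-template.location=http://kieserver.example.com:8080/kie-server/services/rest/server \
     -Ddashbuilder.kieserver.serverTemplate.dev-template.user=kieserverUser \
     -Ddashbuilder.kieserver.serverTemplate.dev-template.password=kieserverPassword \
     -Ddashbuilder.kieserver.serverTemplate.dev-template.replace_query=true \
     -jar dashbuilder-runtime.jar
As noted above, the replace_query=true argument can be dropped once the queries have been created.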
[ "dashbuilder.kieserver.serverTemplate.{SERVER_TEMPLATE}.location={LOCATION} dashbuilder.kieserver.serverTemplate.{SERVER_TEMPLATE}.user={USER} dashbuilder.kieserver.serverTemplate.{SERVER_TEMPLATE}.password={PASSWORD} dashbuilder.kieserver.serverTemplate.{SERVER_TEMPLATE}.token={TOKEN}", "dashbuilder.kieserver.dataset.{DATA_SET_NAME}.location={LOCATION} dashbuilder.kieserver.dataset.{DATA_SET_NAME}.user={USER} dashbuilder.kieserver.dataset.{DATA_SET_NAME}.password={PASSWORD} dashbuilder.kieserver.dataset.{DATA_SET_NAME}.token={TOKEN}", "dashbuilder.kieserver.serverTemplate.{SERVER_TEMPLATE}.replace_query=true", "dashbuilder.kieserver.dataset.{DATA_SET_NAME}.replace_query=true" ]
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/managing_red_hat_process_automation_manager_and_kie_server_settings/data-sets-authoring-con_creating-custom-pages
8.4.8. Adding the Optional and Supplementary Repositories
8.4.8. Adding the Optional and Supplementary Repositories Optional and Supplementary subscription channels provide additional software packages for Red Hat Enterprise Linux that cover open source licensed software (in the Optional channel) and proprietary licensed software (in the Supplementary channel). Before subscribing to the Optional and Supplementary channels, see the Scope of Coverage Details. If you decide to install packages from these channels, follow the steps documented in the article called How to access Optional and Supplementary channels, and -devel packages using Red Hat Subscription Manager (RHSM)? on the Red Hat Customer Portal.
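For reference, the following subscription-manager commands show one way to enable these repositories on a registered Red Hat Enterprise Linux 6 server. The repository IDs vary with your product variant and architecture, so list the available repositories first to confirm the exact names:
# subscription-manager repos --list | grep -i -E 'optional|supplementary'
# subscription-manager repos --enable=rhel-6-server-optional-rpms
# subscription-manager repos --enable=rhel-6-server-supplementary-rpms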
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-adding_the_optional_and_supplementary_repositories
Chapter 3. CredentialsRequest [cloudcredential.openshift.io/v1]
Chapter 3. CredentialsRequest [cloudcredential.openshift.io/v1] Description CredentialsRequest is the Schema for the credentialsrequests API Type object Required spec 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object CredentialsRequestSpec defines the desired state of CredentialsRequest status object CredentialsRequestStatus defines the observed state of CredentialsRequest 3.1.1. .spec Description CredentialsRequestSpec defines the desired state of CredentialsRequest Type object Required secretRef Property Type Description providerSpec `` ProviderSpec contains the cloud provider specific credentials specification. secretRef object SecretRef points to the secret where the credentials should be stored once generated. serviceAccountNames array (string) ServiceAccountNames contains a list of ServiceAccounts that will use permissions associated with this CredentialsRequest. This is not used by CCO, but the information is needed for being able to properly set up access control in the cloud provider when the ServiceAccounts are used as part of the cloud credentials flow. 3.1.2. .spec.secretRef Description SecretRef points to the secret where the credentials should be stored once generated. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 3.1.3. 
.status Description CredentialsRequestStatus defines the observed state of CredentialsRequest Type object Required lastSyncGeneration provisioned Property Type Description conditions array Conditions includes detailed status for the CredentialsRequest conditions[] object CredentialsRequestCondition contains details for any of the conditions on a CredentialsRequest object lastSyncCloudCredsSecretResourceVersion string LastSyncCloudCredsSecretResourceVersion is the resource version of the cloud credentials secret resource when the credentials request resource was last synced. Used to determine if the cloud credentials have been updated since the last sync. lastSyncGeneration integer LastSyncGeneration is the generation of the credentials request resource that was last synced. Used to determine if the object has changed and requires a sync. lastSyncTimestamp string LastSyncTimestamp is the time that the credentials were last synced. providerStatus `` ProviderStatus contains cloud provider specific status. provisioned boolean Provisioned is true once the credentials have been initially provisioned. 3.1.4. .status.conditions Description Conditions includes detailed status for the CredentialsRequest Type array 3.1.5. .status.conditions[] Description CredentialsRequestCondition contains details for any of the conditions on a CredentialsRequest object Type object Required status type Property Type Description lastProbeTime string LastProbeTime is the last time we probed the condition lastTransitionTime string LastTransitionTime is the last time the condition transitioned from one status to another. message string Message is a human-readable message indicating details about the last transition reason string Reason is a unique, one-word, CamelCase reason for the condition's last transition status string Status is the status of the condition type string Type is the specific type of the condition 3.2. API endpoints The following API endpoints are available: /apis/cloudcredential.openshift.io/v1/credentialsrequests GET : list objects of kind CredentialsRequest /apis/cloudcredential.openshift.io/v1/namespaces/{namespace}/credentialsrequests DELETE : delete collection of CredentialsRequest GET : list objects of kind CredentialsRequest POST : create a CredentialsRequest /apis/cloudcredential.openshift.io/v1/namespaces/{namespace}/credentialsrequests/{name} DELETE : delete a CredentialsRequest GET : read the specified CredentialsRequest PATCH : partially update the specified CredentialsRequest PUT : replace the specified CredentialsRequest /apis/cloudcredential.openshift.io/v1/namespaces/{namespace}/credentialsrequests/{name}/status GET : read status of the specified CredentialsRequest PATCH : partially update status of the specified CredentialsRequest PUT : replace status of the specified CredentialsRequest 3.2.1. /apis/cloudcredential.openshift.io/v1/credentialsrequests Table 3.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server.
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart its list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list, the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity.
watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind CredentialsRequest Table 3.2. HTTP responses HTTP code Response body 200 - OK CredentialsRequestList schema 401 - Unauthorized Empty 3.2.2. /apis/cloudcredential.openshift.io/v1/namespaces/{namespace}/credentialsrequests Table 3.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 3.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of CredentialsRequest Table 3.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart its list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true.
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list, the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 3.6. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind CredentialsRequest Table 3.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart its list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything.
limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list, the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 3.8. HTTP responses HTTP code Response body 200 - OK CredentialsRequestList schema 401 - Unauthorized Empty HTTP method POST Description create a CredentialsRequest Table 3.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled.
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.10. Body parameters Parameter Type Description body CredentialsRequest schema Table 3.11. HTTP responses HTTP code Response body 200 - OK CredentialsRequest schema 201 - Created CredentialsRequest schema 202 - Accepted CredentialsRequest schema 401 - Unauthorized Empty 3.2.3. /apis/cloudcredential.openshift.io/v1/namespaces/{namespace}/credentialsrequests/{name} Table 3.12. Global path parameters Parameter Type Description name string name of the CredentialsRequest namespace string object name and auth scope, such as for teams and projects Table 3.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a CredentialsRequest Table 3.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. The value must be a non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 3.15. Body parameters Parameter Type Description body DeleteOptions schema Table 3.16. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified CredentialsRequest Table 3.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 3.18.
HTTP responses HTTP code Response body 200 - OK CredentialsRequest schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified CredentialsRequest Table 3.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.20. Body parameters Parameter Type Description body Patch schema Table 3.21. HTTP responses HTTP code Response body 200 - OK CredentialsRequest schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified CredentialsRequest Table 3.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered.
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.23. Body parameters Parameter Type Description body CredentialsRequest schema Table 3.24. HTTP responses HTTP code Response body 200 - OK CredentialsRequest schema 201 - Created CredentialsRequest schema 401 - Unauthorized Empty 3.2.4. /apis/cloudcredential.openshift.io/v1/namespaces/{namespace}/credentialsrequests/{name}/status Table 3.25. Global path parameters Parameter Type Description name string name of the CredentialsRequest namespace string object name and auth scope, such as for teams and projects Table 3.26. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified CredentialsRequest Table 3.27. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 3.28. HTTP responses HTTP code Response body 200 - OK CredentialsRequest schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified CredentialsRequest Table 3.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.30. Body parameters Parameter Type Description body Patch schema Table 3.31.
HTTP responses HTTP code Response body 200 - OK CredentialsRequest schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified CredentialsRequest Table 3.32. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.33. Body parameters Parameter Type Description body CredentialsRequest schema Table 3.34. HTTP responses HTTP code Response body 200 - OK CredentialsRequest schema 201 - Created CredentialsRequest schema 401 - Unauthorized Empty
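To make the schema concrete, the following is a minimal sketch of a CredentialsRequest created with the oc client. The object name, namespace, secret reference, and service account name are hypothetical placeholders, and the cloud-specific providerSpec is omitted because its contents depend on the platform:
$ oc create -f - <<EOF
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: example-credentials-request
  namespace: openshift-cloud-credential-operator
spec:
  secretRef:
    name: example-credentials
    namespace: example-namespace
  serviceAccountNames:
    - example-service-account
EOF
Once the request has been fulfilled, the status.provisioned field reports true and the generated credentials are stored in the secret named by spec.secretRef.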
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/security_apis/credentialsrequest-cloudcredential-openshift-io-v1
C.5.2. Runlevel 5
C.5.2. Runlevel 5 When the system boots into runlevel 5, a special X client application called a display manager is launched. A user must authenticate using the display manager before any desktop environment or window managers are launched. Depending on the desktop environments installed on the system, three different display managers are available to handle user authentication. GDM (GNOME Display Manager) - The default display manager for Red Hat Enterprise Linux. GDM allows the user to configure language settings, shut down, restart, or log in to the system. KDM - KDE's display manager, which allows the user to shut down, restart, or log in to the system. xdm (X Window Display Manager) - A very basic display manager that only lets the user log in to the system. When booting into runlevel 5, the /etc/X11/prefdm script determines the preferred display manager by referencing the /etc/sysconfig/desktop file. A list of options for this file is available in this file: where <version-number> is the version number of the initscripts package. Each of the display managers references the /etc/X11/xdm/Xsetup_0 file to set up the login screen. Once the user logs into the system, the /etc/X11/xdm/GiveConsole script runs to assign ownership of the console to the user. Then, the /etc/X11/xdm/Xsession script runs to accomplish many of the tasks normally performed by the xinitrc script when starting X from runlevel 3, including setting system and user resources, as well as running the scripts in the /etc/X11/xinit/xinitrc.d/ directory. Users can specify which desktop environment they want to use when they authenticate using the GNOME or KDE display managers by selecting it from the Sessions menu item accessed by selecting System Preferences More Preferences Sessions. If the desktop environment is not specified in the display manager, the /etc/X11/xdm/Xsession script checks the .xsession and .Xclients files in the user's home directory to decide which desktop environment to load. As a last resort, the /etc/X11/xinit/Xclients file is used to select a desktop environment or window manager to use in the same way as runlevel 3. When the user finishes an X session on the default display (:0) and logs out, the /etc/X11/xdm/TakeConsole script runs and reassigns ownership of the console to the root user. The original display manager, which continues running after the user logs in, takes control by spawning a new display manager. This restarts the X server, displays a new login window, and starts the entire process over again. The user is returned to the display manager after logging out of X from runlevel 5. For more information on how display managers control user authentication, see the /usr/share/doc/gdm-<version-number>/README, where <version-number> is the version number for the gdm package installed, or the xdm man page.
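As an illustration, a /etc/sysconfig/desktop file that selects GNOME for both the desktop environment and the display manager could contain the following two lines; consult the sysconfig.txt file referenced above for the authoritative list of option names and values:
DESKTOP="GNOME"
DISPLAYMANAGER="GNOME"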
[ "/usr/share/doc/initscripts- <version-number> /sysconfig.txt" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-x-runlevels-5
17.12. Attaching a Virtual NIC Directly to a Physical Interface
17.12. Attaching a Virtual NIC Directly to a Physical Interface As an alternative to the default NAT connection, you can use the macvtap driver to attach the guest's NIC directly to a specified physical interface of the host machine. This is not to be confused with device assignment (also known as passthrough). A macvtap connection has the following modes, each with different benefits and use cases: Physical interface delivery modes VEPA In virtual Ethernet port aggregator (VEPA) mode, all packets from the guests are sent to the external switch. This enables the user to force guest traffic through the switch. For VEPA mode to work correctly, the external switch must also support hairpin mode , which ensures that packets whose destination is a guest on the same host machine as their source guest are sent back to the host by the external switch. Figure 17.23. VEPA mode bridge Packets whose destination is on the same host machine as their source guest are directly delivered to the target macvtap device. Both the source device and the destination device need to be in bridge mode for direct delivery to succeed. If either one of the devices is in VEPA mode, a hairpin-capable external switch is required. Figure 17.24. Bridge mode private All packets are sent to the external switch and will only be delivered to a target guest on the same host machine if they are sent through an external router or gateway and these send them back to the host. Private mode can be used to prevent the individual guests on the single host from communicating with each other. This behavior applies if either the source or destination device is in private mode. Figure 17.25. Private mode passthrough This feature attaches a physical interface device or a SR-IOV Virtual Function (VF) directly to a guest without losing the migration capability. All packets are sent directly to the designated network device. Note that a single network device can only be passed through to a single guest, as a network device cannot be shared between guests in passthrough mode. Figure 17.26. Passthrough mode Macvtap can be configured by changing the domain XML file or by using the virt-manager interface. 17.12.1. Configuring macvtap using domain XML Open the domain XML file of the guest and modify the <devices> element as follows: The network access of direct attached guest virtual machines can be managed by the hardware switch to which the physical interface of the host physical machine is connected. The interface can have additional parameters as shown below, if the switch conforms to the IEEE 802.1Qbg standard. The parameters of the virtualport element are documented in more detail in the IEEE 802.1Qbg standard. The values are network specific and should be provided by the network administrator. In 802.1Qbg terms, the Virtual Station Interface (VSI) represents the virtual interface of a virtual machine. Also note that IEEE 802.1Qbg requires a non-zero value for the VLAN ID. Virtual Station Interface types managerid The VSI Manager ID identifies the database containing the VSI type and instance definitions. This is an integer value and the value 0 is reserved. typeid The VSI Type ID identifies a VSI type characterizing the network access. VSI types are typically managed by the network administrator. This is an integer value. typeidversion The VSI Type Version allows multiple versions of a VSI Type. This is an integer value. instanceid The VSI Instance ID is generated when a VSI instance (a virtual interface of a virtual machine) is created.
This is a globally unique identifier. profileid The profile ID contains the name of the port profile that is to be applied onto this interface. This name is resolved by the port profile database into the network parameters from the port profile, and those network parameters will be applied to this interface. Each of the four types is configured by changing the domain XML file. Once this file is opened, change the mode setting as shown: The profile ID is shown here: 17.12.2. Configuring macvtap using virt-manager Open the virtual hardware details window ⇒ select NIC in the menu ⇒ for Network source, select host device name: macvtap ⇒ select the intended Source mode. The virtual station interface types can then be set up in the Virtual port submenu. Figure 17.27. Configuring macvtap in virt-manager
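If you prefer the command line to virt-manager, the domain XML changes shown in this section can be applied with the standard libvirt tools; in this sketch, the guest name guest1 is a placeholder:
# virsh edit guest1
# virsh shutdown guest1
# virsh start guest1
The virsh edit command opens the guest's XML configuration in an editor so that you can add the <interface type='direct'> element under <devices>; shutting the guest down and starting it again ensures that the new interface definition takes effect.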
[ "<devices> <interface type='direct'> <source dev='eth0' mode='vepa'/> </interface> </devices>", "<devices> <interface type='direct'> <source dev='eth0.2' mode='vepa'/> <virtualport type=\"802.1Qbg\"> <parameters managerid=\"11\" typeid=\"1193047\" typeidversion=\"2\" instanceid=\"09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f\"/> </virtualport> </interface> </devices>", "<devices> <interface type='direct'> <source dev='eth0' mode='private'/> <virtualport type='802.1Qbh'> <parameters profileid='finance'/> </virtualport> </interface> </devices>" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-virtual_networking-directly_attaching_to_physical_interface
Chapter 5. Creating an AWS PrivateLink cluster on ROSA
Chapter 5. Creating an AWS PrivateLink cluster on ROSA This document describes how to create a ROSA cluster using AWS PrivateLink. 5.1. Understanding AWS PrivateLink A Red Hat OpenShift Service on AWS cluster can be created without any requirements on public subnets, internet gateways, or network address translation (NAT) gateways. In this configuration, Red Hat uses AWS PrivateLink to manage and monitor a cluster to avoid all public ingress network traffic. Without a public subnet, it is not possible to configure an application router as public. Configuring private application routers is the only option. For more information, see AWS PrivateLink on the AWS website. Important You can only create a PrivateLink cluster at installation time. You cannot change a cluster to PrivateLink after installation. 5.2. Requirements for using AWS PrivateLink clusters For AWS PrivateLink clusters, internet gateways, NAT gateways, and public subnets are not required, but the private subnets must have internet connectivity provided to install required components. At least one private subnet is required for Single-AZ clusters, and at least three private subnets are required for Multi-AZ clusters. The following table shows the AWS resources that are required for a successful installation: Table 5.1. Required AWS resources Component AWS Type Description VPC AWS::EC2::VPC AWS::EC2::VPCEndpoint You must provide a VPC for the cluster to use. Network access control AWS::EC2::NetworkAcl AWS::EC2::NetworkAclEntry You must allow access to the following ports: Port Reason 80 Inbound HTTP traffic 443 Inbound HTTPS traffic 22 Inbound SSH traffic 1024-65535 Inbound ephemeral traffic 0-65535 Outbound ephemeral traffic Private subnets AWS::EC2::Subnet AWS::EC2::RouteTable AWS::EC2::SubnetRouteTableAssociation Your VPC must have private subnets in 1 availability zone for Single-AZ deployments or 3 availability zones for Multi-AZ deployments. You must provide appropriate routes and route tables. 5.3. Creating an AWS PrivateLink cluster You can create an AWS PrivateLink cluster using the Red Hat OpenShift Service on AWS (ROSA) CLI, rosa. Note AWS PrivateLink is supported on existing VPCs only. Prerequisites You have available AWS service quotas. You have enabled the ROSA service in the AWS Console. You have installed and configured the latest Red Hat OpenShift Service on AWS (ROSA) CLI, rosa, on your installation host. Procedure Creating a cluster can take up to 40 minutes. With AWS PrivateLink, you can create a cluster with a single availability zone (Single-AZ) or multiple availability zones (Multi-AZ). In either case, the machine classless inter-domain routing (CIDR) range must match the virtual private cloud's CIDR. See Requirements for using your own VPC and VPC Validation for more information. Important If you use a firewall, you must configure it so that Red Hat OpenShift Service on AWS can access the sites that it requires to function. For more information, see the AWS PrivateLink firewall prerequisites section. Note If your cluster name is longer than 15 characters, it will contain an autogenerated domain prefix as a sub-domain for your provisioned cluster on *.openshiftapps.com. To customize the subdomain, use the --domain-prefix flag. The domain prefix cannot be longer than 15 characters, must be unique, and cannot be changed after cluster creation.
To create a Single-AZ cluster: $ rosa create cluster --private-link --cluster-name=<cluster-name> [--machine-cidr=<VPC CIDR>/16] --subnet-ids=<private-subnet-id> To create a Multi-AZ cluster: $ rosa create cluster --private-link --multi-az --cluster-name=<cluster-name> [--machine-cidr=<VPC CIDR>/16] --subnet-ids=<private-subnet-id1>,<private-subnet-id2>,<private-subnet-id3> Enter the following command to check the status of your cluster. During cluster creation, the State field from the output will transition from pending to installing, and finally to ready. $ rosa describe cluster --cluster=<cluster_name> Note If installation fails or the State field does not change to ready after 40 minutes, check the installation troubleshooting documentation for more details. Enter the following command to follow the OpenShift installer logs to track the progress of your cluster: $ rosa logs install --cluster=<cluster_name> --watch 5.4. Configuring AWS PrivateLink DNS forwarding With AWS PrivateLink clusters, a public hosted zone and a private hosted zone are created in Route 53. With the private hosted zone, records within the zone are resolvable only from within the VPC to which it is assigned. The Let's Encrypt DNS-01 validation requires a public zone so that valid, publicly trusted certificates can be issued for the domain. The validation records are deleted after Let's Encrypt validation is complete; however, the zone is still required for issuing and renewing these certificates, which typically must be renewed every 60 days. While these zones usually appear empty, they serve a critical role in the validation process. For more information about private hosted zones, see AWS private hosted zones documentation. For more information about public hosted zones, see AWS public hosted zones documentation. Prerequisites Your corporate network or other VPC has connectivity. UDP port 53 and TCP port 53 are enabled across your networks to allow for DNS queries. You have created an AWS PrivateLink cluster using Red Hat OpenShift Service on AWS. Procedure To allow records such as api.<cluster_domain> and *.apps.<cluster_domain> to resolve outside of the VPC, configure a Route 53 Resolver Inbound Endpoint. When you configure the inbound endpoint, select the VPC and private subnets that were used when you created the cluster. After the endpoints are operational and associated, configure your corporate network to forward DNS queries to those IP addresses for the top-level cluster domain, such as drow-pl-01.htno.p1.openshiftapps.com. If you are forwarding DNS queries from one VPC to another VPC, configure forwarding rules. If you are configuring your remote network DNS server, see your specific DNS server documentation to configure selective DNS forwarding for the installed cluster domain. An illustrative verification check is shown at the end of this chapter. 5.5. Next steps Configure identity providers Adding notification contacts 5.6. Additional resources AWS PrivateLink firewall prerequisites Overview of the ROSA with STS deployment workflow Deleting a ROSA cluster ROSA architecture models
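To verify the DNS forwarding configured in section 5.4, you can query the cluster API record both directly against a Resolver inbound endpoint address and through your normal corporate DNS path. This is an illustrative sketch, not part of the product procedure; <resolver_inbound_endpoint_ip> and <cluster_domain> are placeholders for values from your environment:
$ dig +short @<resolver_inbound_endpoint_ip> api.<cluster_domain>
$ dig +short api.<cluster_domain>
Both queries should return the private API address of the cluster. If only the first query succeeds, the forwarding rule for the cluster domain is not yet in place on your corporate DNS servers.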
[ "rosa create cluster --private-link --cluster-name=<cluster-name> [--machine-cidr=<VPC CIDR>/16] --subnet-ids=<private-subnet-id>", "rosa create cluster --private-link --multi-az --cluster-name=<cluster-name> [--machine-cidr=<VPC CIDR>/16] --subnet-ids=<private-subnet-id1>,<private-subnet-id2>,<private-subnet-id3>", "rosa describe cluster --cluster=<cluster_name>", "rosa logs install --cluster=<cluster_name> --watch" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/install_rosa_classic_clusters/rosa-aws-privatelink-creating-cluster
Chapter 2. Downloading log files and diagnostic information using must-gather
Chapter 2. Downloading log files and diagnostic information using must-gather If Red Hat OpenShift Data Foundation is unable to automatically resolve a problem, use the must-gather tool to collect log files and diagnostic information so that you or Red Hat support can review the problem and determine a solution. Important When Red Hat OpenShift Data Foundation is deployed in external mode, must-gather only collects logs from the OpenShift Data Foundation cluster and does not collect debug data and logs from the external Red Hat Ceph Storage cluster. To collect debug logs from the external Red Hat Ceph Storage cluster, see the Red Hat Ceph Storage Troubleshooting guide and contact your Red Hat Ceph Storage administrator. Prerequisites Optional: If OpenShift Data Foundation is deployed in a disconnected environment, ensure that you mirror the individual must-gather image to the mirror registry available from the disconnected environment. <local-registry> Is the local image mirror registry available for a disconnected OpenShift Container Platform cluster. <path-to-the-registry-config> Is the path to your registry credentials; by default it is ~/.docker/config.json. --insecure Add this flag only if the mirror registry is insecure. For more information, see the Red Hat Knowledgebase solutions: How to mirror images between Red Hat OpenShift registries Failed to mirror OpenShift image repository when private registry is insecure Procedure Run the must-gather command from the client connected to the OpenShift Data Foundation cluster: <directory-name> Is the name of the directory where you want to write the data. Important For a disconnected environment deployment, replace the image in the --image parameter with the mirrored must-gather image. <local-registry> Is the local image mirror registry available for a disconnected OpenShift Container Platform cluster. This collects the following information in the specified directory: All Red Hat OpenShift Data Foundation cluster related Custom Resources (CRs) with their namespaces. Pod logs of all the Red Hat OpenShift Data Foundation related pods. Output of some standard Ceph commands, such as status, cluster health, and others. 2.1. Variations of must-gather commands If one or more master nodes are not in the Ready state, use --node-name to provide a master node that is Ready so that the must-gather pod can be safely scheduled. If you want to gather information from a specific time: To specify a relative time period for logs gathered, such as within 5 seconds or 2 days, add /usr/bin/gather since=<duration>: To specify a specific time to gather logs after, add /usr/bin/gather since-time=<rfc3339-timestamp>: Replace the example values in these commands as follows: <node-name> If one or more master nodes are not in the Ready state, use this parameter to provide the name of a master node that is still in the Ready state. This avoids scheduling errors by ensuring that the must-gather pod is not scheduled on a master node that is not ready. <directory-name> The directory to store information collected by must-gather. <duration> Specify the period of time to collect information from as a relative duration, for example, 5h (starting from 5 hours ago). <rfc3339-timestamp> Specify the period of time to collect information from as an RFC 3339 timestamp, for example, 2020-11-10T04:00:00+00:00 (starting from 4 am UTC on 10 November 2020). 2.2. Running must-gather in modular mode Red Hat OpenShift Data Foundation must-gather can take a long time to run in some environments.
To avoid this, run must-gather in modular mode and collect only the resources you require using the following command: Replace <-arg> with one or more of the following arguments to specify the resources for which must-gather logs are required. -o , --odf ODF logs (includes Ceph resources, namespaced resources, cluster-scoped resources, and Ceph logs) -d , --dr DR logs -n , --noobaa NooBaa logs -c , --ceph Ceph commands and pod logs -cl , --ceph-logs Ceph daemon, kernel, journal logs, and crash reports -ns , --namespaced Namespaced resources -cs , --clusterscoped Cluster-scoped resources -h , --help Print help message
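For example, to collect only the Ceph command output and the namespaced resources, pass the corresponding flags after /usr/bin/gather. This is a sketch built from the flags listed above; adjust the image reference for disconnected environments as described earlier:
$ oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15 -- /usr/bin/gather -c -ns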
[ "oc image mirror registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15 <local-registry> /odf4/odf-must-gather-rhel9:v4.15 [--registry-config= <path-to-the-registry-config> ] [--insecure=true]", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15 --dest-dir= <directory-name>", "oc adm must-gather --image=<local-registry>/odf4/odf-must-gather-rhel9:v4.15 --dest-dir= <directory-name>", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15 --dest-dir=_<directory-name>_ --node-name=_<node-name>_", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15 --dest-dir=_<directory-name>_ /usr/bin/gather since=<duration>", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15 --dest-dir=_<directory-name>_ /usr/bin/gather since-time=<rfc3339-timestamp>", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15 -- /usr/bin/gather <-arg>" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/troubleshooting_openshift_data_foundation/downloading-log-files-and-diagnostic-information_rhodf
Getting started with automation controller
Getting started with automation controller Red Hat Ansible Automation Platform 2.4 Getting started guide for automation controller Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html-single/getting_started_with_automation_controller/index
4.7. Kernel
4.7. Kernel Thin-provisioning and scalable snapshot capabilities The dm-thinp targets, thin and thin-pool , provide a device mapper device with thin-provisioning and scalable snapshot capabilities. This feature is available as a Technology Preview; an illustrative dmsetup sketch is shown at the end of this section. Package: kernel-2.6.32-358 Kernel Media support The following features are presented as Technology Previews: the latest upstream video4linux; digital video broadcasting; primarily infrared remote control device support; and various webcam support fixes and improvements. Package: kernel-2.6.32-358 Remote audit logging The audit package contains the user space utilities for storing and searching the audit records generated by the audit subsystem in the Linux 2.6 kernel. Within the audispd-plugins sub-package is a utility that allows for the transmission of audit events to a remote aggregating machine. This remote audit logging application, audisp-remote , is considered a Technology Preview in Red Hat Enterprise Linux 6. Package: audispd-plugins-2.2-2 Linux (NameSpace) Container [LXC] Linux containers provide a flexible approach to application runtime containment on bare-metal systems without the need to fully virtualize the workload. Red Hat Enterprise Linux 6 provides application-level containers to separate and control the application resource usage policies via cgroups and namespaces. This release includes basic management of the container life cycle by allowing creation, editing, and deletion of containers via the libvirt API and the virt-manager GUI. Linux Containers are a Technology Preview. Packages: libvirt-0.9.10-21 , virt-manager-0.9.0-14 Diagnostic pulse for the fence_ipmilan agent, BZ# 655764 A diagnostic pulse can now be issued on the IPMI interface using the fence_ipmilan agent. This new Technology Preview is used to force a kernel dump of a host if the host is configured to do so. Note that this feature is not a substitute for the off operation in a production cluster. Package: fence-agents-3.1.5-25
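As a rough sketch of how the Technology Preview thin and thin-pool targets are driven with dmsetup (the device names and sector counts are illustrative assumptions in the style of the upstream device-mapper documentation, not values from this release note):
# Create a 10 GiB thin pool; table format: 0 <length> thin-pool <metadata_dev> <data_dev> <data_block_size> <low_water_mark>
$ dmsetup create pool --table "0 20971520 thin-pool /dev/sdb1 /dev/sdb2 128 32768"
# Provision thin device 0 inside the pool, then activate a 1 GiB thin volume backed by it
$ dmsetup message /dev/mapper/pool 0 "create_thin 0"
$ dmsetup create thin --table "0 2097152 thin /dev/mapper/pool 0"
Snapshots are created the same way, by sending a create_snap message to the pool device and activating the new thin device.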
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/kernel_tp
Chapter 12. Configuring IP failover
Chapter 12. Configuring IP failover This topic describes configuring IP failover for pods and services on your OpenShift Container Platform cluster. IP failover uses Keepalived to host a set of externally accessible Virtual IP (VIP) addresses on a set of hosts. Each VIP address is only serviced by a single host at a time. Keepalived uses the Virtual Router Redundancy Protocol (VRRP) to determine which host, from the set of hosts, services which VIP. If a host becomes unavailable, or if the service that Keepalived is watching does not respond, the VIP is switched to another host from the set. This means a VIP is always serviced as long as a host is available. Every VIP in the set is serviced by a node selected from the set. As long as at least one node is available, the VIPs are served. There is no way to explicitly distribute the VIPs over the nodes, so there can be nodes with no VIPs and other nodes with many VIPs. If there is only one node, all VIPs are on it. The administrator must ensure that all of the VIP addresses meet the following requirements: Accessible on the configured hosts from outside the cluster. Not used for any other purpose within the cluster. Keepalived on each node determines whether the needed service is running. If it is, VIPs are supported and Keepalived participates in the negotiation to determine which node serves the VIP. For a node to participate, the service must be listening on the watch port on a VIP or the check must be disabled. Note Each VIP in the set might be served by a different node. IP failover monitors a port on each VIP to determine whether the port is reachable on the node. If the port is not reachable, the VIP is not assigned to the node. If the port is set to 0 , this check is suppressed. The check script does the needed testing. When a node running Keepalived passes the check script, the VIP on that node can enter the master state, based on its priority, the priority of the current master, and the preemption strategy. A cluster administrator can provide a script through the OPENSHIFT_HA_NOTIFY_SCRIPT variable, and this script is called whenever the state of the VIP on the node changes. A VIP is in the master state when Keepalived is servicing it, in the backup state when another node is servicing it, or in the fault state when the check script fails. The notify script is called with the new state whenever the state changes. You can create an IP failover deployment configuration on OpenShift Container Platform. The IP failover deployment configuration specifies the set of VIP addresses, and the set of nodes on which to service them. A cluster can have multiple IP failover deployment configurations, with each managing its own set of unique VIP addresses. Each node in the IP failover configuration runs an IP failover pod, and this pod runs Keepalived. When using VIPs to access a pod with host networking, the application pod runs on all nodes that are running the IP failover pods. This enables any of the IP failover nodes to become the master and service the VIPs when needed. If application pods are not running on all nodes with IP failover, either some IP failover nodes never service the VIPs or some application pods never receive any traffic. Use the same selector and replication count for both IP failover and the application pods to avoid this mismatch.
While using VIPs to access a service, any of the nodes can be in the IP failover set of nodes, since the service is reachable on all nodes, no matter where the application pod is running. Any of the IP failover nodes can become master at any time. The service can either use external IPs and a service port or it can use a NodePort . Setting up a NodePort is a privileged operation. When using external IPs in the service definition, the VIPs are set to the external IPs, and the IP failover monitoring port is set to the service port. When using a node port, the port is open on every node in the cluster, and the service load-balances traffic from whatever node currently services the VIP. In this case, the IP failover monitoring port is set to the NodePort in the service definition. Important Even though a service VIP is highly available, performance can still be affected. Keepalived makes sure that each of the VIPs is serviced by some node in the configuration, and several VIPs can end up on the same node even when other nodes have none. Strategies that externally load-balance across a set of VIPs can be thwarted when IP failover puts multiple VIPs on the same node. When you use ExternalIP , you can set up IP failover to have the same VIP range as the ExternalIP range. You can also disable the monitoring port. In this case, all of the VIPs appear on the same node in the cluster. Any user can set up a service with an ExternalIP and make it highly available. Important There are a maximum of 254 VIPs in the cluster. 12.1. IP failover environment variables The following table contains the variables used to configure IP failover. Table 12.1. IP failover environment variables Variable Name Default Description OPENSHIFT_HA_MONITOR_PORT 80 The IP failover pod tries to open a TCP connection to this port on each Virtual IP (VIP). If a connection is established, the service is considered to be running. If this port is set to 0 , the test always passes. OPENSHIFT_HA_NETWORK_INTERFACE The interface name that IP failover uses to send Virtual Router Redundancy Protocol (VRRP) traffic. The default value is eth0 . If your cluster uses the OVN-Kubernetes network plugin, set this value to br-ex to avoid packet loss: with OVN-Kubernetes, VRRP traffic is not served on arbitrary listening interfaces but is expected to arrive over the br-ex bridge. OPENSHIFT_HA_REPLICA_COUNT 2 The number of replicas to create. This must match the spec.replicas value in the IP failover deployment configuration. OPENSHIFT_HA_VIRTUAL_IPS The list of IP address ranges to replicate. This must be provided. For example, 1.2.3.4-6,1.2.3.9 . OPENSHIFT_HA_VRRP_ID_OFFSET 0 The offset value used to set the virtual router IDs. Using different offset values allows multiple IP failover configurations to exist within the same cluster. The default offset is 0 , and the allowed range is 0 through 255 . OPENSHIFT_HA_VIP_GROUPS The number of groups to create for VRRP. If not set, a group is created for each virtual IP range specified with the OPENSHIFT_HA_VIRTUAL_IPS variable. OPENSHIFT_HA_IPTABLES_CHAIN INPUT The name of the iptables chain to which an iptables rule allowing VRRP traffic is automatically added. If the value is not set, an iptables rule is not added. If the chain does not exist, it is not created. OPENSHIFT_HA_CHECK_SCRIPT The full path name in the pod file system of a script that is periodically run to verify the application is operating. OPENSHIFT_HA_CHECK_INTERVAL 2 The period, in seconds, that the check script is run.
OPENSHIFT_HA_NOTIFY_SCRIPT The full path name in the pod file system of a script that is run whenever the state changes. OPENSHIFT_HA_PREEMPTION preempt_delay 300 The strategy for handling a new higher priority host. The nopreempt strategy does not move master from the lower priority host to the higher priority host. 12.2. Configuring IP failover in your cluster As a cluster administrator, you can configure IP failover on an entire cluster, or on a subset of nodes, as defined by the label selector. You can also configure multiple IP failover deployments in your cluster, where each one is independent of the others. The IP failover deployment ensures that a failover pod runs on each of the nodes matching the constraints or the label used. This pod runs Keepalived, which can monitor an endpoint and use Virtual Router Redundancy Protocol (VRRP) to fail over the virtual IP (VIP) from one node to another if the first node cannot reach the service or endpoint. For production use, set a selector that selects at least two nodes, and set replicas equal to the number of selected nodes. Prerequisites You are logged in to the cluster as a user with cluster-admin privileges. You created a pull secret. Red Hat OpenStack Platform (RHOSP) only: You installed an RHOSP client (RHCOS documentation) on the target environment. You also downloaded the RHOSP openrc.sh rc file (RHCOS documentation) . Procedure Create an IP failover service account: $ oc create sa ipfailover Update security context constraints (SCC) for hostNetwork : $ oc adm policy add-scc-to-user privileged -z ipfailover $ oc adm policy add-scc-to-user hostnetwork -z ipfailover Red Hat OpenStack Platform (RHOSP) only: Complete the following steps to make a failover VIP address reachable on RHOSP ports. Use the RHOSP CLI to show the default RHOSP API and VIP addresses in the allowed_address_pairs parameter of your RHOSP cluster: $ openstack port show <cluster_name> -c allowed_address_pairs Output example *Field* *Value* allowed_address_pairs ip_address='192.168.0.5', mac_address='fa:16:3e:31:f9:cb' ip_address='192.168.0.7', mac_address='fa:16:3e:31:f9:cb' Set a different VIP address for the IP failover deployment and make the address reachable on RHOSP ports by entering the following command in the RHOSP CLI. Do not set any default RHOSP API and VIP addresses as the failover VIP address for the IP failover deployment. Example of adding the 1.1.1.1 failover IP address as an allowed address on RHOSP ports. $ openstack port set <cluster_name> --allowed-address ip-address=1.1.1.1,mac-address=fa:16:3e:31:f9:cb Create a deployment YAML file to configure IP failover for your deployment. See "Example deployment YAML for IP failover configuration" in a later step. Specify the following specification in the IP failover deployment so that you pass the failover VIP address to the OPENSHIFT_HA_VIRTUAL_IPS environment variable: Example of adding the 1.1.1.1 VIP address to OPENSHIFT_HA_VIRTUAL_IPS apiVersion: apps/v1 kind: Deployment metadata: name: ipfailover-keepalived # ... spec: env: - name: OPENSHIFT_HA_VIRTUAL_IPS value: "1.1.1.1" # ... Create a deployment YAML file to configure IP failover. Note For Red Hat OpenStack Platform (RHOSP), you do not need to re-create the deployment YAML file. You already created this file as part of the earlier instructions.
Example deployment YAML for IP failover configuration apiVersion: apps/v1 kind: Deployment metadata: name: ipfailover-keepalived 1 labels: ipfailover: hello-openshift spec: strategy: type: Recreate replicas: 2 selector: matchLabels: ipfailover: hello-openshift template: metadata: labels: ipfailover: hello-openshift spec: serviceAccountName: ipfailover privileged: true hostNetwork: true nodeSelector: node-role.kubernetes.io/worker: "" containers: - name: openshift-ipfailover image: registry.redhat.io/openshift4/ose-keepalived-ipfailover-rhel9:v4.17 ports: - containerPort: 63000 hostPort: 63000 imagePullPolicy: IfNotPresent securityContext: privileged: true volumeMounts: - name: lib-modules mountPath: /lib/modules readOnly: true - name: host-slash mountPath: /host readOnly: true mountPropagation: HostToContainer - name: etc-sysconfig mountPath: /etc/sysconfig readOnly: true - name: config-volume mountPath: /etc/keepalive env: - name: OPENSHIFT_HA_CONFIG_NAME value: "ipfailover" - name: OPENSHIFT_HA_VIRTUAL_IPS 2 value: "1.1.1.1-2" - name: OPENSHIFT_HA_VIP_GROUPS 3 value: "10" - name: OPENSHIFT_HA_NETWORK_INTERFACE 4 value: "ens3" #The host interface to assign the VIPs - name: OPENSHIFT_HA_MONITOR_PORT 5 value: "30060" - name: OPENSHIFT_HA_VRRP_ID_OFFSET 6 value: "0" - name: OPENSHIFT_HA_REPLICA_COUNT 7 value: "2" #Must match the number of replicas in the deployment - name: OPENSHIFT_HA_USE_UNICAST value: "false" #- name: OPENSHIFT_HA_UNICAST_PEERS #value: "10.0.148.40,10.0.160.234,10.0.199.110" - name: OPENSHIFT_HA_IPTABLES_CHAIN 8 value: "INPUT" #- name: OPENSHIFT_HA_NOTIFY_SCRIPT 9 # value: /etc/keepalive/mynotifyscript.sh - name: OPENSHIFT_HA_CHECK_SCRIPT 10 value: "/etc/keepalive/mycheckscript.sh" - name: OPENSHIFT_HA_PREEMPTION 11 value: "preempt_delay 300" - name: OPENSHIFT_HA_CHECK_INTERVAL 12 value: "2" livenessProbe: initialDelaySeconds: 10 exec: command: - pgrep - keepalived volumes: - name: lib-modules hostPath: path: /lib/modules - name: host-slash hostPath: path: / - name: etc-sysconfig hostPath: path: /etc/sysconfig # config-volume contains the check script # created with `oc create configmap keepalived-checkscript --from-file=mycheckscript.sh` - configMap: defaultMode: 0755 name: keepalived-checkscript name: config-volume imagePullSecrets: - name: openshift-pull-secret 13 1 The name of the IP failover deployment. 2 The list of IP address ranges to replicate. This must be provided. For example, 1.2.3.4-6,1.2.3.9 . 3 The number of groups to create for VRRP. If not set, a group is created for each virtual IP range specified with the OPENSHIFT_HA_VIRTUAL_IPS variable. 4 The interface name that IP failover uses to send VRRP traffic. By default, eth0 is used. 5 The IP failover pod tries to open a TCP connection to this port on each VIP. If a connection is established, the service is considered to be running. If this port is set to 0 , the test always passes. The default value is 80 . 6 The offset value used to set the virtual router IDs. Using different offset values allows multiple IP failover configurations to exist within the same cluster. The default offset is 0 , and the allowed range is 0 through 255 . 7 The number of replicas to create. This must match the spec.replicas value in the IP failover deployment configuration. The default value is 2 . 8 The name of the iptables chain to which an iptables rule allowing VRRP traffic is automatically added. If the value is not set, an iptables rule is not added. If the chain does not exist, it is not created, and Keepalived operates in unicast mode.
The default is INPUT . 9 The full path name in the pod file system of a script that is run whenever the state changes. 10 The full path name in the pod file system of a script that is periodically run to verify the application is operating. 11 The strategy for handling a new higher priority host. The default value is preempt_delay 300 , which causes a Keepalived instance to take over a VIP after 5 minutes if a lower-priority master is holding the VIP. 12 The period, in seconds, that the check script is run. The default value is 2 . 13 Create the pull secret before creating the deployment, otherwise you will get an error when creating the deployment. 12.3. Configuring check and notify scripts Keepalived monitors the health of the application by periodically running an optional user-supplied check script. For example, the script can test a web server by issuing a request and verifying the response. As cluster administrator, you can provide an optional notify script, which is called whenever the state changes. The check and notify scripts run in the IP failover pod and use the pod file system, not the host file system. However, the IP failover pod makes the host file system available under the /host mount path. When configuring a check or notify script, you must provide the full path to the script. The recommended approach for providing the scripts is to use a ConfigMap object. The full path names of the check and notify scripts are added to the Keepalived configuration file, /etc/keepalived/keepalived.conf , which is loaded every time Keepalived starts. The scripts can be added to the pod with a ConfigMap object as described in the following methods. Check script When a check script is not provided, a simple default script is run that tests the TCP connection. This default test is suppressed when the monitor port is 0 . Each IP failover pod manages a Keepalived daemon that manages one or more virtual IP (VIP) addresses on the node where the pod is running. The Keepalived daemon keeps the state of each VIP for that node. A particular VIP on a particular node might be in master , backup , or fault state. If the check script returns non-zero, the node enters the backup state, and any VIPs it holds are reassigned. Notify script Keepalived passes the following three parameters to the notify script: $1 - group or instance $2 - Name of the group or instance $3 - The new state: master , backup , or fault A minimal notify script sketch is shown at the end of this chapter. Prerequisites You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. Procedure Create the desired script and create a ConfigMap object to hold it. The script has no input arguments and must return 0 for OK and 1 for fail . The check script, mycheckscript.sh : #!/bin/bash # Whatever tests are needed # E.g., send request and verify response exit 0 Create the ConfigMap object : $ oc create configmap mycustomcheck --from-file=mycheckscript.sh Add the script to the pod. The defaultMode for the mounted ConfigMap object files must allow execution; set it by using oc commands or by editing the deployment configuration. A value of 0755 , 493 decimal, is typical: $ oc set env deploy/ipfailover-keepalived \ OPENSHIFT_HA_CHECK_SCRIPT=/etc/keepalive/mycheckscript.sh $ oc set volume deploy/ipfailover-keepalived --add --overwrite \ --name=config-volume \ --mount-path=/etc/keepalive \ --source='{"configMap": { "name": "mycustomcheck", "defaultMode": 493}}' Note The oc set env command is whitespace sensitive.
There must be no whitespace on either side of the = sign. Tip You can alternatively edit the ipfailover-keepalived deployment configuration: $ oc edit deploy ipfailover-keepalived spec: containers: - env: - name: OPENSHIFT_HA_CHECK_SCRIPT 1 value: /etc/keepalive/mycheckscript.sh ... volumeMounts: 2 - mountPath: /etc/keepalive name: config-volume dnsPolicy: ClusterFirst ... volumes: 3 - configMap: defaultMode: 0755 4 name: customrouter name: config-volume ... 1 In the spec.container.env field, add the OPENSHIFT_HA_CHECK_SCRIPT environment variable to point to the mounted script file. 2 Add the spec.container.volumeMounts field to create the mount point. 3 Add a new spec.volumes field to mention the config map. 4 This sets execute permission on the files. When read back, the mode is displayed in decimal, 493 . Save the changes and exit the editor. This restarts ipfailover-keepalived . 12.4. Configuring VRRP preemption When a Virtual IP (VIP) on a node leaves the fault state by passing the check script, the VIP on the node enters the backup state if it has lower priority than the VIP on the node that is currently in the master state. The nopreempt strategy does not move master from the lower priority VIP on the host to the higher priority VIP on the host. With preempt_delay 300 , the default, Keepalived waits the specified 300 seconds and moves master to the higher priority VIP on the host. Procedure To specify preemption, enter oc edit deploy ipfailover-keepalived to edit the deployment configuration: $ oc edit deploy ipfailover-keepalived ... spec: containers: - env: - name: OPENSHIFT_HA_PREEMPTION 1 value: preempt_delay 300 ... 1 Set the OPENSHIFT_HA_PREEMPTION value: preempt_delay 300 : Keepalived waits the specified 300 seconds and moves master to the higher priority VIP on the host. This is the default value. nopreempt : does not move master from the lower priority VIP on the host to the higher priority VIP on the host. 12.5. Deploying multiple IP failover instances Each IP failover pod managed by the IP failover deployment configuration, 1 pod per node or replica, runs a Keepalived daemon. As more IP failover deployment configurations are configured, more pods are created and more daemons join into the common Virtual Router Redundancy Protocol (VRRP) negotiation. This negotiation is done by all the Keepalived daemons and it determines which nodes service which virtual IPs (VIP). Internally, Keepalived assigns a unique vrrp-id to each VIP. The negotiation uses this set of vrrp-ids ; when a decision is made, the VIP corresponding to the winning vrrp-id is serviced on the winning node. Therefore, for every VIP defined in the IP failover deployment configuration, the IP failover pod must assign a corresponding vrrp-id . This is done by starting at OPENSHIFT_HA_VRRP_ID_OFFSET and sequentially assigning the vrrp-ids to the list of VIPs. The vrrp-ids can have values in the range 1..255 . When there are multiple IP failover deployment configurations, you must specify OPENSHIFT_HA_VRRP_ID_OFFSET so that there is room to increase the number of VIPs in the deployment configuration and none of the vrrp-id ranges overlap. 12.6. Configuring IP failover for more than 254 addresses IP failover management is limited to 254 groups of Virtual IP (VIP) addresses. By default, OpenShift Container Platform assigns one IP address to each group.
You can use the OPENSHIFT_HA_VIP_GROUPS variable to change this so that multiple IP addresses are in each group, and to define the number of VIP groups available for each Virtual Router Redundancy Protocol (VRRP) instance when configuring IP failover. Grouping VIPs creates a wider range of allocation of VIPs per VRRP in the case of VRRP failover events, and is useful when all hosts in the cluster have access to a service locally. For example, when a service is being exposed with an ExternalIP . Note As a rule for failover, do not limit services, such as the router, to one specific host. Instead, services should be replicated to each host so that in the case of IP failover, the services do not have to be recreated on the new host. Note If you are using OpenShift Container Platform health checks, the nature of IP failover and groups means that not all instances in the group are checked. For that reason, the Kubernetes health checks must be used to ensure that services are live. Prerequisites You are logged in to the cluster with a user with cluster-admin privileges. Procedure To change the number of IP addresses assigned to each group, change the value for the OPENSHIFT_HA_VIP_GROUPS variable, for example: Example Deployment YAML for IP failover configuration ... spec: env: - name: OPENSHIFT_HA_VIP_GROUPS 1 value: "3" ... 1 If OPENSHIFT_HA_VIP_GROUPS is set to 3 in an environment with seven VIPs, it creates three groups, assigning three VIPs to the first group, and two VIPs to each of the two remaining groups. Note If the number of groups set by OPENSHIFT_HA_VIP_GROUPS is fewer than the number of IP addresses set to fail over, the group contains more than one IP address, and all of the addresses move as a single unit. 12.7. High availability For ExternalIP In non-cloud clusters, IP failover and ExternalIP to a service can be combined. The result is high availability services for users that create services using ExternalIP . The approach is to specify a spec.ExternalIP.autoAssignCIDRs range of the cluster network configuration, and then use the same range in creating the IP failover configuration. Because IP failover can support a maximum of 254 VIPs for the entire cluster, the spec.ExternalIP.autoAssignCIDRs must be /24 or smaller. Additional resources Configuration for ExternalIP Kubernetes documentation on ExternalIP 12.8. Removing IP failover When IP failover is initially configured, the worker nodes in the cluster are modified with an iptables rule that explicitly allows multicast packets on 224.0.0.18 for Keepalived. Because of the change to the nodes, removing IP failover requires running a job to remove the iptables rule and removing the virtual IP addresses used by Keepalived.
Procedure Optional: Identify and delete any check and notify scripts that are stored as config maps: Identify whether any pods for IP failover use a config map as a volume: $ oc get pod -l ipfailover \ -o jsonpath="\ {range .items[?(@.spec.volumes[*].configMap)]} {'Namespace: '}{.metadata.namespace} {'Pod: '}{.metadata.name} {'Volumes that use config maps:'} {range .spec.volumes[?(@.configMap)]} {'volume: '}{.name} {'configMap: '}{.configMap.name}{'\n'}{end} {end}" Example output Namespace: default Pod: keepalived-worker-59df45db9c-2x9mn Volumes that use config maps: volume: config-volume configMap: mycustomcheck If the preceding step provided the names of config maps that are used as volumes, delete the config maps: $ oc delete configmap <configmap_name> Identify an existing deployment for IP failover: $ oc get deployment -l ipfailover Example output NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE default ipfailover 2/2 2 2 105d Delete the deployment: $ oc delete deployment <ipfailover_deployment_name> Remove the ipfailover service account: $ oc delete sa ipfailover Run a job that removes the IP tables rule that was added when IP failover was initially configured: Create a file such as remove-ipfailover-job.yaml with contents that are similar to the following example: apiVersion: batch/v1 kind: Job metadata: generateName: remove-ipfailover- labels: app: remove-ipfailover spec: template: metadata: name: remove-ipfailover spec: containers: - name: remove-ipfailover image: registry.redhat.io/openshift4/ose-keepalived-ipfailover-rhel9:v4.17 command: ["/var/lib/ipfailover/keepalived/remove-failover.sh"] nodeSelector: 1 kubernetes.io/hostname: <host_name> 2 restartPolicy: Never 1 The nodeSelector is likely the same as the selector used in the old IP failover deployment. 2 Run the job for each node in your cluster that was configured for IP failover and replace the hostname each time. Run the job: $ oc create -f remove-ipfailover-job.yaml Example output job.batch/remove-ipfailover-2h8dm created Verification Confirm that the job removed the initial configuration for IP failover. $ oc logs job/remove-ipfailover-2h8dm Example output remove-failover.sh: OpenShift IP Failover service terminating. - Removing ip_vs module ... - Cleaning up ... - Releasing VIPs (interface eth0) ...
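Section 12.3 documents the notify script parameters but shows only a check script. The following is a minimal sketch of a notify script that does nothing except log each state transition; the logging behavior is an assumption for illustration, not part of the product. Mount it through a config map and reference it with the OPENSHIFT_HA_NOTIFY_SCRIPT environment variable, the same way the check script is wired up:
#!/bin/bash
# Keepalived calls this script with:
#   $1 - "GROUP" or "INSTANCE"
#   $2 - name of the group or instance
#   $3 - the new state: master, backup, or fault
logger -t ipfailover "notify: $2 entered state $3"
exit 0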
[ "oc create sa ipfailover", "oc adm policy add-scc-to-user privileged -z ipfailover", "oc adm policy add-scc-to-user hostnetwork -z ipfailover", "openstack port show <cluster_name> -c allowed_address_pairs", "*Field* *Value* allowed_address_pairs ip_address='192.168.0.5', mac_address='fa:16:3e:31:f9:cb' ip_address='192.168.0.7', mac_address='fa:16:3e:31:f9:cb'", "openstack port set <cluster_name> --allowed-address ip-address=1.1.1.1,mac-address=fa:fa:16:3e:31:f9:cb", "apiVersion: apps/v1 kind: Deployment metadata: name: ipfailover-keepalived spec: env: - name: OPENSHIFT_HA_VIRTUAL_IPS value: \"1.1.1.1\"", "apiVersion: apps/v1 kind: Deployment metadata: name: ipfailover-keepalived 1 labels: ipfailover: hello-openshift spec: strategy: type: Recreate replicas: 2 selector: matchLabels: ipfailover: hello-openshift template: metadata: labels: ipfailover: hello-openshift spec: serviceAccountName: ipfailover privileged: true hostNetwork: true nodeSelector: node-role.kubernetes.io/worker: \"\" containers: - name: openshift-ipfailover image: registry.redhat.io/openshift4/ose-keepalived-ipfailover-rhel9:v4.17 ports: - containerPort: 63000 hostPort: 63000 imagePullPolicy: IfNotPresent securityContext: privileged: true volumeMounts: - name: lib-modules mountPath: /lib/modules readOnly: true - name: host-slash mountPath: /host readOnly: true mountPropagation: HostToContainer - name: etc-sysconfig mountPath: /etc/sysconfig readOnly: true - name: config-volume mountPath: /etc/keepalive env: - name: OPENSHIFT_HA_CONFIG_NAME value: \"ipfailover\" - name: OPENSHIFT_HA_VIRTUAL_IPS 2 value: \"1.1.1.1-2\" - name: OPENSHIFT_HA_VIP_GROUPS 3 value: \"10\" - name: OPENSHIFT_HA_NETWORK_INTERFACE 4 value: \"ens3\" #The host interface to assign the VIPs - name: OPENSHIFT_HA_MONITOR_PORT 5 value: \"30060\" - name: OPENSHIFT_HA_VRRP_ID_OFFSET 6 value: \"0\" - name: OPENSHIFT_HA_REPLICA_COUNT 7 value: \"2\" #Must match the number of replicas in the deployment - name: OPENSHIFT_HA_USE_UNICAST value: \"false\" #- name: OPENSHIFT_HA_UNICAST_PEERS #value: \"10.0.148.40,10.0.160.234,10.0.199.110\" - name: OPENSHIFT_HA_IPTABLES_CHAIN 8 value: \"INPUT\" #- name: OPENSHIFT_HA_NOTIFY_SCRIPT 9 # value: /etc/keepalive/mynotifyscript.sh - name: OPENSHIFT_HA_CHECK_SCRIPT 10 value: \"/etc/keepalive/mycheckscript.sh\" - name: OPENSHIFT_HA_PREEMPTION 11 value: \"preempt_delay 300\" - name: OPENSHIFT_HA_CHECK_INTERVAL 12 value: \"2\" livenessProbe: initialDelaySeconds: 10 exec: command: - pgrep - keepalived volumes: - name: lib-modules hostPath: path: /lib/modules - name: host-slash hostPath: path: / - name: etc-sysconfig hostPath: path: /etc/sysconfig # config-volume contains the check script # created with `oc create configmap keepalived-checkscript --from-file=mycheckscript.sh` - configMap: defaultMode: 0755 name: keepalived-checkscript name: config-volume imagePullSecrets: - name: openshift-pull-secret 13", "#!/bin/bash # Whatever tests are needed # E.g., send request and verify response exit 0", "oc create configmap mycustomcheck --from-file=mycheckscript.sh", "oc set env deploy/ipfailover-keepalived OPENSHIFT_HA_CHECK_SCRIPT=/etc/keepalive/mycheckscript.sh", "oc set volume deploy/ipfailover-keepalived --add --overwrite --name=config-volume --mount-path=/etc/keepalive --source='{\"configMap\": { \"name\": \"mycustomcheck\", \"defaultMode\": 493}}'", "oc edit deploy ipfailover-keepalived", "spec: containers: - env: - name: OPENSHIFT_HA_CHECK_SCRIPT 1 value: /etc/keepalive/mycheckscript.sh volumeMounts: 2 - mountPath: /etc/keepalive 
name: config-volume dnsPolicy: ClusterFirst volumes: 3 - configMap: defaultMode: 0755 4 name: customrouter name: config-volume", "oc edit deploy ipfailover-keepalived", "spec: containers: - env: - name: OPENSHIFT_HA_PREEMPTION 1 value: preempt_delay 300", "spec: env: - name: OPENSHIFT_HA_VIP_GROUPS 1 value: \"3\"", "oc get pod -l ipfailover -o jsonpath=\" {range .items[?(@.spec.volumes[*].configMap)]} {'Namespace: '}{.metadata.namespace} {'Pod: '}{.metadata.name} {'Volumes that use config maps:'} {range .spec.volumes[?(@.configMap)]} {'volume: '}{.name} {'configMap: '}{.configMap.name}{'\\n'}{end} {end}\"", "Namespace: default Pod: keepalived-worker-59df45db9c-2x9mn Volumes that use config maps: volume: config-volume configMap: mycustomcheck", "oc delete configmap <configmap_name>", "oc get deployment -l ipfailover", "NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE default ipfailover 2/2 2 2 105d", "oc delete deployment <ipfailover_deployment_name>", "oc delete sa ipfailover", "apiVersion: batch/v1 kind: Job metadata: generateName: remove-ipfailover- labels: app: remove-ipfailover spec: template: metadata: name: remove-ipfailover spec: containers: - name: remove-ipfailover image: registry.redhat.io/openshift4/ose-keepalived-ipfailover-rhel9:v4.17 command: [\"/var/lib/ipfailover/keepalived/remove-failover.sh\"] nodeSelector: 1 kubernetes.io/hostname: <host_name> 2 restartPolicy: Never", "oc create -f remove-ipfailover-job.yaml", "job.batch/remove-ipfailover-2h8dm created", "oc logs job/remove-ipfailover-2h8dm", "remove-failover.sh: OpenShift IP Failover service terminating. - Removing ip_vs module - Cleaning up - Releasing VIPs (interface eth0)" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/networking/configuring-ipfailover
Chapter 3. Verifying OpenShift Data Foundation deployment
Chapter 3. Verifying OpenShift Data Foundation deployment Use this section to verify that OpenShift Data Foundation is deployed correctly. 3.1. Verifying the state of the pods Procedure Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see the following table: Set filter for Running and Completed pods to verify that the following pods are in Running and Completed state: Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) ux-backend-server-* (1 pod on any storage node) ocs-client-operator-* (1 pod on any storage node) ocs-client-operator-console-* (1 pod on any storage node) ocs-provider-server-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) MON rook-ceph-mon-* (3 pods distributed across storage nodes) MGR rook-ceph-mgr-* (1 pod on any storage node) MDS rook-ceph-mds-ocs-storagecluster-cephfilesystem-* (2 pods distributed across storage nodes) CSI cephfs csi-cephfsplugin-* (1 pod on each storage node) csi-cephfsplugin-provisioner-* (2 pods distributed across storage nodes) rbd csi-rbdplugin-* (1 pod on each storage node) csi-rbdplugin-provisioner-* (2 pods distributed across storage nodes) rook-ceph-crashcollector rook-ceph-crashcollector-* (1 pod on each storage node) OSD rook-ceph-osd-* (1 pod for each device) rook-ceph-osd-prepare-ocs-deviceset-* (1 pod for each device) 3.2. Verifying the OpenShift Data Foundation cluster is healthy Procedure In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Block and File tab, verify that the Storage Cluster has a green tick. In the Details card, verify that the cluster information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation . 3.3. Verifying the Multicloud Object Gateway is healthy Procedure In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the object service dashboard, see Monitoring OpenShift Data Foundation . Important The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means that if the NooBaa DB PVC is corrupted and cannot be recovered, it can result in total data loss of applicative data residing on the Multicloud Object Gateway.
Because of this, Red Hat recommends taking a backup of the NooBaa DB PVC regularly. If NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgebase article . 3.4. Verifying that the specific storage classes exist Procedure Click Storage Storage Classes from the left pane of the OpenShift Web Console. Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation: ocs-storagecluster-ceph-rbd ocs-storagecluster-cephfs openshift-storage.noobaa.io
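If you prefer the command line to the web console, rough CLI equivalents of the pod and storage class checks above (a sketch assuming the default openshift-storage namespace) are:
$ oc get pods -n openshift-storage
$ oc get storageclasses | grep -E 'ocs-storagecluster|noobaa'
All listed pods should be in the Running or Completed state, and the three storage classes named above should appear in the output.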
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_openshift_data_foundation_using_microsoft_azure/verifying_openshift_data_foundation_deployment
6.13. RHEA-2013:0289 - new package: mtdev
6.13. RHEA-2013:0289 - new package: mtdev A new mtdev package is now available for Red Hat Enterprise Linux 6. The new mtdev package contains a library that converts kernel input events from multitouch protocol A into multitouch protocol B events. Protocol B events provide per-touchpoint tracking, which is required by the xorg-x11-drv-evdev and xorg-x11-drv-synaptics packages. This enhancement update adds the mtdev package to Red Hat Enterprise Linux 6. (BZ#860177) All users who require mtdev should install this new package.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/rhea-2013-0289
Chapter 12. Installing a cluster on user-provisioned infrastructure in AWS by using CloudFormation templates
Chapter 12. Installing a cluster on user-provisioned infrastructure in AWS by using CloudFormation templates In OpenShift Container Platform version 4.14, you can install a cluster on Amazon Web Services (AWS) that uses infrastructure that you provide. One way to create this infrastructure is to use the provided CloudFormation templates. You can modify the templates to customize your infrastructure or use the information that they contain to create AWS objects according to your company's policies. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. Several CloudFormation templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example. 12.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an AWS account to host the cluster. Important If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or UNIX) in the AWS documentation. If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain long-term credentials . 12.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 12.3. 
Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 12.3.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 12.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can use either Red Hat Enterprise Linux CoreOS (RHCOS) or Red Hat Enterprise Linux (RHEL) 8.6 and later. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 12.3.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 12.2. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes, which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines was deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see RHEL Architectures .
If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use in OpenShift Container Platform. Additional resources Optimizing storage 12.3.3. Tested instance types for AWS The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation". Example 12.1. Machine types based on 64-bit x86 architecture c4.* c5.* c5a.* i3.* m4.* m5.* m5a.* m6i.* r4.* r5.* r5a.* r6i.* t3.* t3a.* 12.3.4. Tested instance types for AWS on 64-bit ARM infrastructures The following Amazon Web Services (AWS) 64-bit ARM instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS ARM instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation". Example 12.2. Machine types based on 64-bit ARM architecture c6g.* m6g.* r8g.* 12.3.5. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 12.4. Required AWS infrastructure components To install OpenShift Container Platform on user-provisioned infrastructure in Amazon Web Services (AWS), you must manually create both the machines and their supporting infrastructure. For more information about the integration testing for different platforms, see the OpenShift Container Platform 4.x Tested Integrations page. By using the provided CloudFormation templates, you can create stacks of AWS resources that represent the following components: An AWS Virtual Private Cloud (VPC) Networking and load balancing components Security groups and roles An OpenShift Container Platform bootstrap node OpenShift Container Platform control plane nodes An OpenShift Container Platform compute node Alternatively, you can manually create the components or you can reuse existing infrastructure that meets the cluster requirements. Review the CloudFormation templates for more details about how the components interrelate. 12.4.1. Other infrastructure components A VPC DNS entries Load balancers (classic or network) and listeners A public and a private Route 53 zone Security groups IAM roles S3 buckets If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2, ELB, and S3 endpoints. Depending on the level to which you want to restrict internet traffic during the installation, the following configuration options are available: Option 1: Create VPC endpoints Create a VPC endpoint and attach it to the subnets that the clusters are using.
Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com With this option, network traffic remains private between your VPC and the required AWS services. Option 2: Create a proxy without VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy. With this option, internet traffic goes through the proxy to reach the required AWS services. Option 3: Create a proxy with VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy with VPC endpoints. Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com When configuring the proxy in the install-config.yaml file, add these endpoints to the noProxy field. With this option, the proxy prevents the cluster from accessing the internet directly. However, network traffic remains private between your VPC and the required AWS services. Required VPC components You must provide a suitable VPC and subnets that allow communication to your machines. Component AWS type Description VPC AWS::EC2::VPC AWS::EC2::VPCEndpoint You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3. Public subnets AWS::EC2::Subnet AWS::EC2::SubnetNetworkAclAssociation Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules. Internet gateway AWS::EC2::InternetGateway AWS::EC2::VPCGatewayAttachment AWS::EC2::RouteTable AWS::EC2::Route AWS::EC2::SubnetRouteTableAssociation AWS::EC2::NatGateway AWS::EC2::EIP You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios. Network access control AWS::EC2::NetworkAcl AWS::EC2::NetworkAclEntry You must allow the VPC to access the following ports: Port Reason 80 Inbound HTTP traffic 443 Inbound HTTPS traffic 22 Inbound SSH traffic 1024 - 65535 Inbound ephemeral traffic 0 - 65535 Outbound ephemeral traffic Private subnets AWS::EC2::Subnet AWS::EC2::RouteTable AWS::EC2::SubnetRouteTableAssociation Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them. Required DNS and load balancing components Your DNS and load balancer configuration needs to use a public hosted zone and can use a private hosted zone similar to the one that the installation program uses if it provisions the cluster's infrastructure. You must create a DNS entry that resolves to your load balancer. An entry for api.<cluster_name>.<domain> must point to the external load balancer, and an entry for api-int.<cluster_name>.<domain> must point to the internal load balancer. The cluster also requires load balancers and listeners for port 6443, which is required for the Kubernetes API and its extensions, and port 22623, which is required for the Ignition config files for new machines. The targets will be the control plane nodes. Port 6443 must be accessible to both clients external to the cluster and nodes within the cluster. Port 22623 must be accessible to nodes within the cluster.
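As an informal reachability check after the DNS records and load balancers are in place, you can probe the listeners; the host names are placeholders for your cluster values, and this sketch assumes the curl utility is available on a host with network access to the load balancers:
$ curl -k https://api.<cluster_name>.<domain>:6443/version  # external API listener
$ curl -k https://api-int.<cluster_name>.<domain>:22623/config/master  # internal Ignition listener, from inside the cluster network only
Any TLS-level response, even an authorization error, indicates that the listener is forwarding traffic to the control plane targets.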
Component AWS type Description DNS AWS::Route53::HostedZone The hosted zone for your internal DNS. Public load balancer AWS::ElasticLoadBalancingV2::LoadBalancer The load balancer for your public subnets. External API server record AWS::Route53::RecordSetGroup Alias records for the external API server. External listener AWS::ElasticLoadBalancingV2::Listener A listener on port 6443 for the external load balancer. External target group AWS::ElasticLoadBalancingV2::TargetGroup The target group for the external load balancer. Private load balancer AWS::ElasticLoadBalancingV2::LoadBalancer The load balancer for your private subnets. Internal API server record AWS::Route53::RecordSetGroup Alias records for the internal API server. Internal listener AWS::ElasticLoadBalancingV2::Listener A listener on port 22623 for the internal load balancer. Internal target group AWS::ElasticLoadBalancingV2::TargetGroup The target group for the internal load balancer. Internal listener AWS::ElasticLoadBalancingV2::Listener A listener on port 6443 for the internal load balancer. Internal target group AWS::ElasticLoadBalancingV2::TargetGroup The target group for the internal load balancer. Security groups The control plane and worker machines require access to the following ports: Group Type IP Protocol Port range MasterSecurityGroup AWS::EC2::SecurityGroup icmp 0 tcp 22 tcp 6443 tcp 22623 WorkerSecurityGroup AWS::EC2::SecurityGroup icmp 0 tcp 22 BootstrapSecurityGroup AWS::EC2::SecurityGroup tcp 22 tcp 19531 Control plane Ingress The control plane machines require the following Ingress groups. Each Ingress group is an AWS::EC2::SecurityGroupIngress resource. Ingress group Description IP protocol Port range MasterIngressEtcd etcd tcp 2379 - 2380 MasterIngressVxlan Vxlan packets udp 4789 MasterIngressWorkerVxlan Vxlan packets udp 4789 MasterIngressInternal Internal cluster communication and Kubernetes proxy metrics tcp 9000 - 9999 MasterIngressWorkerInternal Internal cluster communication tcp 9000 - 9999 MasterIngressKube Kubernetes kubelet, scheduler and controller manager tcp 10250 - 10259 MasterIngressWorkerKube Kubernetes kubelet, scheduler and controller manager tcp 10250 - 10259 MasterIngressIngressServices Kubernetes Ingress services tcp 30000 - 32767 MasterIngressWorkerIngressServices Kubernetes Ingress services tcp 30000 - 32767 MasterIngressGeneve Geneve packets udp 6081 MasterIngressWorkerGeneve Geneve packets udp 6081 MasterIngressIpsecIke IPsec IKE packets udp 500 MasterIngressWorkerIpsecIke IPsec IKE packets udp 500 MasterIngressIpsecNat IPsec NAT-T packets udp 4500 MasterIngressWorkerIpsecNat IPsec NAT-T packets udp 4500 MasterIngressIpsecEsp IPsec ESP packets 50 All MasterIngressWorkerIpsecEsp IPsec ESP packets 50 All MasterIngressInternalUDP Internal cluster communication udp 9000 - 9999 MasterIngressWorkerInternalUDP Internal cluster communication udp 9000 - 9999 MasterIngressIngressServicesUDP Kubernetes Ingress services udp 30000 - 32767 MasterIngressWorkerIngressServicesUDP Kubernetes Ingress services udp 30000 - 32767
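If you create these security groups and rules manually rather than with the provided templates, each row in the tables corresponds to one ingress rule. For example, a sketch of adding the MasterIngressEtcd rule with the AWS CLI, where the security group ID is a placeholder:
$ aws ec2 authorize-security-group-ingress \
    --group-id <master_sg_id> \
    --protocol tcp \
    --port 2379-2380 \
    --source-group <master_sg_id>  # allow etcd traffic between control plane machines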
Worker Ingress The worker machines require the following Ingress groups. Each Ingress group is an AWS::EC2::SecurityGroupIngress resource. Ingress group Description IP protocol Port range WorkerIngressVxlan Vxlan packets udp 4789 WorkerIngressWorkerVxlan Vxlan packets udp 4789 WorkerIngressInternal Internal cluster communication tcp 9000 - 9999 WorkerIngressWorkerInternal Internal cluster communication tcp 9000 - 9999 WorkerIngressKube Kubernetes kubelet, scheduler, and controller manager tcp 10250 WorkerIngressWorkerKube Kubernetes kubelet, scheduler, and controller manager tcp 10250 WorkerIngressIngressServices Kubernetes Ingress services tcp 30000 - 32767 WorkerIngressWorkerIngressServices Kubernetes Ingress services tcp 30000 - 32767 WorkerIngressGeneve Geneve packets udp 6081 WorkerIngressMasterGeneve Geneve packets udp 6081 WorkerIngressIpsecIke IPsec IKE packets udp 500 WorkerIngressMasterIpsecIke IPsec IKE packets udp 500 WorkerIngressIpsecNat IPsec NAT-T packets udp 4500 WorkerIngressMasterIpsecNat IPsec NAT-T packets udp 4500 WorkerIngressIpsecEsp IPsec ESP packets 50 All WorkerIngressMasterIpsecEsp IPsec ESP packets 50 All WorkerIngressInternalUDP Internal cluster communication udp 9000 - 9999 WorkerIngressMasterInternalUDP Internal cluster communication udp 9000 - 9999 WorkerIngressIngressServicesUDP Kubernetes Ingress services udp 30000 - 32767 WorkerIngressMasterIngressServicesUDP Kubernetes Ingress services udp 30000 - 32767 Roles and instance profiles You must grant the machines permissions in AWS. The provided CloudFormation templates grant the machines Allow permissions for the following AWS::IAM::Role objects and provide an AWS::IAM::InstanceProfile for each set of roles. If you do not use the templates, you can grant the machines the following broad permissions or the following individual permissions. Role Effect Action Resource Master Allow ec2:* * Allow elasticloadbalancing:* * Allow iam:PassRole * Allow s3:GetObject * Worker Allow ec2:Describe* * Bootstrap Allow ec2:Describe* * Allow ec2:AttachVolume * Allow ec2:DetachVolume * 12.4.2. Cluster machines You need AWS::EC2::Instance objects for the following machines: A bootstrap machine. This machine is required during installation, but you can remove it after your cluster deploys. Three control plane machines. The control plane machines are not governed by a control plane machine set. Compute machines. You must create at least two compute machines, which are also known as worker machines, during installation. These machines are not governed by a compute machine set. 12.4.3. Required AWS permissions for the IAM user Note Your IAM user must have the permission tag:GetResources in the region us-east-1 to delete the base cluster resources. As part of the AWS API requirement, the OpenShift Container Platform installation program performs various actions in this region. When you attach the AdministratorAccess policy to the IAM user that you create in Amazon Web Services (AWS), you grant that user all of the required permissions.
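For example, a minimal sketch of granting that broad access with the AWS CLI, assuming the IAM user already exists and with the user name as a placeholder:
$ aws iam attach-user-policy \
    --user-name <installer_user> \
    --policy-arn arn:aws:iam::aws:policy/AdministratorAccess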
To deploy all components of an OpenShift Container Platform cluster, the IAM user requires the following permissions: Example 12.3. Required EC2 permissions for installation ec2:AttachNetworkInterface ec2:AuthorizeSecurityGroupEgress ec2:AuthorizeSecurityGroupIngress ec2:CopyImage ec2:CreateNetworkInterface ec2:CreateSecurityGroup ec2:CreateTags ec2:CreateVolume ec2:DeleteSecurityGroup ec2:DeleteSnapshot ec2:DeleteTags ec2:DeregisterImage ec2:DescribeAccountAttributes ec2:DescribeAddresses ec2:DescribeAvailabilityZones ec2:DescribeDhcpOptions ec2:DescribeImages ec2:DescribeInstanceAttribute ec2:DescribeInstanceCreditSpecifications ec2:DescribeInstances ec2:DescribeInstanceTypes ec2:DescribeInternetGateways ec2:DescribeKeyPairs ec2:DescribeNatGateways ec2:DescribeNetworkAcls ec2:DescribeNetworkInterfaces ec2:DescribePrefixLists ec2:DescribeRegions ec2:DescribeRouteTables ec2:DescribeSecurityGroupRules ec2:DescribeSecurityGroups ec2:DescribeSubnets ec2:DescribeTags ec2:DescribeVolumes ec2:DescribeVpcAttribute ec2:DescribeVpcClassicLink ec2:DescribeVpcClassicLinkDnsSupport ec2:DescribeVpcEndpoints ec2:DescribeVpcs ec2:GetEbsDefaultKmsKeyId ec2:ModifyInstanceAttribute ec2:ModifyNetworkInterfaceAttribute ec2:RevokeSecurityGroupEgress ec2:RevokeSecurityGroupIngress ec2:RunInstances ec2:TerminateInstances Example 12.4. Required permissions for creating network resources during installation ec2:AllocateAddress ec2:AssociateAddress ec2:AssociateDhcpOptions ec2:AssociateRouteTable ec2:AttachInternetGateway ec2:CreateDhcpOptions ec2:CreateInternetGateway ec2:CreateNatGateway ec2:CreateRoute ec2:CreateRouteTable ec2:CreateSubnet ec2:CreateVpc ec2:CreateVpcEndpoint ec2:ModifySubnetAttribute ec2:ModifyVpcAttribute Note If you use an existing Virtual Private Cloud (VPC), your account does not require these permissions for creating network resources. Example 12.5. Required Elastic Load Balancing (ELB) permissions for installation elasticloadbalancing:AddTags elasticloadbalancing:ApplySecurityGroupsToLoadBalancer elasticloadbalancing:AttachLoadBalancerToSubnets elasticloadbalancing:ConfigureHealthCheck elasticloadbalancing:CreateListener elasticloadbalancing:CreateLoadBalancer elasticloadbalancing:CreateLoadBalancerListeners elasticloadbalancing:CreateTargetGroup elasticloadbalancing:DeleteLoadBalancer elasticloadbalancing:DeregisterInstancesFromLoadBalancer elasticloadbalancing:DeregisterTargets elasticloadbalancing:DescribeInstanceHealth elasticloadbalancing:DescribeListeners elasticloadbalancing:DescribeLoadBalancerAttributes elasticloadbalancing:DescribeLoadBalancers elasticloadbalancing:DescribeTags elasticloadbalancing:DescribeTargetGroupAttributes elasticloadbalancing:DescribeTargetHealth elasticloadbalancing:ModifyLoadBalancerAttributes elasticloadbalancing:ModifyTargetGroup elasticloadbalancing:ModifyTargetGroupAttributes elasticloadbalancing:RegisterInstancesWithLoadBalancer elasticloadbalancing:RegisterTargets elasticloadbalancing:SetLoadBalancerPoliciesOfListener Important OpenShift Container Platform uses both the ELB and ELBv2 API services to provision load balancers. The permission list shows permissions required by both services. A known issue exists in the AWS web console where both services use the same elasticloadbalancing action prefix but do not recognize the same actions. You can ignore the warnings about the service not recognizing certain elasticloadbalancing actions. Example 12.6.
Required IAM permissions for installation iam:AddRoleToInstanceProfile iam:CreateInstanceProfile iam:CreateRole iam:DeleteInstanceProfile iam:DeleteRole iam:DeleteRolePolicy iam:GetInstanceProfile iam:GetRole iam:GetRolePolicy iam:GetUser iam:ListInstanceProfilesForRole iam:ListRoles iam:ListUsers iam:PassRole iam:PutRolePolicy iam:RemoveRoleFromInstanceProfile iam:SimulatePrincipalPolicy iam:TagRole Note If you have not created a load balancer in your AWS account, the IAM user also requires the iam:CreateServiceLinkedRole permission. Example 12.7. Required Route 53 permissions for installation route53:ChangeResourceRecordSets route53:ChangeTagsForResource route53:CreateHostedZone route53:DeleteHostedZone route53:GetChange route53:GetHostedZone route53:ListHostedZones route53:ListHostedZonesByName route53:ListResourceRecordSets route53:ListTagsForResource route53:UpdateHostedZoneComment Example 12.8. Required Amazon Simple Storage Service (S3) permissions for installation s3:CreateBucket s3:DeleteBucket s3:GetAccelerateConfiguration s3:GetBucketAcl s3:GetBucketCors s3:GetBucketLocation s3:GetBucketLogging s3:GetBucketObjectLockConfiguration s3:GetBucketPolicy s3:GetBucketRequestPayment s3:GetBucketTagging s3:GetBucketVersioning s3:GetBucketWebsite s3:GetEncryptionConfiguration s3:GetLifecycleConfiguration s3:GetReplicationConfiguration s3:ListBucket s3:PutBucketAcl s3:PutBucketTagging s3:PutEncryptionConfiguration Example 12.9. S3 permissions that cluster Operators require s3:DeleteObject s3:GetObject s3:GetObjectAcl s3:GetObjectTagging s3:GetObjectVersion s3:PutObject s3:PutObjectAcl s3:PutObjectTagging Example 12.10. Required permissions to delete base cluster resources autoscaling:DescribeAutoScalingGroups ec2:DeleteNetworkInterface ec2:DeletePlacementGroup ec2:DeleteVolume elasticloadbalancing:DeleteTargetGroup elasticloadbalancing:DescribeTargetGroups iam:DeleteAccessKey iam:DeleteUser iam:DeleteUserPolicy iam:ListAttachedRolePolicies iam:ListInstanceProfiles iam:ListRolePolicies iam:ListUserPolicies s3:DeleteObject s3:ListBucketVersions tag:GetResources Example 12.11. Required permissions to delete network resources ec2:DeleteDhcpOptions ec2:DeleteInternetGateway ec2:DeleteNatGateway ec2:DeleteRoute ec2:DeleteRouteTable ec2:DeleteSubnet ec2:DeleteVpc ec2:DeleteVpcEndpoints ec2:DetachInternetGateway ec2:DisassociateRouteTable ec2:ReleaseAddress ec2:ReplaceRouteTableAssociation Note If you use an existing VPC, your account does not require these permissions to delete network resources. Instead, your account only requires the tag:UntagResources permission to delete network resources. Example 12.12. Optional permissions for installing a cluster with a custom Key Management Service (KMS) key kms:CreateGrant kms:Decrypt kms:DescribeKey kms:Encrypt kms:GenerateDataKey kms:GenerateDataKeyWithoutPlainText kms:ListGrants kms:RevokeGrant Example 12.13. Required permissions to delete a cluster with shared instance roles iam:UntagRole Example 12.14. Additional IAM and S3 permissions that are required to create manifests iam:GetUserPolicy iam:ListAccessKeys iam:PutUserPolicy iam:TagUser s3:AbortMultipartUpload s3:GetBucketPublicAccessBlock s3:ListBucket s3:ListBucketMultipartUploads s3:PutBucketPublicAccessBlock s3:PutLifecycleConfiguration Note If you are managing your cloud provider credentials with mint mode, the IAM user also requires the iam:CreateAccessKey and iam:CreateUser permissions.
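Because the lists above are long, it can help to spot-check a few critical actions against the IAM user before you start the installation. A sketch using the iam:SimulatePrincipalPolicy API, which is itself listed among the required IAM permissions; the account ID and user name are placeholders:
$ aws iam simulate-principal-policy \
    --policy-source-arn arn:aws:iam::<account_id>:user/<installer_user> \
    --action-names ec2:RunInstances s3:CreateBucket route53:ChangeResourceRecordSets \
    --query 'EvaluationResults[].[EvalActionName,EvalDecision]' --output text
Any action that reports implicitDeny is a permission that you still need to grant.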
Example 12.15. Optional permissions for instance and quota checks for installation ec2:DescribeInstanceTypeOfferings servicequotas:ListAWSDefaultServiceQuotas Example 12.16. Optional permissions for the cluster owner account when installing a cluster on a shared VPC sts:AssumeRole 12.5. Obtaining an AWS Marketplace image If you are deploying an OpenShift Container Platform cluster using an AWS Marketplace image, you must first subscribe through AWS. Subscribing to the offer provides you with the AMI ID that the installation program uses to deploy worker nodes. Prerequisites You have an AWS account to purchase the offer. This account does not have to be the same account that is used to install the cluster. Procedure Complete the OpenShift Container Platform subscription from the AWS Marketplace. Record the AMI ID for your specific region. If you use the CloudFormation template to deploy your worker nodes, you must update the worker0.type.properties.ImageID parameter with this value. 12.6. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer. Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: $ tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal, where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 12.7. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: $ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: $ cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: $ cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: $ eval "$(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent: $ ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519. Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide the key to the installation program.
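After the cluster machines are up, you can confirm key-based access by opening a session as the core user; the node address is a placeholder:
$ ssh -i <path>/<file_name> core@<node_address>
If the private key was added to your ssh-agent as described above, you can omit the -i option.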
12.8. Creating the installation files for AWS To install OpenShift Container Platform on Amazon Web Services (AWS) using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and customize the install-config.yaml file, Kubernetes manifests, and Ignition config files. You also have the option to first set up a separate /var partition during the preparation phases of installation. 12.8.1. Optional: Creating a separate /var partition It is recommended that disk partitioning for OpenShift Container Platform be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var. For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config manifest that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation. Important If you follow the steps to create a separate /var partition in this procedure, it is not necessary to create the Kubernetes manifest and Ignition config files again as described later in this section. Procedure Create a directory to hold the OpenShift Container Platform installation files: $ mkdir $HOME/clusterconfig Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted: $ openshift-install create manifests --dir $HOME/clusterconfig Example output ? SSH Public Key ... INFO Credentials loaded from the "myprofile" profile in file "/home/myuser/.aws/credentials" INFO Consuming Install Config from target directory INFO Manifests created in: $HOME/clusterconfig/manifests and $HOME/clusterconfig/openshift Optional: Confirm that the installation program created manifests in the clusterconfig/openshift directory: $ ls $HOME/clusterconfig/openshift/ Example output 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ... Create a Butane config that configures the additional partition. For example, name the file $HOME/clusterconfig/98-var-partition.bu, change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.14.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum value of 25000 MiB (Mebibytes) is recommended. The root file system is automatically resized to fill all available space up to the specified offset.
If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for worker nodes if those instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. For example, run the following command: $ butane $HOME/clusterconfig/98-var-partition.bu -o $HOME/clusterconfig/openshift/98-var-partition.yaml Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories: $ openshift-install create ignition-configs --dir $HOME/clusterconfig $ ls $HOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign Now you can use the Ignition config files as input to the installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems.
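Since the jq package is required later for extracting the infrastructure name, you can also use it here for an informal look inside the generated files. For example, listing the first few file paths that the bootstrap Ignition config writes; master.ign and worker.ign are small pointer configs, so most of the content lives in bootstrap.ign:
$ jq -r '.storage.files[].path' $HOME/clusterconfig/bootstrap.ign | head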
12.8.2. Creating the installation configuration file Generate and customize the installation configuration file that the installation program needs to deploy your cluster. Prerequisites You obtained the OpenShift Container Platform installation program for user-provisioned infrastructure and the pull secret for your cluster. You checked that you are deploying your cluster to a region with an accompanying Red Hat Enterprise Linux CoreOS (RHCOS) AMI published by Red Hat. If you are deploying to a region that requires a custom AMI, such as an AWS GovCloud region, you must create the install-config.yaml file manually. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: $ ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory>, specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select aws as the platform to target. If you do not have an AWS profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program. Note The AWS access key ID and secret access key are stored in ~/.aws/credentials in the home directory of the current user on the installation host. You are prompted for the credentials by the installation program if the credentials for the exported profile are not present in the file. Any credentials that you provide to the installation program are stored in the file. Select the AWS region to deploy the cluster to. Select the base domain for the Route 53 service that you configured for your cluster. Enter a descriptive name for your cluster. Paste the pull secret from Red Hat OpenShift Cluster Manager. If you are installing a three-node cluster, modify the install-config.yaml file by setting the compute.replicas parameter to 0. This ensures that the cluster's control plane machines are schedulable. For more information, see "Installing a three-node cluster on AWS". Optional: Back up the install-config.yaml file. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources See Configuration and credential file settings in the AWS documentation for more information about AWS profile and credential configuration. 12.8.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http. 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations. If you have added the Amazon EC2, Elastic Load Balancing, and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections.
The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always. Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly. Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: $ ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec. Note Only the Proxy object named cluster is supported, and no additional proxies can be created.
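Once the cluster is running, you can verify the resulting configuration with the OpenShift CLI, assuming oc is installed and a kubeconfig is exported; this is a post-installation check, not part of the procedure itself:
$ oc get proxy/cluster -o yaml
The spec section mirrors the values from install-config.yaml, and the status section shows the effective httpProxy, httpsProxy, and noProxy values that the cluster applies.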
12.8.4. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: $ ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory>, specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines: $ rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml By removing these files, you prevent the cluster from automatically generating control plane machines. Remove the Kubernetes manifest files that define the control plane machine set: $ rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml Remove the Kubernetes manifest files that define the worker machines: $ rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml Important If you disabled the MachineAPI capability when installing a cluster on user-provisioned infrastructure, you must remove the Kubernetes manifest files that define the worker machines. Otherwise, your cluster fails to install. Because you create and manage the worker machines yourself, you do not need to initialize these machines. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false. This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false. Save and exit the file. Optional: If you do not want the Ingress Operator to create DNS records on your behalf, remove the privateZone and publicZone sections from the <installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file: apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {} 1 2 Remove this section completely. If you do so, you must add ingress DNS records manually in a later step. To create the Ignition configuration files, run the following command from the directory that contains the installation program: $ ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory>, specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory. 12.9. Extracting the infrastructure name The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in Amazon Web Services (AWS). The infrastructure name is also used to locate the appropriate AWS resources during an OpenShift Container Platform installation. The provided CloudFormation templates contain references to this infrastructure name, so you must extract it. Prerequisites You obtained the OpenShift Container Platform installation program and the pull secret for your cluster. You generated the Ignition config files for your cluster. You installed the jq package.
Procedure To extract and view the infrastructure name from the Ignition config file metadata, run the following command: $ jq -r .infraID <installation_directory>/metadata.json 1 1 For <installation_directory>, specify the path to the directory that you stored the installation files in. Example output openshift-vw9j6 1 1 The output of this command is your cluster name and a random string. 12.10. Creating a VPC in AWS You must create a Virtual Private Cloud (VPC) in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use. You can customize the VPC to meet your requirements, including VPN and route tables. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the VPC. Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure. You generated the Ignition config files for your cluster. Procedure Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "VpcCidr", 1 "ParameterValue": "10.0.0.0/16" 2 }, { "ParameterKey": "AvailabilityZoneCount", 3 "ParameterValue": "1" 4 }, { "ParameterKey": "SubnetBits", 5 "ParameterValue": "12" 6 } ] 1 The CIDR block for the VPC. 2 Specify a CIDR block in the format x.x.x.x/16-24. 3 The number of availability zones to deploy the VPC in. 4 Specify an integer between 1 and 3. 5 The size of each subnet in each availability zone. 6 Specify an integer between 5 and 13, where 5 is /27 and 13 is /19. Copy the template from the CloudFormation template for the VPC section of this topic and save it as a YAML file on your computer. This template describes the VPC that your cluster requires. Launch the CloudFormation template to create a stack of AWS resources that represent the VPC: Important You must enter the command on a single line. $ aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 1 <name> is the name for the CloudFormation stack, such as cluster-vpc. You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f Confirm that the template components exist: $ aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: VpcId The ID of your VPC. PublicSubnetIds The IDs of the new public subnets. PrivateSubnetIds The IDs of the new private subnets.
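Because the CloudFormation templates later in this section consume these values, it can be convenient to capture them in shell variables instead of copying them by hand. A sketch with the stack name as a placeholder:
$ VPC_ID=$(aws cloudformation describe-stacks --stack-name <name> --query "Stacks[0].Outputs[?OutputKey=='VpcId'].OutputValue" --output text)
$ PRIVATE_SUBNET_IDS=$(aws cloudformation describe-stacks --stack-name <name> --query "Stacks[0].Outputs[?OutputKey=='PrivateSubnetIds'].OutputValue" --output text)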
12.10.1. CloudFormation template for the VPC You can use the following CloudFormation template to deploy the VPC that you need for your OpenShift Container Platform cluster. Example 12.17. CloudFormation template for the VPC AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))$ ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: "The number of availability zones. (Min: 1, Max: 3)" MinValue: 1 MaxValue: 3 Default: 1 Description: "How many AZs to create VPC subnets for. (Min: 1, Max: 3)" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: "Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 = /19)" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Network Configuration" Parameters: - VpcCidr - SubnetBits - Label: default: "Availability Zones" Parameters: - AvailabilityZoneCount ParameterLabels: AvailabilityZoneCount: default: "Availability Zone Count" VpcCidr: default: "VPC CIDR" SubnetBits: default: "Bits Per Subnet" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: "AWS::EC2::VPC" Properties: EnableDnsSupport: "true" EnableDnsHostnames: "true" CidrBlock: !Ref VpcCidr PublicSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref "AWS::Region" PublicSubnet2: Type: "AWS::EC2::Subnet" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref "AWS::Region" PublicSubnet3: Type: "AWS::EC2::Subnet" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref "AWS::Region" InternetGateway: Type: "AWS::EC2::InternetGateway" GatewayToInternet: Type: "AWS::EC2::VPCGatewayAttachment" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: "AWS::EC2::RouteTable" Properties: VpcId: !Ref VPC PublicRoute: Type: "AWS::EC2::Route" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz2 Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: DoAz3 Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable: Type: "AWS::EC2::RouteTable" Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - GatewayToInternet Type:
"AWS::EC2::NatGateway" Properties: AllocationId: "Fn::GetAtt": - EIP - AllocationId SubnetId: !Ref PublicSubnet EIP: Type: "AWS::EC2::EIP" Properties: Domain: vpc Route: Type: "AWS::EC2::Route" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: "AWS::EC2::Subnet" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable2: Type: "AWS::EC2::RouteTable" Condition: DoAz2 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation2: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Condition: DoAz2 Properties: AllocationId: "Fn::GetAtt": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 EIP2: Type: "AWS::EC2::EIP" Condition: DoAz2 Properties: Domain: vpc Route2: Type: "AWS::EC2::Route" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: "AWS::EC2::Subnet" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable3: Type: "AWS::EC2::RouteTable" Condition: DoAz3 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation3: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Condition: DoAz3 Properties: AllocationId: "Fn::GetAtt": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 EIP3: Type: "AWS::EC2::EIP" Condition: DoAz3 Properties: Domain: vpc Route3: Type: "AWS::EC2::Route" Condition: DoAz3 Properties: RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref "AWS::NoValue"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref "AWS::NoValue"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. Value: !Join [ ",", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PublicSubnet3, !Ref "AWS::NoValue"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. Value: !Join [ ",", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PrivateSubnet3, !Ref "AWS::NoValue"]] ] PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable Additional resources You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console . 12.11. Creating networking and load balancing components in AWS You must configure networking and classic or network load balancing in Amazon Web Services (AWS) that your OpenShift Container Platform cluster can use. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. 
The stack represents the networking and load balancing components that your OpenShift Container Platform cluster requires. The template also creates a hosted zone and subnet tags. You can run the template multiple times within a single Virtual Private Cloud (VPC). Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure. You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. Procedure Obtain the hosted zone ID for the Route 53 base domain that you specified in the install-config.yaml file for your cluster. You can obtain details about your hosted zone by running the following command: $ aws route53 list-hosted-zones-by-name --dns-name <route53_domain> 1 1 For the <route53_domain>, specify the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster. Example output mycluster.example.com. False 100 HOSTEDZONES 65F8F38E-2268-B835-E15C-AB55336FCBFA /hostedzone/Z21IXYZABCZ2A4 mycluster.example.com. 10 In the example output, the hosted zone ID is Z21IXYZABCZ2A4. Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "ClusterName", 1 "ParameterValue": "mycluster" 2 }, { "ParameterKey": "InfrastructureName", 3 "ParameterValue": "mycluster-<random_string>" 4 }, { "ParameterKey": "HostedZoneId", 5 "ParameterValue": "<random_string>" 6 }, { "ParameterKey": "HostedZoneName", 7 "ParameterValue": "example.com" 8 }, { "ParameterKey": "PublicSubnets", 9 "ParameterValue": "subnet-<random_string>" 10 }, { "ParameterKey": "PrivateSubnets", 11 "ParameterValue": "subnet-<random_string>" 12 }, { "ParameterKey": "VpcId", 13 "ParameterValue": "vpc-<random_string>" 14 } ] 1 A short, representative cluster name to use for hostnames, etc. 2 Specify the cluster name that you used when you generated the install-config.yaml file for the cluster. 3 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 4 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>. 5 The Route 53 public zone ID to register the targets with. 6 Specify the Route 53 public zone ID, which has a format similar to Z21IXYZABCZ2A4. You can obtain this value from the AWS console. 7 The Route 53 zone to register the targets with. 8 Specify the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster. Do not include the trailing period (.) that is displayed in the AWS console. 9 The public subnets that you created for your VPC. 10 Specify the PublicSubnetIds value from the output of the CloudFormation template for the VPC. 11 The private subnets that you created for your VPC. 12 Specify the PrivateSubnetIds value from the output of the CloudFormation template for the VPC. 13 The VPC that you created for the cluster. 14 Specify the VpcId value from the output of the CloudFormation template for the VPC.
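Several of these parameter values can also be queried with the AWS CLI instead of the console. For example, a sketch of looking up the public hosted zone ID from footnote 6, with the base domain as a placeholder; note that the returned value carries a /hostedzone/ prefix that you must strip before using it as the HostedZoneId parameter:
$ aws route53 list-hosted-zones-by-name --dns-name <route53_domain> --query "HostedZones[0].Id" --output text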
Copy the template from the CloudFormation template for the network and load balancers section of this topic and save it as a YAML file on your computer. This template describes the networking and load balancing objects that your cluster requires. Important If you are deploying your cluster to an AWS government or secret region, you must update the InternalApiServerRecord in the CloudFormation template to use CNAME records. Records of type ALIAS are not supported for AWS government regions. Launch the CloudFormation template to create a stack of AWS resources that provide the networking and load balancing components: Important You must enter the command on a single line. $ aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4 1 <name> is the name for the CloudFormation stack, such as cluster-dns. You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. 4 You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role resources. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-dns/cd3e5de0-2fd4-11eb-5cf0-12be5c33a183 Confirm that the template components exist: $ aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: PrivateHostedZoneId Hosted zone ID for the private DNS. ExternalApiLoadBalancerName Full name of the external API load balancer. InternalApiLoadBalancerName Full name of the internal API load balancer. ApiServerDnsName Full hostname of the API server. RegisterNlbIpTargetsLambda Lambda ARN useful to help register/deregister IP targets for these load balancers. ExternalApiTargetGroupArn ARN of external API target group. InternalApiTargetGroupArn ARN of internal API target group. InternalServiceTargetGroupArn ARN of internal service target group. 12.11.1. CloudFormation template for the network and load balancers You can use the following CloudFormation template to deploy the networking objects and load balancers that you need for your OpenShift Container Platform cluster. Example 12.18. CloudFormation template for the network and load balancers AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Network Elements (Route53 & LBs) Parameters: ClusterName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$ MaxLength: 27 MinLength: 1 ConstraintDescription: Cluster name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, representative cluster name to use for host names and other identifying names. Type: String InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$ MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String HostedZoneId: Description: The Route53 public zone ID to register the targets with, such as Z21IXYZABCZ2A4. Type: String HostedZoneName: Description: The Route53 zone to register the targets with, such as example.com. Omit the trailing period.
Type: String Default: "example.com" PublicSubnets: Description: The internet-facing subnets. Type: List<AWS::EC2::Subnet::Id> PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - ClusterName - InfrastructureName - Label: default: "Network Configuration" Parameters: - VpcId - PublicSubnets - PrivateSubnets - Label: default: "DNS" Parameters: - HostedZoneName - HostedZoneId ParameterLabels: ClusterName: default: "Cluster Name" InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" PublicSubnets: default: "Public Subnets" PrivateSubnets: default: "Private Subnets" HostedZoneName: default: "Public Hosted Zone Name" HostedZoneId: default: "Public Hosted Zone ID" Resources: ExtApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join ["-", [!Ref InfrastructureName, "ext"]] IpAddressType: ipv4 Subnets: !Ref PublicSubnets Type: network IntApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join ["-", [!Ref InfrastructureName, "int"]] Scheme: internal IpAddressType: ipv4 Subnets: !Ref PrivateSubnets Type: network IntDns: Type: "AWS::Route53::HostedZone" Properties: HostedZoneConfig: Comment: "Managed by CloudFormation" Name: !Join [".", [!Ref ClusterName, !Ref HostedZoneName]] HostedZoneTags: - Key: Name Value: !Join ["-", [!Ref InfrastructureName, "int"]] - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "owned" VPCs: - VPCId: !Ref VpcId VPCRegion: !Ref "AWS::Region" ExternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref HostedZoneId RecordSets: - Name: !Join [ ".", ["api", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt ExtApiElb.CanonicalHostedZoneID DNSName: !GetAtt ExtApiElb.DNSName InternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref IntDns RecordSets: - Name: !Join [ ".", ["api", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName - Name: !Join [ ".", ["api-int", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName ExternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: ExternalApiTargetGroup LoadBalancerArn: Ref: ExtApiElb Port: 6443 Protocol: TCP ExternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: "/readyz" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalApiTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 6443 Protocol: TCP InternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: 
"/readyz" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalServiceInternalListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalServiceTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 22623 Protocol: TCP InternalServiceTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: "/healthz" HealthCheckPort: 22623 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 22623 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 RegisterTargetLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join ["-", [!Ref InfrastructureName, "nlb", "lambda", "role"]] AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "lambda.amazonaws.com" Action: - "sts:AssumeRole" Path: "/" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: [ "elasticloadbalancing:RegisterTargets", "elasticloadbalancing:DeregisterTargets", ] Resource: !Ref InternalApiTargetGroup - Effect: "Allow" Action: [ "elasticloadbalancing:RegisterTargets", "elasticloadbalancing:DeregisterTargets", ] Resource: !Ref InternalServiceTargetGroup - Effect: "Allow" Action: [ "elasticloadbalancing:RegisterTargets", "elasticloadbalancing:DeregisterTargets", ] Resource: !Ref ExternalApiTargetGroup RegisterNlbIpTargets: Type: "AWS::Lambda::Function" Properties: Handler: "index.handler" Role: Fn::GetAtt: - "RegisterTargetLambdaIamRole" - "Arn" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): elb = boto3.client('elbv2') if event['RequestType'] == 'Delete': elb.deregister_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) elif event['RequestType'] == 'Create': elb.register_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['TargetArn']+event['ResourceProperties']['TargetIp']) Runtime: "python3.8" Timeout: 120 RegisterSubnetTagsLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join ["-", [!Ref InfrastructureName, "subnet-tags-lambda-role"]] AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "lambda.amazonaws.com" Action: - "sts:AssumeRole" Path: "/" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "subnet-tagging-policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: [ "ec2:DeleteTags", "ec2:CreateTags" ] Resource: "arn:aws:ec2:*:*:subnet/*" - Effect: "Allow" Action: [ "ec2:DescribeSubnets", "ec2:DescribeTags" ] Resource: "*" RegisterSubnetTags: Type: "AWS::Lambda::Function" Properties: Handler: "index.handler" Role: Fn::GetAtt: - "RegisterSubnetTagsLambdaIamRole" - "Arn" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): ec2_client = boto3.client('ec2') if event['RequestType'] == 'Delete': for subnet_id in event['ResourceProperties']['Subnets']: 
ec2_client.delete_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName']}]); elif event['RequestType'] == 'Create': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.create_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName'], 'Value': 'shared'}]); responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['InfrastructureName']+event['ResourceProperties']['Subnets'][0]) Runtime: "python3.8" Timeout: 120 RegisterPublicSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PublicSubnets RegisterPrivateSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PrivateSubnets Outputs: PrivateHostedZoneId: Description: Hosted zone ID for the private DNS, which is required for private records. Value: !Ref IntDns ExternalApiLoadBalancerName: Description: Full name of the external API load balancer. Value: !GetAtt ExtApiElb.LoadBalancerFullName InternalApiLoadBalancerName: Description: Full name of the internal API load balancer. Value: !GetAtt IntApiElb.LoadBalancerFullName ApiServerDnsName: Description: Full hostname of the API server, which is required for the Ignition config files. Value: !Join [".", ["api-int", !Ref ClusterName, !Ref HostedZoneName]] RegisterNlbIpTargetsLambda: Description: Lambda ARN useful to help register or deregister IP targets for these load balancers. Value: !GetAtt RegisterNlbIpTargets.Arn ExternalApiTargetGroupArn: Description: ARN of the external API target group. Value: !Ref ExternalApiTargetGroup InternalApiTargetGroupArn: Description: ARN of the internal API target group. Value: !Ref InternalApiTargetGroup InternalServiceTargetGroupArn: Description: ARN of the internal service target group. Value: !Ref InternalServiceTargetGroup Important If you are deploying your cluster to an AWS government or secret region, you must update the InternalApiServerRecord to use CNAME records. Records of type ALIAS are not supported for AWS government regions. For example: Type: CNAME TTL: 10 ResourceRecords: - !GetAtt IntApiElb.DNSName Additional resources You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console . You can view details about your hosted zones by navigating to the AWS Route 53 console . See Listing public hosted zones in the AWS documentation for more information about listing public hosted zones. 12.12. Creating security group and roles in AWS You must create security groups and roles in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the security groups and roles that your OpenShift Container Platform cluster requires. Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . 
You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. Procedure Create a JSON file that contains the parameter values that the template requires:
[ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "VpcCidr", 3 "ParameterValue": "10.0.0.0/16" 4 }, { "ParameterKey": "PrivateSubnets", 5 "ParameterValue": "subnet-<random_string>" 6 }, { "ParameterKey": "VpcId", 7 "ParameterValue": "vpc-<random_string>" 8 } ]
1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>. 3 The CIDR block for the VPC. 4 Specify the CIDR block parameter that you used for the VPC that you defined in the form x.x.x.x/16-24. 5 The private subnets that you created for your VPC. 6 Specify the PrivateSubnetIds value from the output of the CloudFormation template for the VPC. 7 The VPC that you created for the cluster. 8 Specify the VpcId value from the output of the CloudFormation template for the VPC. Copy the template from the CloudFormation template for security objects section of this topic and save it as a YAML file on your computer. This template describes the security groups and roles that your cluster requires. Launch the CloudFormation template to create a stack of AWS resources that represent the security groups and roles: Important You must enter the command on a single line.
$ aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4
1 <name> is the name for the CloudFormation stack, such as cluster-sec. You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. 4 You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role and AWS::IAM::InstanceProfile resources. Example output
arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-sec/03bd4210-2ed7-11eb-6d7a-13fc0b61e9db
Confirm that the template components exist:
$ aws cloudformation describe-stacks --stack-name <name>
After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: MasterSecurityGroupId Master Security Group ID WorkerSecurityGroupId Worker Security Group ID MasterInstanceProfile Master IAM Instance Profile WorkerInstanceProfile Worker IAM Instance Profile
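If you later need just one of these values for another parameter file, you can query it directly instead of reading the full describe-stacks output; a small sketch using the CLI's JMESPath support, assuming the output key names match the list above:
$ aws cloudformation describe-stacks --stack-name <name> --query 'Stacks[0].Outputs[?OutputKey==`MasterSecurityGroupId`].OutputValue' --output text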
12.12.1. CloudFormation template for security objects You can use the following CloudFormation template to deploy the security objects that you need for your OpenShift Container Platform cluster. Example 12.19. CloudFormation template for security objects AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Security Elements (Security Groups & IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$ MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))$ ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - InfrastructureName - Label: default: "Network Configuration" Parameters: - VpcId - VpcCidr - PrivateSubnets ParameterLabels: InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" VpcCidr: default: "VPC CIDR" PrivateSubnets: default: "Private Subnets" Resources: MasterSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Master Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr - IpProtocol: tcp ToPort: 6443 FromPort: 6443 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22623 ToPort: 22623 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId WorkerSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Worker Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId MasterIngressEtcd: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: etcd FromPort: 2379 ToPort: 2380 IpProtocol: tcp MasterIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressWorkerVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressWorkerGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressWorkerIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressWorkerIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressWorkerIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressWorkerInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressWorkerInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressWorkerIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIngressWorkerIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services 
FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressMasterVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressMasterGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressMasterIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressMasterIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressMasterIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressMasterInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressMasterInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 
ToPort: 9999 IpProtocol: udp WorkerIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes secure kubelet port FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal Kubernetes communication FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressMasterIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressMasterIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "ec2.amazonaws.com" Action: - "sts:AssumeRole" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: - "ec2:AttachVolume" - "ec2:AuthorizeSecurityGroupIngress" - "ec2:CreateSecurityGroup" - "ec2:CreateTags" - "ec2:CreateVolume" - "ec2:DeleteSecurityGroup" - "ec2:DeleteVolume" - "ec2:Describe*" - "ec2:DetachVolume" - "ec2:ModifyInstanceAttribute" - "ec2:ModifyVolume" - "ec2:RevokeSecurityGroupIngress" - "elasticloadbalancing:AddTags" - "elasticloadbalancing:AttachLoadBalancerToSubnets" - "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer" - "elasticloadbalancing:CreateListener" - "elasticloadbalancing:CreateLoadBalancer" - "elasticloadbalancing:CreateLoadBalancerPolicy" - "elasticloadbalancing:CreateLoadBalancerListeners" - "elasticloadbalancing:CreateTargetGroup" - "elasticloadbalancing:ConfigureHealthCheck" - "elasticloadbalancing:DeleteListener" - "elasticloadbalancing:DeleteLoadBalancer" - "elasticloadbalancing:DeleteLoadBalancerListeners" - "elasticloadbalancing:DeleteTargetGroup" - "elasticloadbalancing:DeregisterInstancesFromLoadBalancer" - "elasticloadbalancing:DeregisterTargets" - "elasticloadbalancing:Describe*" - "elasticloadbalancing:DetachLoadBalancerFromSubnets" - "elasticloadbalancing:ModifyListener" - "elasticloadbalancing:ModifyLoadBalancerAttributes" - "elasticloadbalancing:ModifyTargetGroup" - "elasticloadbalancing:ModifyTargetGroupAttributes" - "elasticloadbalancing:RegisterInstancesWithLoadBalancer" - "elasticloadbalancing:RegisterTargets" - "elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer" - "elasticloadbalancing:SetLoadBalancerPoliciesOfListener" - "kms:DescribeKey" Resource: 
"*" MasterInstanceProfile: Type: "AWS::IAM::InstanceProfile" Properties: Roles: - Ref: "MasterIamRole" WorkerIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "ec2.amazonaws.com" Action: - "sts:AssumeRole" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "worker", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: - "ec2:DescribeInstances" - "ec2:DescribeRegions" Resource: "*" WorkerInstanceProfile: Type: "AWS::IAM::InstanceProfile" Properties: Roles: - Ref: "WorkerIamRole" Outputs: MasterSecurityGroupId: Description: Master Security Group ID Value: !GetAtt MasterSecurityGroup.GroupId WorkerSecurityGroupId: Description: Worker Security Group ID Value: !GetAtt WorkerSecurityGroup.GroupId MasterInstanceProfile: Description: Master IAM Instance Profile Value: !Ref MasterInstanceProfile WorkerInstanceProfile: Description: Worker IAM Instance Profile Value: !Ref WorkerInstanceProfile Additional resources You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console . 12.13. Accessing RHCOS AMIs with stream metadata In OpenShift Container Platform, stream metadata provides standardized metadata about RHCOS in the JSON format and injects the metadata into the cluster. Stream metadata is a stable format that supports multiple architectures and is intended to be self-documenting for maintaining automation. You can use the coreos print-stream-json sub-command of openshift-install to access information about the boot images in the stream metadata format. This command provides a method for printing stream metadata in a scriptable, machine-readable format. For user-provisioned installations, the openshift-install binary contains references to the version of RHCOS boot images that are tested for use with OpenShift Container Platform, such as the AWS AMI. Procedure To parse the stream metadata, use one of the following methods: From a Go program, use the official stream-metadata-go library at https://github.com/coreos/stream-metadata-go . You can also view example code in the library. From another programming language, such as Python or Ruby, use the JSON library of your preferred programming language. From a command-line utility that handles JSON data, such as jq : Print the current x86_64 or aarch64 AMI for an AWS region, such as us-west-1 : For x86_64 USD openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.images.aws.regions["us-west-1"].image' Example output ami-0d3e625f84626bbda For aarch64 USD openshift-install coreos print-stream-json | jq -r '.architectures.aarch64.images.aws.regions["us-west-1"].image' Example output ami-0af1d3b7fa5be2131 The output of this command is the AWS AMI ID for your designated architecture and the us-west-1 region. The AMI must belong to the same region as the cluster. 12.14. RHCOS AMIs for the AWS infrastructure Red Hat provides Red Hat Enterprise Linux CoreOS (RHCOS) AMIs that are valid for the various AWS regions and instance architectures that you can manually specify for your OpenShift Container Platform nodes. Note By importing your own AMI, you can also install to regions that do not have a published RHCOS AMI. Table 12.3. 
Table 12.3. x86_64 RHCOS AMIs
AWS zone AWS AMI
af-south-1 ami-01860370941726bdd
ap-east-1 ami-05bc702cdaf7e4251
ap-northeast-1 ami-098932fd93c15690d
ap-northeast-2 ami-006f4e02d97910a36
ap-northeast-3 ami-0c4bd5b1724f82273
ap-south-1 ami-0cbf22b638724853d
ap-south-2 ami-031f4d165f4b429c4
ap-southeast-1 ami-0dc3e381a731ab9fc
ap-southeast-2 ami-032ae8d0f287a66a6
ap-southeast-3 ami-0393130e034b86423
ap-southeast-4 ami-0b38f776bded7d7d7
ca-central-1 ami-058ea81b3a1d17edd
eu-central-1 ami-011010debd974a250
eu-central-2 ami-0623b105ae811a5e2
eu-north-1 ami-0c4bb9ce04f3526d4
eu-south-1 ami-06c29eccd3d74df52
eu-south-2 ami-00e0b5f3181a3f98b
eu-west-1 ami-087bfa513dc600676
eu-west-2 ami-0ebad59c0e9554473
eu-west-3 ami-074e63b65eaf83f96
me-central-1 ami-0179d6ae1d908ace9
me-south-1 ami-0b60c75273d3efcd7
sa-east-1 ami-0913cbfbfa9a7a53c
us-east-1 ami-0f71dcd99e6a1cd53
us-east-2 ami-0545fae7edbbbf061
us-gov-east-1 ami-081eabdc478e501e5
us-gov-west-1 ami-076102c394767f319
us-west-1 ami-0609e4436c4ae5eff
us-west-2 ami-0c5d3e03c0ab9b19a
Table 12.4. aarch64 RHCOS AMIs
AWS zone AWS AMI
af-south-1 ami-08dd66a61a2caa326
ap-east-1 ami-0232cd715f8168c34
ap-northeast-1 ami-0bc0b17618da96700
ap-northeast-2 ami-0ee505fb62eed2fd6
ap-northeast-3 ami-0462cd2c3b7044c77
ap-south-1 ami-0e0b4d951b43adc58
ap-south-2 ami-06d457b151cc0e407
ap-southeast-1 ami-0874e1640dfc15f17
ap-southeast-2 ami-05f215734ceb0f5ad
ap-southeast-3 ami-073030df265c88b3f
ap-southeast-4 ami-043f4c40a6fc3238a
ca-central-1 ami-0840622f99a32f586
eu-central-1 ami-09a5e6ebe667ae6b5
eu-central-2 ami-0835cb1bf387e609a
eu-north-1 ami-069ddbda521a10a27
eu-south-1 ami-09c5cc21026032b4c
eu-south-2 ami-0c36ab2a8bbeed045
eu-west-1 ami-0d2723c8228cb2df3
eu-west-2 ami-0abd66103d069f9a8
eu-west-3 ami-08c7249d59239fc5c
me-central-1 ami-0685f33ebb18445a2
me-south-1 ami-0466941f4e5c56fe6
sa-east-1 ami-08cdc0c8a972f4763
us-east-1 ami-0d461970173c4332d
us-east-2 ami-0e9cdc0e85e0a6aeb
us-gov-east-1 ami-0b896df727672ce09
us-gov-west-1 ami-0b896df727672ce09
us-west-1 ami-027b7cc5f4c74e6c1
us-west-2 ami-0b189d89b44bdfbf2
12.14.1. AWS regions without a published RHCOS AMI You can deploy an OpenShift Container Platform cluster to Amazon Web Services (AWS) regions without native support for a Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) or the AWS software development kit (SDK). If a published AMI is not available for an AWS region, you can upload a custom AMI prior to installing the cluster. If you are deploying to a region not supported by the AWS SDK and you do not specify a custom AMI, the installation program copies the us-east-1 AMI to the user account automatically. Then the installation program creates the control plane machines with encrypted EBS volumes using the default or user-specified Key Management Service (KMS) key. This allows the AMI to follow the same process workflow as published RHCOS AMIs. A region without native support for an RHCOS AMI is not available to select from the terminal during cluster creation because it is not published. However, you can install to this region by configuring the custom AMI in the install-config.yaml file.
12.14.2. Uploading a custom RHCOS AMI in AWS If you are deploying to a custom Amazon Web Services (AWS) region, you must upload a custom Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) that belongs to that region. Prerequisites You configured an AWS account. You created an Amazon S3 bucket with the required IAM service role. You uploaded your RHCOS VMDK file to Amazon S3.
The RHCOS VMDK file must be the highest version that is less than or equal to the OpenShift Container Platform version you are installing. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer. Procedure Export your AWS profile as an environment variable:
$ export AWS_PROFILE=<aws_profile> 1
Export the region to associate with your custom AMI as an environment variable:
$ export AWS_DEFAULT_REGION=<aws_region> 1
Export the version of RHCOS you uploaded to Amazon S3 as an environment variable:
$ export RHCOS_VERSION=<version> 1
1 The RHCOS VMDK version, like 4.14.0. Export the Amazon S3 bucket name as an environment variable:
$ export VMIMPORT_BUCKET_NAME=<s3_bucket_name>
Create the containers.json file and define your RHCOS VMDK file:
$ cat <<EOF > containers.json { "Description": "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64", "Format": "vmdk", "UserBucket": { "S3Bucket": "${VMIMPORT_BUCKET_NAME}", "S3Key": "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64.vmdk" } } EOF
Import the RHCOS disk as an Amazon EBS snapshot:
$ aws ec2 import-snapshot --region ${AWS_DEFAULT_REGION} \ --description "<description>" \ 1 --disk-container "file://<file_path>/containers.json" 2
1 The description of your RHCOS disk being imported, like rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64. 2 The file path to the JSON file describing your RHCOS disk. The JSON file should contain your Amazon S3 bucket name and key. Check the status of the image import:
$ watch -n 5 aws ec2 describe-import-snapshot-tasks --region ${AWS_DEFAULT_REGION}
Example output
{ "ImportSnapshotTasks": [ { "Description": "rhcos-4.7.0-x86_64-aws.x86_64", "ImportTaskId": "import-snap-fh6i8uil", "SnapshotTaskDetail": { "Description": "rhcos-4.7.0-x86_64-aws.x86_64", "DiskImageSize": 819056640.0, "Format": "VMDK", "SnapshotId": "snap-06331325870076318", "Status": "completed", "UserBucket": { "S3Bucket": "external-images", "S3Key": "rhcos-4.7.0-x86_64-aws.x86_64.vmdk" } } } ] }
Copy the SnapshotId to register the image. Create a custom RHCOS AMI from the RHCOS snapshot:
$ aws ec2 register-image \ --region ${AWS_DEFAULT_REGION} \ --architecture x86_64 \ 1 --description "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64" \ 2 --ena-support \ --name "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64" \ 3 --virtualization-type hvm \ --root-device-name '/dev/xvda' \ --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4
1 The RHCOS VMDK architecture type, like x86_64, aarch64, s390x, or ppc64le. 2 The Description from the imported snapshot. 3 The name of the RHCOS AMI. 4 The SnapshotId from the imported snapshot. To learn more about these APIs, see the AWS documentation for importing snapshots and creating EBS-backed AMIs.
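Once the watch output reports a Status of completed, you can pull the snapshot ID out programmatically instead of copying it by hand; a sketch, with <import_task_id> taken from the ImportTaskId field of the import-snapshot response and the jq path matching the example output above:
$ aws ec2 describe-import-snapshot-tasks --region ${AWS_DEFAULT_REGION} --import-task-ids <import_task_id> | jq -r '.ImportSnapshotTasks[0].SnapshotTaskDetail.SnapshotId'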
12.15. Creating the bootstrap node in AWS You must create the bootstrap node in Amazon Web Services (AWS) to use during OpenShift Container Platform cluster initialization. You do this by: Providing a location to serve the bootstrap.ign Ignition config file to your cluster. This file is located in your installation directory. The provided CloudFormation template assumes that the Ignition config files for your cluster are served from an S3 bucket. If you choose to serve the files from another location, you must modify the templates. Using the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the bootstrap node that your OpenShift Container Platform installation requires. Note If you do not use the provided CloudFormation template to create your bootstrap node, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure. You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. You created and configured DNS, load balancers, and listeners in AWS. You created the security groups and roles required for your cluster in AWS. Procedure Create the bucket by running the following command:
$ aws s3 mb s3://<cluster-name>-infra 1
1 <cluster-name>-infra is the bucket name. When creating the install-config.yaml file, replace <cluster-name> with the name specified for the cluster. You must use a presigned URL for your S3 bucket, instead of the s3:// schema, if you are: Deploying to a region that has endpoints that differ from the AWS SDK. Deploying a proxy. Providing your own custom endpoints. Upload the bootstrap.ign Ignition config file to the bucket by running the following command:
$ aws s3 cp <installation_directory>/bootstrap.ign s3://<cluster-name>-infra/bootstrap.ign 1
1 For <installation_directory>, specify the path to the directory that you stored the installation files in. Verify that the file was uploaded by running the following command:
$ aws s3 ls s3://<cluster-name>-infra/
Example output 2019-04-03 16:15:16 314878 bootstrap.ign Note The bootstrap Ignition config file does contain secrets, like X.509 keys. The following steps provide basic security for the S3 bucket. To provide additional security, you can enable an S3 bucket policy to allow only certain users, such as the OpenShift IAM user, to access objects that the bucket contains. You can avoid S3 entirely and serve your bootstrap Ignition config file from any address that the bootstrap machine can reach.
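If your deployment falls into one of the cases above that require a presigned URL instead of the s3:// schema, you can generate one with the AWS CLI; a minimal sketch (the one-hour expiry is an arbitrary choice; pick a window long enough to cover bootstrapping):
$ aws s3 presign s3://<cluster-name>-infra/bootstrap.ign --expires-in 3600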
Create a JSON file that contains the parameter values that the template requires:
[ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "RhcosAmi", 3 "ParameterValue": "ami-<random_string>" 4 }, { "ParameterKey": "AllowedBootstrapSshCidr", 5 "ParameterValue": "0.0.0.0/0" 6 }, { "ParameterKey": "PublicSubnet", 7 "ParameterValue": "subnet-<random_string>" 8 }, { "ParameterKey": "MasterSecurityGroupId", 9 "ParameterValue": "sg-<random_string>" 10 }, { "ParameterKey": "VpcId", 11 "ParameterValue": "vpc-<random_string>" 12 }, { "ParameterKey": "BootstrapIgnitionLocation", 13 "ParameterValue": "s3://<bucket_name>/bootstrap.ign" 14 }, { "ParameterKey": "AutoRegisterELB", 15 "ParameterValue": "yes" 16 }, { "ParameterKey": "RegisterNlbIpTargetsLambdaArn", 17 "ParameterValue": "arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>" 18 }, { "ParameterKey": "ExternalApiTargetGroupArn", 19 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" 20 }, { "ParameterKey": "InternalApiTargetGroupArn", 21 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 22 }, { "ParameterKey": "InternalServiceTargetGroupArn", 23 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 24 } ]
1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>. 3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the bootstrap node based on your selected architecture. 4 Specify a valid AWS::EC2::Image::Id value. 5 CIDR block to allow SSH access to the bootstrap node. 6 Specify a CIDR block in the format x.x.x.x/0-32. 7 The public subnet that is associated with your VPC to launch the bootstrap node into. 8 Specify the PublicSubnetIds value from the output of the CloudFormation template for the VPC. 9 The master security group ID, which is used for registering temporary rules. 10 Specify the MasterSecurityGroupId value from the output of the CloudFormation template for the security group and roles. 11 The VPC that the created resources will belong to. 12 Specify the VpcId value from the output of the CloudFormation template for the VPC. 13 The location to fetch the bootstrap Ignition config file from. 14 Specify the S3 bucket and file name in the form s3://<bucket_name>/bootstrap.ign. 15 Whether or not to register a network load balancer (NLB). 16 Specify yes or no. If you specify yes, you must provide a Lambda Amazon Resource Name (ARN) value. 17 The ARN for NLB IP target registration lambda group. 18 Specify the RegisterNlbIpTargetsLambda value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 19 The ARN for external API load balancer target group. 20 Specify the ExternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 21 The ARN for internal API load balancer target group. 22 Specify the InternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 23 The ARN for internal service load balancer target group. 24 Specify the InternalServiceTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.
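Because a malformed parameters file is a common cause of create-stack failures, you can optionally lint the JSON before launching the stack; a quick sketch using jq, which exits nonzero on a parse error:
$ jq empty <parameters>.json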
Copy the template from the CloudFormation template for the bootstrap machine section of this topic and save it as a YAML file on your computer. This template describes the bootstrap machine that your cluster requires. Optional: If you are deploying the cluster with a proxy, you must update the ignition in the template to add the ignition.config.proxy fields. Additionally, if you have added the Amazon EC2, Elastic Load Balancing, and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. Launch the CloudFormation template to create a stack of AWS resources that represent the bootstrap node: Important You must enter the command on a single line.
$ aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4
1 <name> is the name for the CloudFormation stack, such as cluster-bootstrap. You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. 4 You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role and AWS::IAM::InstanceProfile resources. Example output
arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-bootstrap/12944486-2add-11eb-9dee-12dace8e3a83
Confirm that the template components exist:
$ aws cloudformation describe-stacks --stack-name <name>
After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: BootstrapInstanceId The bootstrap Instance ID. BootstrapPublicIp The bootstrap node public IP address. BootstrapPrivateIp The bootstrap node private IP address.
12.15.1. CloudFormation template for the bootstrap machine You can use the following CloudFormation template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster. Example 12.20. CloudFormation template for the bootstrap machine AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Bootstrap (EC2 Instance, Security Groups and IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$ MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AllowedBootstrapSshCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/([0-9]|1[0-9]|2[0-9]|3[0-2]))$ ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/0-32. Default: 0.0.0.0/0 Description: CIDR block to allow SSH access to the bootstrap node.
Type: String PublicSubnet: Description: The public subnet to launch the bootstrap node into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID for registering temporary rules. Type: AWS::EC2::SecurityGroup::Id VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id BootstrapIgnitionLocation: Default: s3://my-s3-bucket/bootstrap.ign Description: Ignition config file location. Type: String AutoRegisterELB: Default: "yes" AllowedValues: - "yes" - "no" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. Type: String BootstrapInstanceType: Description: Instance type for the bootstrap EC2 instance Default: "i3.large" Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - InfrastructureName - Label: default: "Host Information" Parameters: - RhcosAmi - BootstrapIgnitionLocation - MasterSecurityGroupId - Label: default: "Network Configuration" Parameters: - VpcId - AllowedBootstrapSshCidr - PublicSubnet - Label: default: "Load Balancer Automation" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" AllowedBootstrapSshCidr: default: "Allowed SSH Source" PublicSubnet: default: "Public Subnet" RhcosAmi: default: "Red Hat Enterprise Linux CoreOS AMI ID" BootstrapIgnitionLocation: default: "Bootstrap Ignition Source" MasterSecurityGroupId: default: "Master Security Group ID" AutoRegisterELB: default: "Use Provided ELB Automation" Conditions: DoRegistration: !Equals ["yes", !Ref AutoRegisterELB] Resources: BootstrapIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "ec2.amazonaws.com" Action: - "sts:AssumeRole" Path: "/" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "bootstrap", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: "ec2:Describe*" Resource: "*" - Effect: "Allow" Action: "ec2:AttachVolume" Resource: "*" - Effect: "Allow" Action: "ec2:DetachVolume" Resource: "*" - Effect: "Allow" Action: "s3:GetObject" Resource: "*" BootstrapInstanceProfile: Type: "AWS::IAM::InstanceProfile" Properties: Path: "/" Roles: - Ref: "BootstrapIamRole" BootstrapSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Bootstrap Security Group SecurityGroupIngress: - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref AllowedBootstrapSshCidr - IpProtocol: tcp ToPort: 19531 FromPort: 19531 CidrIp: 0.0.0.0/0 VpcId: !Ref VpcId BootstrapInstance: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi IamInstanceProfile: !Ref BootstrapInstanceProfile InstanceType: !Ref BootstrapInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "true" DeviceIndex: "0" GroupSet: - !Ref "BootstrapSecurityGroup" - !Ref "MasterSecurityGroupId" SubnetId: !Ref "PublicSubnet" UserData: 
Fn::Base64: !Sub - '{"ignition":{"config":{"replace":{"source":"${S3Loc}"}},"version":"3.1.0"}}' - { S3Loc: !Ref BootstrapIgnitionLocation } RegisterBootstrapApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp Outputs: BootstrapInstanceId: Description: Bootstrap Instance ID. Value: !Ref BootstrapInstance BootstrapPublicIp: Description: The bootstrap node public IP address. Value: !GetAtt BootstrapInstance.PublicIp BootstrapPrivateIp: Description: The bootstrap node private IP address. Value: !GetAtt BootstrapInstance.PrivateIp Additional resources You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console. See RHCOS AMIs for the AWS infrastructure for details about the Red Hat Enterprise Linux CoreOS (RHCOS) AMIs for the AWS zones.
12.16. Creating the control plane machines in AWS You must create the control plane machines in Amazon Web Services (AWS) that your cluster will use. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the control plane nodes. Important The CloudFormation template creates a stack that represents three control plane nodes. Note If you do not use the provided CloudFormation template to create your control plane nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure. You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. You created and configured DNS, load balancers, and listeners in AWS. You created the security groups and roles required for your cluster in AWS. You created the bootstrap machine.
Procedure Create a JSON file that contains the parameter values that the template requires:
[ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "RhcosAmi", 3 "ParameterValue": "ami-<random_string>" 4 }, { "ParameterKey": "AutoRegisterDNS", 5 "ParameterValue": "yes" 6 }, { "ParameterKey": "PrivateHostedZoneId", 7 "ParameterValue": "<random_string>" 8 }, { "ParameterKey": "PrivateHostedZoneName", 9 "ParameterValue": "mycluster.example.com" 10 }, { "ParameterKey": "Master0Subnet", 11 "ParameterValue": "subnet-<random_string>" 12 }, { "ParameterKey": "Master1Subnet", 13 "ParameterValue": "subnet-<random_string>" 14 }, { "ParameterKey": "Master2Subnet", 15 "ParameterValue": "subnet-<random_string>" 16 }, { "ParameterKey": "MasterSecurityGroupId", 17 "ParameterValue": "sg-<random_string>" 18 }, { "ParameterKey": "IgnitionLocation", 19 "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/master" 20 }, { "ParameterKey": "CertificateAuthorities", 21 "ParameterValue": "data:text/plain;charset=utf-8;base64,ABC...xYz==" 22 }, { "ParameterKey": "MasterInstanceProfileName", 23 "ParameterValue": "<roles_stack>-MasterInstanceProfile-<random_string>" 24 }, { "ParameterKey": "MasterInstanceType", 25 "ParameterValue": "" 26 }, { "ParameterKey": "AutoRegisterELB", 27 "ParameterValue": "yes" 28 }, { "ParameterKey": "RegisterNlbIpTargetsLambdaArn", 29 "ParameterValue": "arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>" 30 }, { "ParameterKey": "ExternalApiTargetGroupArn", 31 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" 32 }, { "ParameterKey": "InternalApiTargetGroupArn", 33 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 34 }, { "ParameterKey": "InternalServiceTargetGroupArn", 35 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 36 } ]
1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>. 3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the control plane machines based on your selected architecture. 4 Specify an AWS::EC2::Image::Id value. 5 Whether or not to perform DNS etcd registration. 6 Specify yes or no. If you specify yes, you must provide hosted zone information. 7 The Route 53 private zone ID to register the etcd targets with. 8 Specify the PrivateHostedZoneId value from the output of the CloudFormation template for DNS and load balancing. 9 The Route 53 zone to register the targets with. 10 Specify <cluster_name>.<domain_name> where <domain_name> is the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster. Do not include the trailing period (.) that is displayed in the AWS console. 11 13 15 A subnet, preferably private, to launch the control plane machines on. 12 14 16 Specify a subnet from the PrivateSubnets value from the output of the CloudFormation template for DNS and load balancing. 17 The master security group ID to associate with control plane nodes. 18 Specify the MasterSecurityGroupId value from the output of the CloudFormation template for the security group and roles. 19 The location to fetch the control plane Ignition config file from. 20 Specify the generated Ignition config file location, https://api-int.<cluster_name>.<domain_name>:22623/config/master. 21 The base64 encoded certificate authority string to use. 22 Specify the value from the master.ign file that is in the installation directory. This value is the long string with the format data:text/plain;charset=utf-8;base64,ABC...xYz==. 23 The IAM profile to associate with control plane nodes. 24 Specify the MasterInstanceProfile parameter value from the output of the CloudFormation template for the security group and roles. 25 The type of AWS instance to use for the control plane machines based on your selected architecture. 26 The instance type value corresponds to the minimum resource requirements for control plane machines. For example, m6i.xlarge is a type for AMD64 and m6g.xlarge is a type for ARM64. 27 Whether or not to register a network load balancer (NLB). 28 Specify yes or no. If you specify yes, you must provide a Lambda Amazon Resource Name (ARN) value. 29 The ARN for NLB IP target registration lambda group. 30 Specify the RegisterNlbIpTargetsLambda value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 31 The ARN for external API load balancer target group. 32 Specify the ExternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 33 The ARN for internal API load balancer target group. 34 Specify the InternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 35 The ARN for internal service load balancer target group. 36 Specify the InternalServiceTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.
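For callouts 21 and 22, you can extract the certificate authority string directly from the master.ign file rather than copying it by hand; a sketch, assuming the generated Ignition config stores the CA under the standard ignition.security.tls.certificateAuthorities path:
$ jq -r '.ignition.security.tls.certificateAuthorities[0].source' <installation_directory>/master.ign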
18 Specify the MasterSecurityGroupId value from the output of the CloudFormation template for the security group and roles. 19 The location to fetch the control plane Ignition config file from. 20 Specify the generated Ignition config file location, https://api-int.<cluster_name>.<domain_name>:22623/config/master . 21 The base64 encoded certificate authority string to use. 22 Specify the value from the master.ign file that is in the installation directory. This value is the long string with the format data:text/plain;charset=utf-8;base64,ABC... xYz== . 23 The IAM profile to associate with control plane nodes. 24 Specify the MasterInstanceProfile parameter value from the output of the CloudFormation template for the security group and roles. 25 The type of AWS instance to use for the control plane machines based on your selected architecture. 26 The instance type value corresponds to the minimum resource requirements for control plane machines. For example, m6i.xlarge is a type for AMD64 and m6g.xlarge is a type for ARM64. 27 Whether or not to register a network load balancer (NLB). 28 Specify yes or no . If you specify yes , you must provide a Lambda Amazon Resource Name (ARN) value. 29 The ARN for NLB IP target registration lambda group. 30 Specify the RegisterNlbIpTargetsLambda value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 31 The ARN for external API load balancer target group. 32 Specify the ExternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 33 The ARN for internal API load balancer target group. 34 Specify the InternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 35 The ARN for internal service load balancer target group. 36 Specify the InternalServiceTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. Copy the template from the CloudFormation template for control plane machines section of this topic and save it as a YAML file on your computer. This template describes the control plane machines that your cluster requires. If you specified an m5 instance type as the value for MasterInstanceType , add that instance type to the MasterInstanceType.AllowedValues parameter in the CloudFormation template. Launch the CloudFormation template to create a stack of AWS resources that represent the control plane nodes: Important You must enter the command on a single line. $ aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 1 <name> is the name for the CloudFormation stack, such as cluster-control-plane . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-control-plane/21c7e2b0-2ee2-11eb-c6f6-0aa34627df4b Note The CloudFormation template creates a stack that represents three control plane nodes.
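The create-stack command returns as soon as stack creation begins. If you prefer to block until the control plane stack finishes creating, for example in a script, the AWS CLI provides a matching waiter; a minimal sketch, using the stack name from the previous step:
$ aws cloudformation wait stack-create-complete --stack-name <name>
The command exits when the stack reaches the CREATE_COMPLETE state, or returns a non-zero status if creation fails or times out.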
Confirm that the template components exist: $ aws cloudformation describe-stacks --stack-name <name> 12.16.1. CloudFormation template for control plane machines You can use the following CloudFormation template to deploy the control plane machines that you need for your OpenShift Container Platform cluster. Example 12.21. CloudFormation template for control plane machines AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 master instances) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$ MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for the control plane machines. Type: AWS::EC2::Image::Id AutoRegisterDNS: Default: "" Description: unused Type: String PrivateHostedZoneId: Default: "" Description: unused Type: String PrivateHostedZoneName: Default: "" Description: unused Type: String Master0Subnet: Description: The subnet, preferably private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master1Subnet: Description: The subnet, preferably private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master2Subnet: Description: The subnet, preferably private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID to associate with master nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.$CLUSTER_NAME.$DOMAIN:22623/config/master Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String MasterInstanceProfileName: Description: IAM profile to associate with master nodes. Type: String MasterInstanceType: Default: m5.xlarge Type: String AutoRegisterELB: Default: "yes" AllowedValues: - "yes" - "no" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB.
Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - InfrastructureName - Label: default: "Host Information" Parameters: - MasterInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - MasterSecurityGroupId - MasterInstanceProfileName - Label: default: "Network Configuration" Parameters: - VpcId - AllowedBootstrapSshCidr - Master0Subnet - Master1Subnet - Master2Subnet - Label: default: "Load Balancer Automation" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" Master0Subnet: default: "Master-0 Subnet" Master1Subnet: default: "Master-1 Subnet" Master2Subnet: default: "Master-2 Subnet" MasterInstanceType: default: "Master Instance Type" MasterInstanceProfileName: default: "Master Instance Profile Name" RhcosAmi: default: "Red Hat Enterprise Linux CoreOS AMI ID" IgnitionLocation: default: "Master Ignition Source" CertificateAuthorities: default: "Ignition CA String" MasterSecurityGroupId: default: "Master Security Group ID" AutoRegisterELB: default: "Use Provided ELB Automation" Conditions: DoRegistration: !Equals ["yes", !Ref AutoRegisterELB] Resources: Master0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: "120" VolumeType: "gp2" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "false" DeviceIndex: "0" GroupSet: - !Ref "MasterSecurityGroupId" SubnetId: !Ref "Master0Subnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"merge":[{"source":"${SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"${CA_BUNDLE}"}]}},"version":"3.1.0"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" RegisterMaster0: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp Master1: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: "120" VolumeType: "gp2" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "false" DeviceIndex: "0" GroupSet: - !Ref "MasterSecurityGroupId" SubnetId: !Ref "Master1Subnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"merge":[{"source":"${SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"${CA_BUNDLE}"}]}},"version":"3.1.0"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" RegisterMaster1: Condition: DoRegistration
Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp Master2: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: "120" VolumeType: "gp2" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "false" DeviceIndex: "0" GroupSet: - !Ref "MasterSecurityGroupId" SubnetId: !Ref "Master2Subnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"merge":[{"source":"${SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"${CA_BUNDLE}"}]}},"version":"3.1.0"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" RegisterMaster2: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp Outputs: PrivateIPs: Description: The control-plane node private IP addresses. Value: !Join [ ",", [!GetAtt Master0.PrivateIp, !GetAtt Master1.PrivateIp, !GetAtt Master2.PrivateIp] ] Additional resources You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console . 12.17. Creating the worker nodes in AWS You can create worker nodes in Amazon Web Services (AWS) for your cluster to use. Note If you are installing a three-node cluster, skip this step. A three-node cluster consists of three control plane machines, which also act as compute machines. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent a worker node. Important The CloudFormation template creates a stack that represents one worker node. You must create a stack for each worker node. Note If you do not use the provided CloudFormation template to create your worker nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. You created and configured DNS, load balancers, and listeners in AWS.
You created the security groups and roles required for your cluster in AWS. You created the bootstrap machine. You created the control plane machines. Procedure Create a JSON file that contains the parameter values that the CloudFormation template requires: [ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "RhcosAmi", 3 "ParameterValue": "ami-<random_string>" 4 }, { "ParameterKey": "Subnet", 5 "ParameterValue": "subnet-<random_string>" 6 }, { "ParameterKey": "WorkerSecurityGroupId", 7 "ParameterValue": "sg-<random_string>" 8 }, { "ParameterKey": "IgnitionLocation", 9 "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/worker" 10 }, { "ParameterKey": "CertificateAuthorities", 11 "ParameterValue": "" 12 }, { "ParameterKey": "WorkerInstanceProfileName", 13 "ParameterValue": "" 14 }, { "ParameterKey": "WorkerInstanceType", 15 "ParameterValue": "" 16 } ] 1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the worker nodes based on your selected architecture. 4 Specify an AWS::EC2::Image::Id value. 5 A subnet, preferably private, to launch the worker nodes on. 6 Specify a subnet from the PrivateSubnets value from the output of the CloudFormation template for DNS and load balancing. 7 The worker security group ID to associate with worker nodes. 8 Specify the WorkerSecurityGroupId value from the output of the CloudFormation template for the security group and roles. 9 The location to fetch the worker Ignition config file from. 10 Specify the generated Ignition config location, https://api-int.<cluster_name>.<domain_name>:22623/config/worker . 11 Base64 encoded certificate authority string to use. 12 Specify the value from the worker.ign file that is in the installation directory; one way to extract this value is shown in the sketch after these steps. This value is the long string with the format data:text/plain;charset=utf-8;base64,ABC... xYz== . 13 The IAM profile to associate with worker nodes. 14 Specify the WorkerInstanceProfile parameter value from the output of the CloudFormation template for the security group and roles. 15 The type of AWS instance to use for the compute machines based on your selected architecture. 16 The instance type value corresponds to the minimum resource requirements for compute machines. For example, m6i.large is a type for AMD64 and m6g.large is a type for ARM64. Copy the template from the CloudFormation template for worker machines section of this topic and save it as a YAML file on your computer. This template describes the worker machines that your cluster requires. Optional: If you specified an m5 instance type as the value for WorkerInstanceType , add that instance type to the WorkerInstanceType.AllowedValues parameter in the CloudFormation template. Optional: If you are deploying with an AWS Marketplace image, update the Worker0.type.properties.ImageID parameter with the AMI ID that you obtained from your subscription.
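Rather than copying the long certificate authority string by hand, you can extract it from the Ignition config with a jq query; a minimal sketch, assuming the jq package is installed and the Ignition v3 layout shown in the templates in this topic:
$ jq -r '.ignition.security.tls.certificateAuthorities[0].source' <installation_directory>/worker.ign
The output is the data:text/plain;charset=utf-8;base64,... value to paste into the CertificateAuthorities parameter.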
Use the CloudFormation template to create a stack of AWS resources that represent a worker node: Important You must enter the command on a single line. $ aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 1 <name> is the name for the CloudFormation stack, such as cluster-worker-1 . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-worker-1/729ee301-1c2a-11eb-348f-sd9888c65b59 Note The CloudFormation template creates a stack that represents one worker node. Confirm that the template components exist: $ aws cloudformation describe-stacks --stack-name <name> Continue to create worker stacks until you have created enough worker machines for your cluster. You can create additional worker stacks by referencing the same template and parameter files and specifying a different stack name. Important You must create at least two worker machines, so you must create at least two stacks that use this CloudFormation template.
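Because each stack reuses the same template and parameter files, scripting the per-worker create-stack calls is straightforward; a minimal sketch that launches two workers, assuming the illustrative file names worker-template.yaml and worker-params.json:
$ for i in 1 2; do aws cloudformation create-stack --stack-name cluster-worker-$i --template-body file://worker-template.yaml --parameters file://worker-params.json; done
All of the workers land in the subnet named in the parameter file; create separate parameter files if you want to spread workers across subnets.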
12.17.1. CloudFormation template for worker machines You can use the following CloudFormation template to deploy the worker machines that you need for your OpenShift Container Platform cluster. Example 12.22. CloudFormation template for worker machines AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 worker instance) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$ MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for the worker nodes. Type: AWS::EC2::Image::Id Subnet: Description: The subnet, preferably private, to launch the worker nodes into. Type: AWS::EC2::Subnet::Id WorkerSecurityGroupId: Description: The worker security group ID to associate with worker nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.$CLUSTER_NAME.$DOMAIN:22623/config/worker Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String WorkerInstanceProfileName: Description: IAM profile to associate with worker nodes. Type: String WorkerInstanceType: Default: m5.large Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - InfrastructureName - Label: default: "Host Information" Parameters: - WorkerInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - WorkerSecurityGroupId - WorkerInstanceProfileName - Label: default: "Network Configuration" Parameters: - Subnet ParameterLabels: Subnet: default: "Subnet" InfrastructureName: default: "Infrastructure Name" WorkerInstanceType: default: "Worker Instance Type" WorkerInstanceProfileName: default: "Worker Instance Profile Name" RhcosAmi: default: "Red Hat Enterprise Linux CoreOS AMI ID" IgnitionLocation: default: "Worker Ignition Source" CertificateAuthorities: default: "Ignition CA String" WorkerSecurityGroupId: default: "Worker Security Group ID" Resources: Worker0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: "120" VolumeType: "gp2" IamInstanceProfile: !Ref WorkerInstanceProfileName InstanceType: !Ref WorkerInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "false" DeviceIndex: "0" GroupSet: - !Ref "WorkerSecurityGroupId" SubnetId: !Ref "Subnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"merge":[{"source":"${SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"${CA_BUNDLE}"}]}},"version":"3.1.0"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" Outputs: PrivateIP: Description: The compute node private IP address. Value: !GetAtt Worker0.PrivateIp Additional resources You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console . 12.18. Initializing the bootstrap sequence on AWS with user-provisioned infrastructure After you create all of the required infrastructure in Amazon Web Services (AWS), you can start the bootstrap sequence that initializes the OpenShift Container Platform control plane. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. You created and configured DNS, load balancers, and listeners in AWS. You created the security groups and roles required for your cluster in AWS. You created the bootstrap machine. You created the control plane machines. You created the worker nodes. Procedure Change to the directory that contains the installation program and start the bootstrap process that initializes the OpenShift Container Platform control plane: $ ./openshift-install wait-for bootstrap-complete --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 20m0s for the Kubernetes API at https://api.mycluster.example.com:6443... INFO API v1.27.3 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources INFO Time elapsed: 1s If the command exits without a FATAL warning, your OpenShift Container Platform control plane has initialized. Note After the control plane initializes, it sets up the compute nodes and installs additional services in the form of Operators.
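If the wait-for command stalls instead of completing, the installation program can collect diagnostic logs from the bootstrap host; a minimal sketch, assuming SSH access to the nodes is configured (on user-provisioned infrastructure you typically also pass the node addresses explicitly, for example --bootstrap <bootstrap_ip> --master <master_ip>):
$ ./openshift-install gather bootstrap --dir <installation_directory>
See the Gathering bootstrap node diagnostic data link below for the full procedure.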
Additional resources See Monitoring installation progress for details about monitoring the installation, bootstrap, and control plane logs as an OpenShift Container Platform installation progresses. See Gathering bootstrap node diagnostic data for information about troubleshooting issues related to the bootstrap process. You can view details about the running instances that are created by using the AWS EC2 console . 12.19. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.14. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.14 Linux Client entry and save the file. Unpack the archive: $ tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: $ echo $PATH Verification After you install the OpenShift CLI, it is available using the oc command: $ oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.14 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.14 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.14 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory that is on your PATH . To check your PATH , open a terminal and execute the following command: $ echo $PATH Verification Verify your installation by using an oc command: $ oc <command>
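On any platform, you can also confirm which client version you installed; the --client flag limits the version subcommand to local client details, so it works even before you have cluster access:
$ oc version --client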
12.20. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: $ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify that you can run oc commands successfully using the exported configuration: $ oc whoami Example output system:admin 12.21. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: $ oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.27.3 master-1 Ready master 63m v1.27.3 master-2 Ready master 64m v1.27.3 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: $ oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, then after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: $ oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: $ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved.
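Because the kubelet on each node submits a second CSR for its serving certificate after the client CSR is approved, and unapproved CSRs rotate after an hour as noted above, it can help to keep a live view of the queue while nodes join; a minimal sketch using the standard watch flag:
$ oc get csr --watch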
Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: $ oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: $ oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: $ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: $ oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.27.3 master-1 Ready master 73m v1.27.3 master-2 Ready master 74m v1.27.3 worker-0 Ready worker 11m v1.27.3 worker-1 Ready worker 11m v1.27.3 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 12.22. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. Procedure Watch the cluster components come online: $ watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.14.0 True False False 19m baremetal 4.14.0 True False False 37m cloud-credential 4.14.0 True False False 40m cluster-autoscaler 4.14.0 True False False 37m config-operator 4.14.0 True False False 38m console 4.14.0 True False False 26m csi-snapshot-controller 4.14.0 True False False 37m dns 4.14.0 True False False 37m etcd 4.14.0 True False False 36m image-registry 4.14.0 True False False 31m ingress 4.14.0 True False False 30m insights 4.14.0 True False False 31m kube-apiserver 4.14.0 True False False 26m kube-controller-manager 4.14.0 True False False 36m kube-scheduler 4.14.0 True False False 36m kube-storage-version-migrator 4.14.0 True False False 37m machine-api 4.14.0 True False False 29m machine-approver 4.14.0 True False False 37m machine-config 4.14.0 True False False 36m marketplace 4.14.0 True False False 37m monitoring 4.14.0 True False False 29m network 4.14.0 True False False 38m node-tuning 4.14.0 True False False 37m openshift-apiserver 4.14.0 True False False 32m openshift-controller-manager 4.14.0 True False False 30m openshift-samples 4.14.0 True False False 32m operator-lifecycle-manager 4.14.0 True False False 37m operator-lifecycle-manager-catalog 4.14.0 True False False 37m operator-lifecycle-manager-packageserver 4.14.0 True False False 32m service-ca 4.14.0 True False False 38m storage 4.14.0 True False False 37m Configure the Operators that are not available.
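With the column layout shown in the example output above, a rough convenience filter surfaces only the Operators that are not yet fully settled; this is not part of the documented procedure, it simply checks the AVAILABLE, PROGRESSING, and DEGRADED columns:
$ oc get clusteroperators --no-headers | awk '$3 != "True" || $4 != "False" || $5 != "False"'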
12.22.1. Image registry storage configuration Amazon Web Services provides default storage, which means the Image Registry Operator is available after installation. However, if the Registry Operator cannot create an S3 bucket and automatically configure storage, you must manually configure registry storage. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. You can configure registry storage for user-provisioned infrastructure in AWS to deploy OpenShift Container Platform to hidden regions. See Configuring the registry for AWS user-provisioned infrastructure for more information. 12.22.1.1. Configuring registry storage for AWS with user-provisioned infrastructure During installation, your cloud credentials are sufficient to create an Amazon S3 bucket and the Registry Operator will automatically configure storage. If the Registry Operator cannot create an S3 bucket and automatically configure storage, you can create an S3 bucket and configure storage with the following procedure. Prerequisites You have a cluster on AWS with user-provisioned infrastructure. For Amazon S3 storage, the secret is expected to contain two keys: REGISTRY_STORAGE_S3_ACCESSKEY REGISTRY_STORAGE_S3_SECRETKEY Procedure Use the following procedure if the Registry Operator cannot create an S3 bucket and automatically configure storage. Set up a Bucket Lifecycle Policy to abort incomplete multipart uploads that are one day old. Fill in the storage configuration in configs.imageregistry.operator.openshift.io/cluster : $ oc edit configs.imageregistry.operator.openshift.io/cluster Example configuration storage: s3: bucket: <bucket-name> region: <region-name> Warning To secure your registry images in AWS, block public access to the S3 bucket. 12.22.1.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: $ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 12.23. Deleting the bootstrap resources After you complete the initial Operator configuration for the cluster, remove the bootstrap resources from Amazon Web Services (AWS). Prerequisites You completed the initial Operator configuration for your cluster. Procedure Delete the bootstrap resources. If you used the CloudFormation template, delete its stack : Delete the stack by using the AWS CLI: $ aws cloudformation delete-stack --stack-name <name> 1 1 <name> is the name of your bootstrap stack. Delete the stack by using the AWS CloudFormation console .
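The delete-stack command also returns before the deletion finishes. If you want to block until the bootstrap stack is fully removed, the AWS CLI provides the matching waiter; a minimal sketch:
$ aws cloudformation wait stack-delete-complete --stack-name <name>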
12.24. Creating the Ingress DNS Records If you removed the DNS Zone configuration, manually create DNS records that point to the Ingress load balancer. You can create either a wildcard record or specific records. While the following procedure uses A records, you can use other record types that you require, such as CNAME or alias. Prerequisites You deployed an OpenShift Container Platform cluster on Amazon Web Services (AWS) that uses infrastructure that you provisioned. You installed the OpenShift CLI ( oc ). You installed the jq package. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or Unix) . Procedure Determine the routes to create. To create a wildcard record, use *.apps.<cluster_name>.<domain_name> , where <cluster_name> is your cluster name, and <domain_name> is the Route 53 base domain for your OpenShift Container Platform cluster. To create specific records, you must create a record for each route that your cluster uses, as shown in the output of the following command: $ oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes Example output oauth-openshift.apps.<cluster_name>.<domain_name> console-openshift-console.apps.<cluster_name>.<domain_name> downloads-openshift-console.apps.<cluster_name>.<domain_name> alertmanager-main-openshift-monitoring.apps.<cluster_name>.<domain_name> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<domain_name> Retrieve the Ingress Operator load balancer status and note the value of the external IP address that it uses, which is shown in the EXTERNAL-IP column: $ oc -n openshift-ingress get service router-default Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.62.215 ab3...28.us-east-2.elb.amazonaws.com 80:31499/TCP,443:30693/TCP 5m Locate the hosted zone ID for the load balancer: $ aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[] | select(.DNSName == "<external_ip>").CanonicalHostedZoneNameID' 1 1 For <external_ip> , specify the value of the external IP address of the Ingress Operator load balancer that you obtained. Example output Z3AADJGX6KTTL2 The output of this command is the load balancer hosted zone ID. Obtain the public hosted zone ID for your cluster's domain: $ aws route53 list-hosted-zones-by-name \ --dns-name "<domain_name>" \ 1 --query 'HostedZones[?Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' 2 --output text 1 2 For <domain_name> , specify the Route 53 base domain for your OpenShift Container Platform cluster. Example output /hostedzone/Z3URY6TWQ91KVV The public hosted zone ID for your domain is shown in the command output. In this example, it is Z3URY6TWQ91KVV . Add the alias records to your private zone: $ aws route53 change-resource-record-sets --hosted-zone-id "<private_hosted_zone_id>" --change-batch '{ 1 > "Changes": [ > { > "Action": "CREATE", > "ResourceRecordSet": { > "Name": "\\052.apps.<cluster_domain>", 2 > "Type": "A", > "AliasTarget":{ > "HostedZoneId": "<hosted_zone_id>", 3 > "DNSName": "<external_ip>.", 4 > "EvaluateTargetHealth": false > } > } > } > ] > }' 1 For <private_hosted_zone_id> , specify the value from the output of the CloudFormation template for DNS and load balancing. 2 For <cluster_domain> , specify the domain or subdomain that you use with your OpenShift Container Platform cluster.
3 For <hosted_zone_id> , specify the public hosted zone ID for the load balancer that you obtained. 4 For <external_ip> , specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period ( . ) in this parameter value. Add the records to your public zone: $ aws route53 change-resource-record-sets --hosted-zone-id "<public_hosted_zone_id>" --change-batch '{ 1 > "Changes": [ > { > "Action": "CREATE", > "ResourceRecordSet": { > "Name": "\\052.apps.<cluster_domain>", 2 > "Type": "A", > "AliasTarget":{ > "HostedZoneId": "<hosted_zone_id>", 3 > "DNSName": "<external_ip>.", 4 > "EvaluateTargetHealth": false > } > } > } > ] > }' 1 For <public_hosted_zone_id> , specify the public hosted zone for your domain. 2 For <cluster_domain> , specify the domain or subdomain that you use with your OpenShift Container Platform cluster. 3 For <hosted_zone_id> , specify the public hosted zone ID for the load balancer that you obtained. 4 For <external_ip> , specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period ( . ) in this parameter value.
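To confirm that the records were created, you can list the record sets in each zone and look for the *.apps entry; note that the \\052 in the request above is the octal escape for the * wildcard. A minimal sketch for the public zone, with an illustrative JMESPath filter:
$ aws route53 list-resource-record-sets --hosted-zone-id "<public_hosted_zone_id>" --query "ResourceRecordSets[?Type=='A']"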
12.25. Completing an AWS installation on user-provisioned infrastructure After you start the OpenShift Container Platform installation on Amazon Web Services (AWS) user-provisioned infrastructure, monitor the deployment to completion. Prerequisites You removed the bootstrap node for an OpenShift Container Platform cluster on user-provisioned AWS infrastructure. You installed the oc CLI. Procedure From the directory that contains the installation program, complete the cluster installation: $ ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 40m0s for the cluster at https://api.mycluster.example.com:6443 to initialize... INFO Waiting up to 10m0s for the openshift-console route to be created... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 1s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 12.26. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: $ cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: $ oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 12.27. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 12.28. Additional resources See Working with stacks in the AWS documentation for more information about AWS CloudFormation stacks. 12.29. Next steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting . If necessary, you can remove cloud provider credentials .
[ "tar -xvf openshift-install-linux.tar.gz", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "mkdir USDHOME/clusterconfig", "openshift-install create manifests --dir USDHOME/clusterconfig", "? SSH Public Key INFO Credentials loaded from the \"myprofile\" profile in file \"/home/myuser/.aws/credentials\" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift", "ls USDHOME/clusterconfig/openshift/", "99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml", "variant: openshift version: 4.14.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true", "butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml", "openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign", "./openshift-install create install-config --dir <installation_directory> 1", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "./openshift-install create manifests --dir <installation_directory> 1", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml", "rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {}", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "jq -r .infraID <installation_directory>/metadata.json 1", "openshift-vw9j6 1", "[ { \"ParameterKey\": \"VpcCidr\", 1 \"ParameterValue\": \"10.0.0.0/16\" 2 }, { \"ParameterKey\": \"AvailabilityZoneCount\", 3 \"ParameterValue\": \"1\" 4 }, { \"ParameterKey\": \"SubnetBits\", 5 \"ParameterValue\": \"12\" 6 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: \"The number of availability zones. (Min: 1, Max: 3)\" MinValue: 1 MaxValue: 3 Default: 1 Description: \"How many AZs to create VPC subnets for. (Min: 1, Max: 3)\" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: \"Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 = /19)\" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Network Configuration\" Parameters: - VpcCidr - SubnetBits - Label: default: \"Availability Zones\" Parameters: - AvailabilityZoneCount ParameterLabels: AvailabilityZoneCount: default: \"Availability Zone Count\" VpcCidr: default: \"VPC CIDR\" SubnetBits: default: \"Bits Per Subnet\" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: \"AWS::EC2::VPC\" Properties: EnableDnsSupport: \"true\" EnableDnsHostnames: \"true\" CidrBlock: !Ref VpcCidr PublicSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" InternetGateway: Type: \"AWS::EC2::InternetGateway\" GatewayToInternet: Type: \"AWS::EC2::VPCGatewayAttachment\" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PublicRoute: Type: \"AWS::EC2::Route\" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" 
Condition: DoAz2 Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: DoAz3 Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Properties: AllocationId: \"Fn::GetAtt\": - EIP - AllocationId SubnetId: !Ref PublicSubnet EIP: Type: \"AWS::EC2::EIP\" Properties: Domain: vpc Route: Type: \"AWS::EC2::Route\" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable2: Type: \"AWS::EC2::RouteTable\" Condition: DoAz2 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz2 Properties: AllocationId: \"Fn::GetAtt\": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 EIP2: Type: \"AWS::EC2::EIP\" Condition: DoAz2 Properties: Domain: vpc Route2: Type: \"AWS::EC2::Route\" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable3: Type: \"AWS::EC2::RouteTable\" Condition: DoAz3 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation3: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz3 Properties: AllocationId: \"Fn::GetAtt\": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 EIP3: Type: \"AWS::EC2::EIP\" Condition: DoAz3 Properties: Domain: vpc Route3: Type: \"AWS::EC2::Route\" Condition: DoAz3 Properties: RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref \"AWS::NoValue\"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref \"AWS::NoValue\"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. 
Value: !Join [ \",\", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PublicSubnet3, !Ref \"AWS::NoValue\"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. Value: !Join [ \",\", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PrivateSubnet3, !Ref \"AWS::NoValue\"]] ] PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable", "aws route53 list-hosted-zones-by-name --dns-name <route53_domain> 1", "mycluster.example.com. False 100 HOSTEDZONES 65F8F38E-2268-B835-E15C-AB55336FCBFA /hostedzone/Z21IXYZABCZ2A4 mycluster.example.com. 10", "[ { \"ParameterKey\": \"ClusterName\", 1 \"ParameterValue\": \"mycluster\" 2 }, { \"ParameterKey\": \"InfrastructureName\", 3 \"ParameterValue\": \"mycluster-<random_string>\" 4 }, { \"ParameterKey\": \"HostedZoneId\", 5 \"ParameterValue\": \"<random_string>\" 6 }, { \"ParameterKey\": \"HostedZoneName\", 7 \"ParameterValue\": \"example.com\" 8 }, { \"ParameterKey\": \"PublicSubnets\", 9 \"ParameterValue\": \"subnet-<random_string>\" 10 }, { \"ParameterKey\": \"PrivateSubnets\", 11 \"ParameterValue\": \"subnet-<random_string>\" 12 }, { \"ParameterKey\": \"VpcId\", 13 \"ParameterValue\": \"vpc-<random_string>\" 14 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-dns/cd3e5de0-2fd4-11eb-5cf0-12be5c33a183", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Network Elements (Route53 & LBs) Parameters: ClusterName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Cluster name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, representative cluster name to use for host names and other identifying names. Type: String InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String HostedZoneId: Description: The Route53 public zone ID to register the targets with, such as Z21IXYZABCZ2A4. Type: String HostedZoneName: Description: The Route53 zone to register the targets with, such as example.com. Omit the trailing period. Type: String Default: \"example.com\" PublicSubnets: Description: The internet-facing subnets. Type: List<AWS::EC2::Subnet::Id> PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> VpcId: Description: The VPC-scoped resources will belong to this VPC. 
Type: AWS::EC2::VPC::Id Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - ClusterName - InfrastructureName - Label: default: \"Network Configuration\" Parameters: - VpcId - PublicSubnets - PrivateSubnets - Label: default: \"DNS\" Parameters: - HostedZoneName - HostedZoneId ParameterLabels: ClusterName: default: \"Cluster Name\" InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" PublicSubnets: default: \"Public Subnets\" PrivateSubnets: default: \"Private Subnets\" HostedZoneName: default: \"Public Hosted Zone Name\" HostedZoneId: default: \"Public Hosted Zone ID\" Resources: ExtApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join [\"-\", [!Ref InfrastructureName, \"ext\"]] IpAddressType: ipv4 Subnets: !Ref PublicSubnets Type: network IntApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join [\"-\", [!Ref InfrastructureName, \"int\"]] Scheme: internal IpAddressType: ipv4 Subnets: !Ref PrivateSubnets Type: network IntDns: Type: \"AWS::Route53::HostedZone\" Properties: HostedZoneConfig: Comment: \"Managed by CloudFormation\" Name: !Join [\".\", [!Ref ClusterName, !Ref HostedZoneName]] HostedZoneTags: - Key: Name Value: !Join [\"-\", [!Ref InfrastructureName, \"int\"]] - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"owned\" VPCs: - VPCId: !Ref VpcId VPCRegion: !Ref \"AWS::Region\" ExternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref HostedZoneId RecordSets: - Name: !Join [ \".\", [\"api\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt ExtApiElb.CanonicalHostedZoneID DNSName: !GetAtt ExtApiElb.DNSName InternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref IntDns RecordSets: - Name: !Join [ \".\", [\"api\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName - Name: !Join [ \".\", [\"api-int\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName ExternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: ExternalApiTargetGroup LoadBalancerArn: Ref: ExtApiElb Port: 6443 Protocol: TCP ExternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/readyz\" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalApiTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 6443 Protocol: TCP InternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/readyz\" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: 
deregistration_delay.timeout_seconds Value: 60 InternalServiceInternalListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalServiceTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 22623 Protocol: TCP InternalServiceTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/healthz\" HealthCheckPort: 22623 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 22623 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 RegisterTargetLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join [\"-\", [!Ref InfrastructureName, \"nlb\", \"lambda\", \"role\"]] AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"lambda.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"master\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref InternalApiTargetGroup - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref InternalServiceTargetGroup - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref ExternalApiTargetGroup RegisterNlbIpTargets: Type: \"AWS::Lambda::Function\" Properties: Handler: \"index.handler\" Role: Fn::GetAtt: - \"RegisterTargetLambdaIamRole\" - \"Arn\" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): elb = boto3.client('elbv2') if event['RequestType'] == 'Delete': elb.deregister_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) elif event['RequestType'] == 'Create': elb.register_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['TargetArn']+event['ResourceProperties']['TargetIp']) Runtime: \"python3.8\" Timeout: 120 RegisterSubnetTagsLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join [\"-\", [!Ref InfrastructureName, \"subnet-tags-lambda-role\"]] AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"lambda.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"subnet-tagging-policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: [ \"ec2:DeleteTags\", \"ec2:CreateTags\" ] Resource: \"arn:aws:ec2:*:*:subnet/*\" - Effect: \"Allow\" Action: [ \"ec2:DescribeSubnets\", \"ec2:DescribeTags\" ] Resource: \"*\" RegisterSubnetTags: Type: \"AWS::Lambda::Function\" Properties: Handler: \"index.handler\" Role: Fn::GetAtt: - \"RegisterSubnetTagsLambdaIamRole\" - \"Arn\" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): ec2_client = boto3.client('ec2') if event['RequestType'] == 'Delete': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.delete_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + 
event['ResourceProperties']['InfrastructureName']}]); elif event['RequestType'] == 'Create': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.create_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName'], 'Value': 'shared'}]); responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['InfrastructureName']+event['ResourceProperties']['Subnets'][0]) Runtime: \"python3.8\" Timeout: 120 RegisterPublicSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PublicSubnets RegisterPrivateSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PrivateSubnets Outputs: PrivateHostedZoneId: Description: Hosted zone ID for the private DNS, which is required for private records. Value: !Ref IntDns ExternalApiLoadBalancerName: Description: Full name of the external API load balancer. Value: !GetAtt ExtApiElb.LoadBalancerFullName InternalApiLoadBalancerName: Description: Full name of the internal API load balancer. Value: !GetAtt IntApiElb.LoadBalancerFullName ApiServerDnsName: Description: Full hostname of the API server, which is required for the Ignition config files. Value: !Join [\".\", [\"api-int\", !Ref ClusterName, !Ref HostedZoneName]] RegisterNlbIpTargetsLambda: Description: Lambda ARN useful to help register or deregister IP targets for these load balancers. Value: !GetAtt RegisterNlbIpTargets.Arn ExternalApiTargetGroupArn: Description: ARN of the external API target group. Value: !Ref ExternalApiTargetGroup InternalApiTargetGroupArn: Description: ARN of the internal API target group. Value: !Ref InternalApiTargetGroup InternalServiceTargetGroupArn: Description: ARN of the internal service target group. Value: !Ref InternalServiceTargetGroup", "Type: CNAME TTL: 10 ResourceRecords: - !GetAtt IntApiElb.DNSName", "[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"VpcCidr\", 3 \"ParameterValue\": \"10.0.0.0/16\" 4 }, { \"ParameterKey\": \"PrivateSubnets\", 5 \"ParameterValue\": \"subnet-<random_string>\" 6 }, { \"ParameterKey\": \"VpcId\", 7 \"ParameterValue\": \"vpc-<random_string>\" 8 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-sec/03bd4210-2ed7-11eb-6d7a-13fc0b61e9db", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Security Elements (Security Groups & IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. 
Type: String VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Network Configuration\" Parameters: - VpcId - VpcCidr - PrivateSubnets ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" VpcCidr: default: \"VPC CIDR\" PrivateSubnets: default: \"Private Subnets\" Resources: MasterSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Master Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr - IpProtocol: tcp ToPort: 6443 FromPort: 6443 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22623 ToPort: 22623 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId WorkerSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Worker Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId MasterIngressEtcd: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: etcd FromPort: 2379 ToPort: 2380 IpProtocol: tcp MasterIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressWorkerVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressWorkerGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressWorkerIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressWorkerIpsecNat: Type: 
AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressWorkerIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressWorkerInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressWorkerInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressWorkerIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIngressWorkerIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressMasterVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt 
WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressMasterGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressMasterIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressMasterIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressMasterIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressMasterInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressMasterInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes secure kubelet port FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt 
WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal Kubernetes communication FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressMasterIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressMasterIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" Action: - \"sts:AssumeRole\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"master\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: - \"ec2:AttachVolume\" - \"ec2:AuthorizeSecurityGroupIngress\" - \"ec2:CreateSecurityGroup\" - \"ec2:CreateTags\" - \"ec2:CreateVolume\" - \"ec2:DeleteSecurityGroup\" - \"ec2:DeleteVolume\" - \"ec2:Describe*\" - \"ec2:DetachVolume\" - \"ec2:ModifyInstanceAttribute\" - \"ec2:ModifyVolume\" - \"ec2:RevokeSecurityGroupIngress\" - \"elasticloadbalancing:AddTags\" - \"elasticloadbalancing:AttachLoadBalancerToSubnets\" - \"elasticloadbalancing:ApplySecurityGroupsToLoadBalancer\" - \"elasticloadbalancing:CreateListener\" - \"elasticloadbalancing:CreateLoadBalancer\" - \"elasticloadbalancing:CreateLoadBalancerPolicy\" - \"elasticloadbalancing:CreateLoadBalancerListeners\" - \"elasticloadbalancing:CreateTargetGroup\" - \"elasticloadbalancing:ConfigureHealthCheck\" - \"elasticloadbalancing:DeleteListener\" - \"elasticloadbalancing:DeleteLoadBalancer\" - \"elasticloadbalancing:DeleteLoadBalancerListeners\" - \"elasticloadbalancing:DeleteTargetGroup\" - \"elasticloadbalancing:DeregisterInstancesFromLoadBalancer\" - \"elasticloadbalancing:DeregisterTargets\" - \"elasticloadbalancing:Describe*\" - \"elasticloadbalancing:DetachLoadBalancerFromSubnets\" - \"elasticloadbalancing:ModifyListener\" - \"elasticloadbalancing:ModifyLoadBalancerAttributes\" - \"elasticloadbalancing:ModifyTargetGroup\" - \"elasticloadbalancing:ModifyTargetGroupAttributes\" - \"elasticloadbalancing:RegisterInstancesWithLoadBalancer\" - \"elasticloadbalancing:RegisterTargets\" - \"elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer\" - \"elasticloadbalancing:SetLoadBalancerPoliciesOfListener\" - \"kms:DescribeKey\" Resource: \"*\" MasterInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Roles: - Ref: \"MasterIamRole\" WorkerIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" 
Action: - \"sts:AssumeRole\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"worker\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: - \"ec2:DescribeInstances\" - \"ec2:DescribeRegions\" Resource: \"*\" WorkerInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Roles: - Ref: \"WorkerIamRole\" Outputs: MasterSecurityGroupId: Description: Master Security Group ID Value: !GetAtt MasterSecurityGroup.GroupId WorkerSecurityGroupId: Description: Worker Security Group ID Value: !GetAtt WorkerSecurityGroup.GroupId MasterInstanceProfile: Description: Master IAM Instance Profile Value: !Ref MasterInstanceProfile WorkerInstanceProfile: Description: Worker IAM Instance Profile Value: !Ref WorkerInstanceProfile", "openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.images.aws.regions[\"us-west-1\"].image'", "ami-0d3e625f84626bbda", "openshift-install coreos print-stream-json | jq -r '.architectures.aarch64.images.aws.regions[\"us-west-1\"].image'", "ami-0af1d3b7fa5be2131", "export AWS_PROFILE=<aws_profile> 1", "export AWS_DEFAULT_REGION=<aws_region> 1", "export RHCOS_VERSION=<version> 1", "export VMIMPORT_BUCKET_NAME=<s3_bucket_name>", "cat <<EOF > containers.json { \"Description\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\", \"Format\": \"vmdk\", \"UserBucket\": { \"S3Bucket\": \"USD{VMIMPORT_BUCKET_NAME}\", \"S3Key\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64.vmdk\" } } EOF", "aws ec2 import-snapshot --region USD{AWS_DEFAULT_REGION} --description \"<description>\" \\ 1 --disk-container \"file://<file_path>/containers.json\" 2", "watch -n 5 aws ec2 describe-import-snapshot-tasks --region USD{AWS_DEFAULT_REGION}", "{ \"ImportSnapshotTasks\": [ { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"ImportTaskId\": \"import-snap-fh6i8uil\", \"SnapshotTaskDetail\": { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"DiskImageSize\": 819056640.0, \"Format\": \"VMDK\", \"SnapshotId\": \"snap-06331325870076318\", \"Status\": \"completed\", \"UserBucket\": { \"S3Bucket\": \"external-images\", \"S3Key\": \"rhcos-4.7.0-x86_64-aws.x86_64.vmdk\" } } } ] }", "aws ec2 register-image --region USD{AWS_DEFAULT_REGION} --architecture x86_64 \\ 1 --description \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 2 --ena-support --name \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 3 --virtualization-type hvm --root-device-name '/dev/xvda' --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4", "aws s3 mb s3://<cluster-name>-infra 1", "aws s3 cp <installation_directory>/bootstrap.ign s3://<cluster-name>-infra/bootstrap.ign 1", "aws s3 ls s3://<cluster-name>-infra/", "2019-04-03 16:15:16 314878 bootstrap.ign", "[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"AllowedBootstrapSshCidr\", 5 \"ParameterValue\": \"0.0.0.0/0\" 6 }, { \"ParameterKey\": \"PublicSubnet\", 7 \"ParameterValue\": \"subnet-<random_string>\" 8 }, { \"ParameterKey\": \"MasterSecurityGroupId\", 9 \"ParameterValue\": \"sg-<random_string>\" 10 }, { \"ParameterKey\": \"VpcId\", 11 \"ParameterValue\": \"vpc-<random_string>\" 12 }, { \"ParameterKey\": \"BootstrapIgnitionLocation\", 13 \"ParameterValue\": \"s3://<bucket_name>/bootstrap.ign\" 14 }, { \"ParameterKey\": \"AutoRegisterELB\", 15 \"ParameterValue\": \"yes\" 16 }, { \"ParameterKey\": 
\"RegisterNlbIpTargetsLambdaArn\", 17 \"ParameterValue\": \"arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>\" 18 }, { \"ParameterKey\": \"ExternalApiTargetGroupArn\", 19 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>\" 20 }, { \"ParameterKey\": \"InternalApiTargetGroupArn\", 21 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 22 }, { \"ParameterKey\": \"InternalServiceTargetGroupArn\", 23 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 24 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-bootstrap/12944486-2add-11eb-9dee-12dace8e3a83", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Bootstrap (EC2 Instance, Security Groups and IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AllowedBootstrapSshCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/([0-9]|1[0-9]|2[0-9]|3[0-2]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/0-32. Default: 0.0.0.0/0 Description: CIDR block to allow SSH access to the bootstrap node. Type: String PublicSubnet: Description: The public subnet to launch the bootstrap node into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID for registering temporary rules. Type: AWS::EC2::SecurityGroup::Id VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id BootstrapIgnitionLocation: Default: s3://my-s3-bucket/bootstrap.ign Description: Ignition config file location. Type: String AutoRegisterELB: Default: \"yes\" AllowedValues: - \"yes\" - \"no\" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. 
Type: String BootstrapInstanceType: Description: Instance type for the bootstrap EC2 instance Default: \"i3.large\" Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - RhcosAmi - BootstrapIgnitionLocation - MasterSecurityGroupId - Label: default: \"Network Configuration\" Parameters: - VpcId - AllowedBootstrapSshCidr - PublicSubnet - Label: default: \"Load Balancer Automation\" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" AllowedBootstrapSshCidr: default: \"Allowed SSH Source\" PublicSubnet: default: \"Public Subnet\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" BootstrapIgnitionLocation: default: \"Bootstrap Ignition Source\" MasterSecurityGroupId: default: \"Master Security Group ID\" AutoRegisterELB: default: \"Use Provided ELB Automation\" Conditions: DoRegistration: !Equals [\"yes\", !Ref AutoRegisterELB] Resources: BootstrapIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"bootstrap\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: \"ec2:Describe*\" Resource: \"*\" - Effect: \"Allow\" Action: \"ec2:AttachVolume\" Resource: \"*\" - Effect: \"Allow\" Action: \"ec2:DetachVolume\" Resource: \"*\" - Effect: \"Allow\" Action: \"s3:GetObject\" Resource: \"*\" BootstrapInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Path: \"/\" Roles: - Ref: \"BootstrapIamRole\" BootstrapSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Bootstrap Security Group SecurityGroupIngress: - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref AllowedBootstrapSshCidr - IpProtocol: tcp ToPort: 19531 FromPort: 19531 CidrIp: 0.0.0.0/0 VpcId: !Ref VpcId BootstrapInstance: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi IamInstanceProfile: !Ref BootstrapInstanceProfile InstanceType: !Ref BootstrapInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"true\" DeviceIndex: \"0\" GroupSet: - !Ref \"BootstrapSecurityGroup\" - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"PublicSubnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"replace\":{\"source\":\"USD{S3Loc}\"}},\"version\":\"3.1.0\"}}' - { S3Loc: !Ref BootstrapIgnitionLocation } RegisterBootstrapApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp Outputs: BootstrapInstanceId: Description: Bootstrap Instance ID. 
Value: !Ref BootstrapInstance BootstrapPublicIp: Description: The bootstrap node public IP address. Value: !GetAtt BootstrapInstance.PublicIp BootstrapPrivateIp: Description: The bootstrap node private IP address. Value: !GetAtt BootstrapInstance.PrivateIp", "[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"AutoRegisterDNS\", 5 \"ParameterValue\": \"yes\" 6 }, { \"ParameterKey\": \"PrivateHostedZoneId\", 7 \"ParameterValue\": \"<random_string>\" 8 }, { \"ParameterKey\": \"PrivateHostedZoneName\", 9 \"ParameterValue\": \"mycluster.example.com\" 10 }, { \"ParameterKey\": \"Master0Subnet\", 11 \"ParameterValue\": \"subnet-<random_string>\" 12 }, { \"ParameterKey\": \"Master1Subnet\", 13 \"ParameterValue\": \"subnet-<random_string>\" 14 }, { \"ParameterKey\": \"Master2Subnet\", 15 \"ParameterValue\": \"subnet-<random_string>\" 16 }, { \"ParameterKey\": \"MasterSecurityGroupId\", 17 \"ParameterValue\": \"sg-<random_string>\" 18 }, { \"ParameterKey\": \"IgnitionLocation\", 19 \"ParameterValue\": \"https://api-int.<cluster_name>.<domain_name>:22623/config/master\" 20 }, { \"ParameterKey\": \"CertificateAuthorities\", 21 \"ParameterValue\": \"data:text/plain;charset=utf-8;base64,ABC...xYz==\" 22 }, { \"ParameterKey\": \"MasterInstanceProfileName\", 23 \"ParameterValue\": \"<roles_stack>-MasterInstanceProfile-<random_string>\" 24 }, { \"ParameterKey\": \"MasterInstanceType\", 25 \"ParameterValue\": \"\" 26 }, { \"ParameterKey\": \"AutoRegisterELB\", 27 \"ParameterValue\": \"yes\" 28 }, { \"ParameterKey\": \"RegisterNlbIpTargetsLambdaArn\", 29 \"ParameterValue\": \"arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>\" 30 }, { \"ParameterKey\": \"ExternalApiTargetGroupArn\", 31 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>\" 32 }, { \"ParameterKey\": \"InternalApiTargetGroupArn\", 33 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 34 }, { \"ParameterKey\": \"InternalServiceTargetGroupArn\", 35 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 36 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-control-plane/21c7e2b0-2ee2-11eb-c6f6-0aa34627df4b", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 master instances) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. 
Type: AWS::EC2::Image::Id AutoRegisterDNS: Default: \"\" Description: unused Type: String PrivateHostedZoneId: Default: \"\" Description: unused Type: String PrivateHostedZoneName: Default: \"\" Description: unused Type: String Master0Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master1Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master2Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID to associate with master nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/master Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String MasterInstanceProfileName: Description: IAM profile to associate with master nodes. Type: String MasterInstanceType: Default: m5.xlarge Type: String AutoRegisterELB: Default: \"yes\" AllowedValues: - \"yes\" - \"no\" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. 
Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - MasterInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - MasterSecurityGroupId - MasterInstanceProfileName - Label: default: \"Network Configuration\" Parameters: - VpcId - AllowedBootstrapSshCidr - Master0Subnet - Master1Subnet - Master2Subnet - Label: default: \"Load Balancer Automation\" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" Master0Subnet: default: \"Master-0 Subnet\" Master1Subnet: default: \"Master-1 Subnet\" Master2Subnet: default: \"Master-2 Subnet\" MasterInstanceType: default: \"Master Instance Type\" MasterInstanceProfileName: default: \"Master Instance Profile Name\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" BootstrapIgnitionLocation: default: \"Master Ignition Source\" CertificateAuthorities: default: \"Ignition CA String\" MasterSecurityGroupId: default: \"Master Security Group ID\" AutoRegisterELB: default: \"Use Provided ELB Automation\" Conditions: DoRegistration: !Equals [\"yes\", !Ref AutoRegisterELB] Resources: Master0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master0Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster0: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp Master1: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master1Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join 
[\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster1: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp Master2: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master2Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster2: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp Outputs: PrivateIPs: Description: The control-plane node private IP addresses. 
Value: !Join [ \",\", [!GetAtt Master0.PrivateIp, !GetAtt Master1.PrivateIp, !GetAtt Master2.PrivateIp] ]", "[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"Subnet\", 5 \"ParameterValue\": \"subnet-<random_string>\" 6 }, { \"ParameterKey\": \"WorkerSecurityGroupId\", 7 \"ParameterValue\": \"sg-<random_string>\" 8 }, { \"ParameterKey\": \"IgnitionLocation\", 9 \"ParameterValue\": \"https://api-int.<cluster_name>.<domain_name>:22623/config/worker\" 10 }, { \"ParameterKey\": \"CertificateAuthorities\", 11 \"ParameterValue\": \"\" 12 }, { \"ParameterKey\": \"WorkerInstanceProfileName\", 13 \"ParameterValue\": \"\" 14 }, { \"ParameterKey\": \"WorkerInstanceType\", 15 \"ParameterValue\": \"\" 16 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml \\ 2 --parameters file://<parameters>.json 3", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-worker-1/729ee301-1c2a-11eb-348f-sd9888c65b59", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 worker instance) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id Subnet: Description: The subnets, recommend private, to launch the worker nodes into. Type: AWS::EC2::Subnet::Id WorkerSecurityGroupId: Description: The worker security group ID to associate with worker nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/worker Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String WorkerInstanceProfileName: Description: IAM profile to associate with worker nodes. 
Type: String WorkerInstanceType: Default: m5.large Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - WorkerInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - WorkerSecurityGroupId - WorkerInstanceProfileName - Label: default: \"Network Configuration\" Parameters: - Subnet ParameterLabels: Subnet: default: \"Subnet\" InfrastructureName: default: \"Infrastructure Name\" WorkerInstanceType: default: \"Worker Instance Type\" WorkerInstanceProfileName: default: \"Worker Instance Profile Name\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" IgnitionLocation: default: \"Worker Ignition Source\" CertificateAuthorities: default: \"Ignition CA String\" WorkerSecurityGroupId: default: \"Worker Security Group ID\" Resources: Worker0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref WorkerInstanceProfileName InstanceType: !Ref WorkerInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"WorkerSecurityGroupId\" SubnetId: !Ref \"Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" Outputs: PrivateIP: Description: The compute node private IP address. Value: !GetAtt Worker0.PrivateIp", "./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Waiting up to 20m0s for the Kubernetes API at https://api.mycluster.example.com:6443 INFO API v1.27.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources INFO Time elapsed: 1s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.27.3 master-1 Ready master 63m v1.27.3 master-2 Ready master 64m v1.27.3", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.27.3 master-1 Ready master 73m v1.27.3 master-2 Ready master 74m v1.27.3 worker-0 Ready worker 11m v1.27.3 worker-1 Ready worker 11m v1.27.3", 
"watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.14.0 True False False 19m baremetal 4.14.0 True False False 37m cloud-credential 4.14.0 True False False 40m cluster-autoscaler 4.14.0 True False False 37m config-operator 4.14.0 True False False 38m console 4.14.0 True False False 26m csi-snapshot-controller 4.14.0 True False False 37m dns 4.14.0 True False False 37m etcd 4.14.0 True False False 36m image-registry 4.14.0 True False False 31m ingress 4.14.0 True False False 30m insights 4.14.0 True False False 31m kube-apiserver 4.14.0 True False False 26m kube-controller-manager 4.14.0 True False False 36m kube-scheduler 4.14.0 True False False 36m kube-storage-version-migrator 4.14.0 True False False 37m machine-api 4.14.0 True False False 29m machine-approver 4.14.0 True False False 37m machine-config 4.14.0 True False False 36m marketplace 4.14.0 True False False 37m monitoring 4.14.0 True False False 29m network 4.14.0 True False False 38m node-tuning 4.14.0 True False False 37m openshift-apiserver 4.14.0 True False False 32m openshift-controller-manager 4.14.0 True False False 30m openshift-samples 4.14.0 True False False 32m operator-lifecycle-manager 4.14.0 True False False 37m operator-lifecycle-manager-catalog 4.14.0 True False False 37m operator-lifecycle-manager-packageserver 4.14.0 True False False 32m service-ca 4.14.0 True False False 38m storage 4.14.0 True False False 37m", "oc edit configs.imageregistry.operator.openshift.io/cluster", "storage: s3: bucket: <bucket-name> region: <region-name>", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "aws cloudformation delete-stack --stack-name <name> 1", "oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes", "oauth-openshift.apps.<cluster_name>.<domain_name> console-openshift-console.apps.<cluster_name>.<domain_name> downloads-openshift-console.apps.<cluster_name>.<domain_name> alertmanager-main-openshift-monitoring.apps.<cluster_name>.<domain_name> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<domain_name>", "oc -n openshift-ingress get service router-default", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.62.215 ab3...28.us-east-2.elb.amazonaws.com 80:31499/TCP,443:30693/TCP 5m", "aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[] | select(.DNSName == \"<external_ip>\").CanonicalHostedZoneNameID' 1", "Z3AADJGX6KTTL2", "aws route53 list-hosted-zones-by-name --dns-name \"<domain_name>\" \\ 1 --query 'HostedZones[? 
Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' 2 --output text", "/hostedzone/Z3URY6TWQ91KVV", "aws route53 change-resource-record-sets --hosted-zone-id \"<private_hosted_zone_id>\" --change-batch '{ 1 > \"Changes\": [ > { > \"Action\": \"CREATE\", > \"ResourceRecordSet\": { > \"Name\": \"\\\\052.apps.<cluster_domain>\", 2 > \"Type\": \"A\", > \"AliasTarget\":{ > \"HostedZoneId\": \"<hosted_zone_id>\", 3 > \"DNSName\": \"<external_ip>.\", 4 > \"EvaluateTargetHealth\": false > } > } > } > ] > }'", "aws route53 change-resource-record-sets --hosted-zone-id \"<public_hosted_zone_id>\"\" --change-batch '{ 1 > \"Changes\": [ > { > \"Action\": \"CREATE\", > \"ResourceRecordSet\": { > \"Name\": \"\\\\052.apps.<cluster_domain>\", 2 > \"Type\": \"A\", > \"AliasTarget\":{ > \"HostedZoneId\": \"<hosted_zone_id>\", 3 > \"DNSName\": \"<external_ip>.\", 4 > \"EvaluateTargetHealth\": false > } > } > } > ] > }'", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 40m0s for the cluster at https://api.mycluster.example.com:6443 to initialize INFO Waiting up to 10m0s for the openshift-console route to be created INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 1s", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None" ]
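Note that the stack creation commands in the listings above are each paired with an aws cloudformation describe-stacks call to confirm that the stack reaches a CREATE_COMPLETE status. As an aside that is not part of the original document, the AWS CLI also provides a blocking waiter that polls until the stack is ready, which can replace the manual re-checking:
aws cloudformation wait stack-create-complete --stack-name <name>   # returns once the stack is CREATE_COMPLETE; exits with an error if creation fails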
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_aws/installing-aws-user-infra
Chapter 2. Horizontal Scaling in Red Hat Ansible Automation Platform
Chapter 2. Horizontal Scaling in Red Hat Ansible Automation Platform You can set up multi-node deployments for components across Ansible Automation Platform. Whether you require horizontal scaling for Automation Execution, Automation Decisions, or automation mesh, you can scale your deployments based on your organization's needs. 2.1. Horizontal scaling in Event-Driven Ansible controller With Event-Driven Ansible controller, you can set up horizontal scaling for your events automation. This multi-node deployment enables you to define as many nodes as you prefer during the installation process. You can also increase or decrease the number of nodes at any time according to your organizational needs. The following node types are used in this deployment: API node type Responds to the HTTP REST API of Event-Driven Ansible controller. Worker node type Runs an Event-Driven Ansible worker, which is the component of Event-Driven Ansible that not only manages projects and activations, but also executes the activations themselves. Hybrid node type Is a combination of the API node and the worker node. The following example shows how you can set up an inventory file for horizontal scaling of Event-Driven Ansible controller on Red Hat Enterprise Linux VMs using the host group name [automationeda] and the node type variable eda_type : [automationeda] 3.88.116.111 routable_hostname=automationeda-api.example.com eda_type=api # worker node 3.88.116.112 routable_hostname=automationeda-api.example.com eda_type=worker 2.1.1. Sizing and scaling guidelines API nodes process user requests (interactions with the UI or API), while worker nodes process the activations and other background tasks required for Event-Driven Ansible to function properly. The number of API nodes you require correlates to the desired number of users of the application, and the number of worker nodes correlates to the desired number of activations you want to run. Because activations are variable and are handled by worker nodes, the supported approach for scaling is to use separate API and worker nodes rather than hybrid nodes: dedicated worker nodes allocate hardware resources more efficiently. By separating the node types, you can scale each one independently based on specific needs, leading to better resource utilization and cost efficiency. For example, you might scale up your node deployment when you deploy Event-Driven Ansible for a small group of users who will run a large number of activations. In this case, one API node is adequate, but if you require more capacity, you can scale up to three additional worker nodes. To set up a multi-node deployment, follow the procedure in Setting up horizontal scaling for Event-Driven Ansible controller . 2.1.2. Setting up horizontal scaling for Event-Driven Ansible controller To scale up (add more nodes) or scale down (remove nodes), update the content of the inventory file to add or remove nodes and rerun the installation program. Procedure Update the inventory to add two more worker nodes: [automationeda] 3.88.116.111 routable_hostname=automationeda-api.example.com eda_type=api 3.88.116.112 routable_hostname=automationeda-api.example.com eda_type=worker # two more worker nodes 3.88.116.113 routable_hostname=automationeda-api.example.com eda_type=worker 3.88.116.114 routable_hostname=automationeda-api.example.com eda_type=worker Re-run the installer.
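The final step, re-running the installer, generally means invoking the installation playbook again with the updated inventory. A minimal sketch, assuming the containerized installer collection name (ansible.containerized_installer) and an inventory file named inventory in the current directory:

ansible-playbook -i inventory ansible.containerized_installer.install

The installer is expected to configure the newly listed worker nodes while leaving the existing nodes in place.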
[ "[automationeda] 3.88.116.111 routable_hostname=automationeda-api.example.com eda_type=api worker node 3.88.116.112 routable_hostname=automationeda-api.example.com eda_type=worker", "[automationeda] 3.88.116.111 routable_hostname=automationeda-api.example.com eda_type=api 3.88.116.112 routable_hostname=automationeda-api.example.com eda_type=worker two more worker nodes 3.88.116.113 routable_hostname=automationeda-api.example.com eda_type=worker 3.88.116.114 routable_hostname=automationeda-api.example.com eda_type=worker" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/containerized_installation/assembly-horizontal-scaling
Chapter 2. Installing and configuring Ceph for OpenStack
Chapter 2. Installing and configuring Ceph for OpenStack As a storage administrator, you must install and configure Ceph before the Red Hat OpenStack Platform can use the Ceph block devices. Prerequisites A new or existing Red Hat Ceph Storage cluster. 2.1. Creating Ceph pools for OpenStack You can create Ceph pools for use with OpenStack. By default, Ceph block devices use the rbd pool, but you can use any available pool. Prerequisites A running Red Hat Ceph Storage cluster. Procedure Verify that the Red Hat Ceph Storage cluster is running and is in a HEALTH_OK state: Create the Ceph pools: Example In the above example, 128 is the number of placement groups. Important Red Hat recommends using the Ceph Placement Groups per Pool Calculator to calculate a suitable number of placement groups for the pools. Additional Resources See the Pools chapter in the Storage Strategies guide for more details on creating pools. 2.2. Installing the Ceph client on OpenStack You can install the Ceph client packages on the Red Hat OpenStack Platform to access the Ceph storage cluster. Prerequisites A running Red Hat Ceph Storage cluster. Access to the Ceph software repository. Root-level access to the OpenStack Nova, Cinder, Cinder Backup, and Glance nodes. Procedure On the OpenStack Nova, Cinder, and Cinder Backup nodes, install the following packages: On the OpenStack Glance host, install the python-rbd package: 2.3. Copying the Ceph configuration file to OpenStack Copy the Ceph configuration file to the nova-compute , cinder-backup , cinder-volume , and glance-api nodes. Prerequisites A running Red Hat Ceph Storage cluster. Access to the Ceph software repository. Root-level access to the OpenStack Nova, Cinder, and Glance nodes. Procedure Copy the Ceph configuration file from the Ceph Monitor host to the OpenStack Nova, Cinder, Cinder Backup, and Glance nodes: 2.4. Configuring Ceph client authentication You can configure authentication for the Ceph client to access the Red Hat OpenStack Platform. Prerequisites Root-level access to the Ceph Monitor host. A running Red Hat Ceph Storage cluster. Procedure From a Ceph Monitor host, create new users for Cinder, Cinder Backup, and Glance: Add the keyrings for client.cinder , client.cinder-backup , and client.glance to the appropriate nodes and change their ownership: OpenStack Nova nodes need the keyring file for the nova-compute process: The OpenStack Nova nodes also need to store the secret key of the client.cinder user in libvirt . The libvirt process needs the secret key to access the cluster while attaching a block device from Cinder. Create a temporary copy of the secret key on the OpenStack Nova nodes: If the storage cluster contains Ceph block device images that use the exclusive-lock feature, ensure that all Ceph block device users have permissions to blocklist clients: Return to the OpenStack Nova host: Generate a UUID for the secret, and save the UUID of the secret for configuring nova-compute later: Note You do not necessarily need the UUID on all the Nova compute nodes. However, from a platform consistency perspective, it is better to keep the same UUID. On the OpenStack Nova nodes, add the secret key to libvirt and remove the temporary copy of the key: Set and define the secret for libvirt : Additional Resources See the Managing Ceph users section in the Red Hat Ceph Storage Administration Guide for more details.
See the Configuring the existing ceph storage cluster section in the Integrating an Overcloud with an Existing Red Hat Ceph Cluster Guide for Red Hat OpenStack Platform to learn more about user capabilities.
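For orientation only, the credentials and UUID created in this chapter are later consumed on the OpenStack side. A sketch of the Cinder RBD backend settings that typically reference them (the section name [ceph] and the exact values are illustrative; the authoritative configuration belongs to the Red Hat OpenStack Platform documentation):

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = <uuid_from_uuid-secret.txt>

This shows how the client.cinder user, the volumes pool, and the libvirt secret UUID fit together.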
[ "ceph -s", "ceph osd pool create volumes 128 ceph osd pool create backups 128 ceph osd pool create images 128 ceph osd pool create vms 128", "dnf install python-rbd ceph-common", "dnf install python-rbd", "scp /etc/ceph/ceph.conf OPENSTACK_NODES :/etc/ceph", "ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images' ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups' ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'", "ceph auth get-or-create client.cinder | ssh CINDER_VOLUME_NODE sudo tee /etc/ceph/ceph.client.cinder.keyring ssh CINDER_VOLUME_NODE chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring ceph auth get-or-create client.cinder-backup | ssh CINDER_BACKUP_NODE tee /etc/ceph/ceph.client.cinder-backup.keyring ssh CINDER_BACKUP_NODE chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring ceph auth get-or-create client.glance | ssh GLANCE_API_NODE sudo tee /etc/ceph/ceph.client.glance.keyring ssh GLANCE_API_NODE chown glance:glance /etc/ceph/ceph.client.glance.keyring", "ceph auth get-or-create client.cinder | ssh NOVA_NODE tee /etc/ceph/ceph.client.cinder.keyring", "ceph auth get-key client.cinder | ssh NOVA_NODE tee client.cinder.key", "ceph auth caps client. ID mon 'allow r, allow command \"osd blacklist\"' osd ' EXISTING_OSD_USER_CAPS '", "ssh NOVA_NODE", "uuidgen > uuid-secret.txt", "cat > secret.xml <<EOF <secret ephemeral='no' private='no'> <uuid>`cat uuid-secret.txt`</uuid> <usage type='ceph'> <name>client.cinder secret</name> </usage> </secret> EOF", "virsh secret-define --file secret.xml virsh secret-set-value --secret USD(cat uuid-secret.txt) --base64 USD(cat client.cinder.key) && rm client.cinder.key secret.xml" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/block_device_to_openstack_guide/installing-and-configuring-ceph-for-openstack
Appendix A. Kickstart script file format reference
Appendix A. Kickstart script file format reference This reference describes in detail the kickstart file format. A.1. Kickstart file format Kickstart scripts are plain text files that contain keywords recognized by the installation program, which serve as directions for the installation. Any text editor able to save files as ASCII text, such as Gedit or vim on Linux systems or Notepad on Windows systems, can be used to create and edit Kickstart files. The file name of your Kickstart configuration does not matter; however, it is recommended to use a simple name, as you will need to specify this name later in other configuration files or dialogs. Commands Commands are keywords that serve as directions for installation. Each command must be on a single line. Commands can take options. Specifying commands and options is similar to using Linux commands in a shell. Sections Certain special commands that begin with the percent % character start a section. Interpretation of commands in sections is different from commands placed outside sections. Every section must be finished with the %end command. Section types The available sections are: Add-on sections . These sections use the %addon addon_name command. Package selection sections . Starts with %packages . Use it to list packages for installation, including indirect means such as package groups or modules. Script sections . These start with %pre , %pre-install , %post , and %onerror . These sections are not required. Command section The command section is a term used for the commands in the Kickstart file that are not part of any script section or %packages section. Script section count and ordering All sections except the command section are optional and can be present multiple times. When a particular type of script section is to be evaluated, all sections of that type present in the Kickstart are evaluated in order of appearance: two %post sections are evaluated one after another, in the order in which they appear. However, you do not have to specify the various types of script sections in any order: it does not matter if there are %post sections before %pre sections. Comments Kickstart comments are lines starting with the hash # character. These lines are ignored by the installation program. Items that are not required can be omitted. Omitting any required item results in the installation program changing to the interactive mode so that the user can provide an answer to the related item, just as during a regular interactive installation. It is also possible to declare the kickstart script as non-interactive with the cmdline command. In non-interactive mode, any missing answer aborts the installation process. Note If user interaction is needed during kickstart installation in text or graphical mode, enter only the windows where updates are mandatory to complete the installation. Entering spokes might lead to resetting the kickstart configuration. Resetting of the configuration applies specifically to the kickstart commands related to storage after entering the Installation Destination window. A.2. Package selection in Kickstart Kickstart uses sections started by the %packages command for selecting packages to install. You can install packages, groups, environments, module streams, and module profiles this way. A.2.1. Package selection section Use the %packages command to begin a Kickstart section which describes the software packages to be installed. The %packages section must end with the %end command.
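Before looking at the selection syntax in detail, it can help to see the overall shape of a Kickstart file that combines a command section, a %packages section, and a script section. A minimal, illustrative sketch (all values are examples only):

# Command section: one command per line
lang en_US.UTF-8
keyboard us
timezone America/New_York
rootpw --plaintext changeme

# Package selection section
%packages
@^minimal-environment
%end

# Script section
%post
echo "Installed on $(date)" >> /root/install-info.txt
%end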
You can specify packages by environment, group, module stream, module profile, or by their package names. Several environments and groups that contain related packages are defined. See the repository /repodata/*-comps- repository . architecture .xml file on the Red Hat Enterprise Linux 8 Installation DVD for a list of environments and groups. The *-comps- repository . architecture .xml file contains a structure describing available environments (marked by the <environment> tag) and groups (the <group> tag). Each entry has an ID, user visibility value, name, description, and package list. If the group is selected for installation, the packages marked mandatory in the package list are always installed, the packages marked default are installed if they are not specifically excluded elsewhere, and the packages marked optional must be specifically included elsewhere even when the group is selected. You can specify a package group or environment using either its ID (the <id> tag) or name (the <name> tag). If you are not sure what packages should be installed, Red Hat recommends that you select the Minimal Install environment. Minimal Install provides only the packages which are essential for running Red Hat Enterprise Linux 8. This will substantially reduce the chance of the system being affected by a vulnerability. If necessary, additional packages can be added later after the installation. For more details on Minimal Install , see the Installing the Minimum Amount of Packages Required section of the Security Hardening document. The Initial Setup cannot run after a system is installed from a Kickstart file unless a desktop environment and the X Window System were included in the installation and graphical login was enabled. Important To install a 32-bit package on a 64-bit system: specify the --multilib option for the %packages section append the package name with the 32-bit architecture for which the package was built; for example, glibc.i686 A.2.2. Package selection commands These commands can be used within the %packages section of a Kickstart file. Specifying an environment Specify an entire environment to be installed as a line starting with the @^ symbols: This installs all packages which are part of the Infrastructure Server environment. All available environments are described in the repository /repodata/*-comps- repository . architecture .xml file on the Red Hat Enterprise Linux 8 Installation DVD. Only a single environment should be specified in the Kickstart file. If more environments are specified, only the last specified environment is used. Specifying groups Specify groups, one entry to a line, starting with an @ symbol, and then the full group name or group id as given in the *-comps- repository . architecture .xml file. For example: The Core group is always selected - it is not necessary to specify it in the %packages section. Specifying individual packages Specify individual packages by name, one entry to a line. You can use the asterisk character ( * ) as a wildcard in package names. For example: The docbook* entry includes the packages docbook-dtds and docbook-style that match the pattern represented with the wildcard. Specifying profiles of module streams Specify profiles for module streams, one entry to a line, using the syntax for profiles: This installs all packages listed in the specified profile of the module stream. When a module has a default stream specified, you can leave it out. When the default stream is not specified, you must specify it.
When a module stream has a default profile specified, you can leave it out. When the default profile is not specified, you must specify it. Installing a module multiple times with different streams is not possible. Installing multiple profiles of the same module and stream is possible. Modules and groups use the same syntax starting with the @ symbol. When a module and a package group exist with the same name, the module takes precedence. In Red Hat Enterprise Linux 8, modules are present only in the AppStream repository. To list available modules, use the yum module list command on an installed Red Hat Enterprise Linux 8 system. It is also possible to enable module streams using the module Kickstart command and then install packages contained in the module stream by naming them directly. Excluding environments, groups, or packages Use a leading dash ( - ) to specify packages or groups to exclude from the installation. For example: Important Installing all available packages using only * in a Kickstart file is not supported. You can change the default behavior of the %packages section by using several options. Some options work for the entire package selection, while others are used with only specific groups. Additional resources Installing software Installing, managing, and removing user-space components A.2.3. Common package selection options The following options are available for the %packages sections. To use an option, append it to the start of the package selection section. For example: --default Install the default set of packages. This corresponds to the package set which would be installed if no other selections were made in the Package Selection screen during an interactive installation. --excludedocs Do not install any documentation contained within packages. In most cases, this excludes any files normally installed in the /usr/share/doc directory, but the specific files to be excluded depend on individual packages. --ignoremissing Ignore any packages, groups, module streams, module profiles, and environments missing in the installation source, instead of halting the installation to ask if the installation should be aborted or continued. --instLangs= Specify a list of languages to install. This is different from package group level selections. This option does not describe which package groups should be installed; instead, it sets RPM macros controlling which translation files from individual packages should be installed. --multilib Configure the installed system for multilib packages, to allow installing 32-bit packages on a 64-bit system, and install packages specified in this section as such. Normally, on an AMD64 and Intel 64 system, you can install only the x86_64 and the noarch packages. However, with the --multilib option, you can automatically install the 32-bit AMD and the i686 Intel system packages available, if any. This only applies to packages explicitly specified in the %packages section. Packages which are only being installed as dependencies without being specified in the Kickstart file are only installed in architecture versions in which they are needed, even if they are available for more architectures. Users can configure Anaconda to install packages in multilib mode during the installation of the system. Use one of the following options to enable multilib mode: Configure the Kickstart file with the following lines: Add the inst.multilib boot option when booting the installation image.
--nocore Disables installation of the @Core package group which is otherwise always installed by default. Disabling the @Core package group with --nocore should be only used for creating lightweight containers; installing a desktop or server system with --nocore will result in an unusable system. Notes Using -@Core to exclude packages in the @Core package group does not work. The only way to exclude the @Core package group is with the --nocore option. The @Core package group is defined as a minimal set of packages needed for installing a working system. It is not related in any way to core packages as defined in the Package Manifest and Scope of Coverage Details . --excludeWeakdeps Disables installation of packages from weak dependencies. These are packages linked to the selected package set by Recommends and Supplements flags. By default weak dependencies will be installed. --retries= Sets the number of times YUM will attempt to download packages (retries). The default value is 10. This option only applies during the installation, and will not affect YUM configuration on the installed system. --timeout= Sets the YUM timeout in seconds. The default value is 30. This option only applies during the installation, and will not affect YUM configuration on the installed system. A.2.4. Options for specific package groups The options in this list only apply to a single package group. Instead of using them at the %packages command in the Kickstart file, append them to the group name. For example: --nodefaults Only install the group's mandatory packages, not the default selections. --optional Install packages marked as optional in the group definition in the *-comps- repository . architecture .xml file, in addition to installing the default selections. Some package groups, such as Scientific Support , do not have any mandatory or default packages specified - only optional packages. In this case the --optional option must always be used, otherwise no packages from this group will be installed. Important The --nodefaults and --optional options cannot be used together. You can install only mandatory packages during the installation using --nodefaults and install the optional packages on the installed system post installation. A.3. Scripts in Kickstart file A kickstart file can include the following scripts: %pre %pre-install %post This section provides the following details about the scripts: Execution time Types of commands that can be included in the script Purpose of the script Script options A.3.1. %pre script The %pre scripts are run on the system immediately after the Kickstart file has been loaded, but before it is completely parsed and installation begins. Each of these sections must start with %pre and end with %end . The %pre script can be used for activation and configuration of networking and storage devices. It is also possible to run scripts, using interpreters available in the installation environment. Adding a %pre script can be useful if you have networking and storage that needs special configuration before proceeding with the installation, or have a script that, for example, sets up additional logging parameters or environment variables. Debugging problems with %pre scripts can be difficult, so it is recommended only to use a %pre script when necessary. 
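For example, a %pre script can generate partitioning commands at installation time and hand them back to the command section through %include. A sketch; the disk selection logic is illustrative only:

%pre --log=/tmp/ks-pre.log
# Pick the first detected disk and write partitioning directives for %include
disk=$(lsblk -dno NAME | head -n 1)
echo "ignoredisk --only-use=${disk}" > /tmp/part-include
echo "clearpart --all --initlabel --drives=${disk}" >> /tmp/part-include
echo "autopart --type=lvm" >> /tmp/part-include
%end

The command section would then contain %include /tmp/part-include so that the generated directives are parsed as if they had been written in the Kickstart file directly.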
Important The %pre section of Kickstart is executed at the stage of installation which happens after the installer image ( inst.stage2 ) is fetched: that is, after root switches to the installer environment (the installer image) and after the Anaconda installer itself starts. Then the configuration in %pre is applied and can be used to fetch packages from installation repositories configured, for example, by URL in Kickstart. However, it cannot be used to configure the network to fetch the image ( inst.stage2 ) from the network. Commands related to networking, storage, and file systems are available to use in the %pre script, in addition to most of the utilities in the installation environment /sbin and /bin directories. You can access the network in the %pre section. However, the name service has not been configured at this point, so only IP addresses work, not URLs. Note The pre script does not run in the chroot environment. A.3.1.1. %pre script section options The following options can be used to change the behavior of pre-installation scripts. To use an option, append it to the %pre line at the beginning of the script. For example: --interpreter= Allows you to specify a different scripting language, such as Python. Any scripting language available on the system can be used; in most cases, these are /usr/bin/sh , /usr/bin/bash , and /usr/libexec/platform-python . Note that the platform-python interpreter uses Python version 3.6. You must change your Python scripts from RHEL versions for the new path and version. Additionally, platform-python is meant for system tools: Use the python36 package outside the installation environment. For more details about Python in Red Hat Enterprise Linux, see Introduction to Python in Configuring basic system settings . --erroronfail Displays an error and halts the installation if the script fails. The error message will direct you to where the cause of the failure is logged. The installed system might get into an unstable and unbootable state. You can use the inst.nokill option to debug the script. --log= Logs the script's output into the specified log file. For example: A.3.2. %pre-install script The commands in the pre-install script are run after the following tasks are complete: System is partitioned Filesystems are created and mounted under /mnt/sysroot Network has been configured according to any boot options and kickstart commands Each of the %pre-install sections must start with %pre-install and end with %end . The %pre-install scripts can be used to modify the installation, and to add users and groups with guaranteed IDs before package installation. It is recommended to use the %post scripts for any modifications required in the installation. Use the %pre-install script only if the %post script falls short for the required modifications. The pre-install script does not run in a chroot environment. A.3.2.1. %pre-install script section options The following options can be used to change the behavior of pre-install scripts. To use an option, append it to the %pre-install line at the beginning of the script. For example: You can have multiple %pre-install sections, with the same or different interpreters. They are evaluated in their order of appearance in the Kickstart file. --interpreter= Allows you to specify a different scripting language, such as Python. Any scripting language available on the system can be used; in most cases, these are /usr/bin/sh , /usr/bin/bash , and /usr/libexec/platform-python . The platform-python interpreter uses Python version 3.6.
You must change your Python scripts from RHEL versions for the new path and version. Additionally, platform-python is meant for system tools: Use the python36 package outside the installation environment. For more details about Python in Red Hat Enterprise Linux, see Introduction to Python in Configuring basic system settings . --erroronfail Displays an error and halts the installation if the script fails. The error message will direct you to where the cause of the failure is logged. The installed system might get into an unstable and unbootable state. You can use the inst.nokill option to debug the script. --log= Logs the script's output into the specified log file. For example: A.3.3. %post script The %post script is a post-installation script that is run after the installation is complete, but before the system is rebooted for the first time. You can use this section to run tasks such as system subscription. You have the option of adding commands to run on the system once the installation is complete, but before the system is rebooted for the first time. This section must start with %post and end with %end . The %post section is useful for functions such as installing additional software or configuring an additional name server. The post-install script runs in a chroot environment; therefore, tasks such as copying scripts or RPM packages from the installation media do not work by default. You can change this behavior using the --nochroot option as described below. Then the %post script will run in the installation environment, not in chroot on the installed target system. Because the post-install script runs in a chroot environment, most systemctl commands will refuse to perform any action. During execution of the %post section, the installation media must still be inserted. A.3.3.1. %post script section options The following options can be used to change the behavior of post-installation scripts. To use an option, append it to the %post line at the beginning of the script. For example: --interpreter= Allows you to specify a different scripting language, such as Python. For example: Any scripting language available on the system can be used; in most cases, these are /usr/bin/sh , /usr/bin/bash , and /usr/libexec/platform-python . The platform-python interpreter uses Python version 3.6. You must change your Python scripts from RHEL versions for the new path and version. Additionally, platform-python is meant for system tools: Use the python36 package outside the installation environment. For more details about Python in Red Hat Enterprise Linux, see Introduction to Python in Configuring basic system settings . --nochroot Allows you to specify commands that you would like to run outside of the chroot environment. The following example copies the file /etc/resolv.conf to the file system that was just installed. --erroronfail Displays an error and halts the installation if the script fails. The error message will direct you to where the cause of the failure is logged. The installed system might get into an unstable and unbootable state. You can use the inst.nokill option to debug the script. --log= Logs the script's output into the specified log file. The path of the log file must take into account whether or not you use the --nochroot option. For example, without --nochroot : and with --nochroot : A.3.3.2. Example: Mounting NFS in a post-install script This example of a %post section mounts an NFS share and executes a script named runme located at /usr/new-machines/ on the share.
NFS file locking is not supported in Kickstart mode; therefore, the -o nolock option is required. A.3.3.3. Example: Running subscription-manager as a post-install script One of the most common uses of post-installation scripts in Kickstart installations is automatic registration of the installed system using Red Hat Subscription Manager. The following is an example of automatic subscription in a %post script: The subscription-manager command-line script registers a system to a Red Hat Subscription Management server (Customer Portal Subscription Management, Satellite 6, or CloudForms System Engine). This script can also be used to automatically attach the subscriptions that best match the system. When registering to the Customer Portal, use the Red Hat Network login credentials. When registering to Satellite 6 or CloudForms System Engine, you may also need to specify more subscription-manager options like --serverurl , --org , --environment as well as credentials provided by your local administrator. Note that using credentials in the form of an --org and --activationkey combination is a good way to avoid exposing --username and --password values in shared kickstart files. Additional options can be used with the registration command to set a preferred service level for the system and to restrict updates and errata to a specific minor release version of RHEL for customers with Extended Update Support subscriptions that need to stay fixed on an older stream. For more information on using the subscription-manager command, see the Red Hat Knowledgebase solution How do I use subscription-manager in a kickstart file? . A.4. Anaconda configuration section Additional installation options can be configured in the %anaconda section of your Kickstart file. This section controls the behavior of the user interface of the installation system. This section must be placed towards the end of the Kickstart file, after Kickstart commands, and must start with %anaconda and end with %end . Currently, the only command that can be used in the %anaconda section is pwpolicy . Example A.1. Sample %anaconda script The following is an example %anaconda section: This example %anaconda section sets a password policy which requires that the root password be at least 10 characters long, and strictly forbids passwords which do not match this requirement. A.5. Kickstart error handling section Starting with Red Hat Enterprise Linux 7, Kickstart installations run custom scripts when the installation program encounters any fatal error. Example scenarios include an error in a package that has been requested for installation, a failure of VNC to start when specified in the configuration, or an error while scanning storage devices. When such an event occurs, the installation aborts. To analyze these events, the installation program runs all %onerror scripts chronologically as provided in the Kickstart file. The %onerror scripts also run in the event of a traceback. Each %onerror script is required to end with %end . You can enforce the error handler for any error by using inst.cmdline to make every error a fatal error. Error handling sections accept the following options: --erroronfail Displays an error and halts the installation if the script fails. The error message will direct you to where the cause of the failure is logged. The installed system might get into an unstable and unbootable state. You can use the inst.nokill option to debug the script.
--interpreter= Allows you to specify a different scripting language, such as Python. For example: Any scripting language available on the system can be used; in most cases, these are /usr/bin/sh , /usr/bin/bash , and /usr/libexec/platform-python . The platform-python interpreter uses Python version 3.6. You must change your Python scripts from RHEL versions for the new path and version. Additionally, platform-python is meant for system tools: Use the python36 package outside the installation environment. For more details about Python in Red Hat Enterprise Linux, see Introduction to Python in Configuring basic system settings . --log= Logs the script's output into the specified log file. A.6. Kickstart add-on sections Starting with Red Hat Enterprise Linux 7, Kickstart installations support add-ons. These add-ons can expand the basic Kickstart (and Anaconda) functionality in many ways. To use an add-on in your Kickstart file, use the %addon addon_name options command, and finish the command with an %end statement, similar to pre-installation and post-installation script sections. For example, if you want to use the Kdump add-on, which is distributed with Anaconda by default, use the following commands: The %addon command does not include any options of its own - all options are dependent on the actual add-on.
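To illustrate the error handling section described above, the following is a minimal %onerror handler that bundles the installer logs for later inspection (a sketch; the archive path is arbitrary):

%onerror --log=/tmp/ks-onerror.log
# Collect Anaconda logs so they survive for debugging
tar czf /tmp/anaconda-failure-logs.tar.gz /tmp/*.log 2>/dev/null
%end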
[ "%packages @^Infrastructure Server %end", "%packages @X Window System @Desktop @Sound and Video %end", "%packages sqlite curl aspell docbook* %end", "%packages @module:stream/profile %end", "%packages -@Graphical Administration Tools -autofs -ipa*compat %end", "%packages --multilib --ignoremissing", "%packages --multilib --default %end", "%packages @Graphical Administration Tools --optional %end", "%pre --interpreter=/usr/libexec/platform-python -- Python script omitted -- %end", "%pre --log=/tmp/ks-pre.log", "%pre-install --interpreter=/usr/libexec/platform-python -- Python script omitted -- %end", "%pre-install --log=/mnt/sysroot/root/ks-pre.log", "%post --interpreter=/usr/libexec/platform-python -- Python script omitted -- %end", "%post --interpreter=/usr/libexec/platform-python", "%post --nochroot cp /etc/resolv.conf /mnt/sysroot/etc/resolv.conf %end", "%post --log=/root/ks-post.log", "%post --nochroot --log=/mnt/sysroot/root/ks-post.log", "Start of the %post section with logging into /root/ks-post.log %post --log=/root/ks-post.log Mount an NFS share mkdir /mnt/temp mount -o nolock 10.10.0.2:/usr/new-machines /mnt/temp openvt -s -w -- /mnt/temp/runme umount /mnt/temp End of the %post section %end", "%post --log=/root/ks-post.log subscription-manager register [email protected] --password=secret --auto-attach %end", "%anaconda pwpolicy root --minlen=10 --strict %end", "%onerror --interpreter=/usr/libexec/platform-python", "%addon com_redhat_kdump --enable --reserve-mb=auto %end" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/automatically_installing_rhel/kickstart-script-file-format-reference_rhel-installer
Chapter 10. Understanding and creating service accounts
Chapter 10. Understanding and creating service accounts 10.1. Service accounts overview A service account is an OpenShift Container Platform account that allows a component to directly access the API. Service accounts are API objects that exist within each project. Service accounts provide a flexible way to control API access without sharing a regular user's credentials. When you use the OpenShift Container Platform CLI or web console, your API token authenticates you to the API. You can associate a component with a service account so that it can access the API without using a regular user's credentials. For example, service accounts can allow: Replication controllers to make API calls to create or delete pods. Applications inside containers to make API calls for discovery purposes. External applications to make API calls for monitoring or integration purposes. Each service account's user name is derived from its project and name: system:serviceaccount:<project>:<name> Every service account is also a member of two groups: Group Description system:serviceaccounts Includes all service accounts in the system. system:serviceaccounts:<project> Includes all service accounts in the specified project. Each service account automatically contains two secrets: An API token Credentials for the OpenShift Container Registry The generated API token and registry credentials do not expire, but you can revoke them by deleting the secret. When you delete the secret, a new one is automatically generated to take its place. 10.2. Creating service accounts You can create a service account in a project and grant it permissions by binding it to a role. Procedure Optional: To view the service accounts in the current project: USD oc get sa Example output NAME SECRETS AGE builder 2 2d default 2 2d deployer 2 2d To create a new service account in the current project: USD oc create sa <service_account_name> 1 1 To create a service account in a different project, specify -n <project_name> . Example output serviceaccount "robot" created Tip You can alternatively apply the following YAML to create the service account: apiVersion: v1 kind: ServiceAccount metadata: name: <service_account_name> namespace: <current_project> Optional: View the secrets for the service account: USD oc describe sa robot Example output Name: robot Namespace: project1 Labels: <none> Annotations: <none> Image pull secrets: robot-dockercfg-qzbhb Mountable secrets: robot-dockercfg-qzbhb Tokens: robot-token-f4khf Events: <none> 10.3. Examples of granting roles to service accounts You can grant roles to service accounts in the same way that you grant roles to a regular user account. You can modify the service accounts for the current project. For example, to add the view role to the robot service account in the top-secret project: USD oc policy add-role-to-user view system:serviceaccount:top-secret:robot Tip You can alternatively apply the following YAML to add the role: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: view namespace: top-secret roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: view subjects: - kind: ServiceAccount name: robot namespace: top-secret You can also grant access to a specific service account in a project.
For example, from the project to which the service account belongs, use the -z flag and specify the <service_account_name> USD oc policy add-role-to-user <role_name> -z <service_account_name> Important If you want to grant access to a specific service account in a project, use the -z flag. Using this flag helps prevent typos and ensures that access is granted to only the specified service account. Tip You can alternatively apply the following YAML to add the role: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: <rolebinding_name> namespace: <current_project_name> roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: <role_name> subjects: - kind: ServiceAccount name: <service_account_name> namespace: <current_project_name> To modify a different namespace, you can use the -n option to indicate the project namespace it applies to, as shown in the following examples. For example, to allow all service accounts in all projects to view resources in the my-project project: USD oc policy add-role-to-group view system:serviceaccounts -n my-project Tip You can alternatively apply the following YAML to add the role: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: view namespace: my-project roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: view subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:serviceaccounts To allow all service accounts in the managers project to edit resources in the my-project project: USD oc policy add-role-to-group edit system:serviceaccounts:managers -n my-project Tip You can alternatively apply the following YAML to add the role: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: edit namespace: my-project roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: edit subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:serviceaccounts:managers
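After a service account exists and holds the required role bindings, a workload uses it by naming it in the pod specification. A minimal sketch that reuses the robot account from the examples above (the image and command are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: api-client
spec:
  serviceAccountName: robot
  containers:
  - name: main
    image: registry.access.redhat.com/ubi8/ubi-minimal
    command: ["sleep", "infinity"]

Processes in the pod can then authenticate to the API with the token mounted for the robot service account.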
[ "system:serviceaccount:<project>:<name>", "oc get sa", "NAME SECRETS AGE builder 2 2d default 2 2d deployer 2 2d", "oc create sa <service_account_name> 1", "serviceaccount \"robot\" created", "apiVersion: v1 kind: ServiceAccount metadata: name: <service_account_name> namespace: <current_project>", "oc describe sa robot", "Name: robot Namespace: project1 Labels: <none> Annotations: <none> Image pull secrets: robot-dockercfg-qzbhb Mountable secrets: robot-dockercfg-qzbhb Tokens: robot-token-f4khf Events: <none>", "oc policy add-role-to-user view system:serviceaccount:top-secret:robot", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: view namespace: top-secret roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: view subjects: - kind: ServiceAccount name: robot namespace: top-secret", "oc policy add-role-to-user <role_name> -z <service_account_name>", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: <rolebinding_name> namespace: <current_project_name> roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: <role_name> subjects: - kind: ServiceAccount name: <service_account_name> namespace: <current_project_name>", "oc policy add-role-to-group view system:serviceaccounts -n my-project", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: view namespace: my-project roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: view subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:serviceaccounts", "oc policy add-role-to-group edit system:serviceaccounts:managers -n my-project", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: edit namespace: my-project roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: edit subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:serviceaccounts:managers" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/authentication_and_authorization/understanding-and-creating-service-accounts
Chapter 1. Overview of the OpenShift Data Foundation update process
Chapter 1. Overview of the OpenShift Data Foundation update process This chapter helps you to upgrade between the minor releases and z-streams for all Red Hat OpenShift Data Foundation deployments (Internal, Internal-Attached, and External). The upgrade process remains the same for all deployments. You can upgrade OpenShift Data Foundation and its components, either between minor releases like 4.12 and 4.13, or between z-stream updates like 4.13.0 and 4.13.1, by enabling automatic updates (if this was not done during operator installation) or by performing manual updates. When a new z-stream release becomes available, the upgrade process triggers automatically if the update strategy was set to Automatic. You also need to upgrade the different parts of Red Hat OpenShift Data Foundation in the following order for both internal and external mode deployments: Update OpenShift Container Platform according to the Updating clusters documentation for OpenShift Container Platform. Update Red Hat OpenShift Data Foundation. To prepare a disconnected environment for updates , see Operators guide to using Operator Lifecycle Manager on restricted networks to be able to update OpenShift Data Foundation as well as Local Storage Operator when in use. For updating between minor releases , see Updating Red Hat OpenShift Data Foundation 4.12 to 4.13 . For updating between z-stream releases , see Updating Red Hat OpenShift Data Foundation 4.13.x to 4.13.y . For updating external mode deployments , you must also perform the steps from section Updating the Red Hat OpenShift Data Foundation external secret . If you use local storage, then update the Local Storage operator . See Checking for Local Storage Operator deployments if you are unsure. Important If you have an existing setup of OpenShift Data Foundation 4.12 with disaster recovery (DR) enabled, ensure that you update all clusters in the environment at the same time and avoid updating only a single cluster. This avoids potential issues and maintains the best possible compatibility. It is also important to maintain consistency across all OpenShift Data Foundation DR instances. After the upgrade, you must run step 1 of the workaround for BZ#2215462 as documented in the DR upgrade Known issues section of Release notes . Update considerations Review the following important considerations before you begin. The Red Hat OpenShift Container Platform version must be the same as the Red Hat OpenShift Data Foundation version. See the Interoperability Matrix for more information about supported combinations of OpenShift Container Platform and Red Hat OpenShift Data Foundation. To know whether your cluster was deployed in internal or external mode, refer to the knowledgebase article on How to determine if ODF cluster has storage in internal or external mode . The Local Storage Operator is fully supported only when the Local Storage Operator version matches the Red Hat OpenShift Container Platform version. The flexible scaling feature is available only in new deployments of OpenShift Data Foundation. For more information, see Scaling storage guide .
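Before you begin, it can help to confirm the versions currently in place, because the OpenShift Container Platform and OpenShift Data Foundation versions must match. A quick sketch, assuming the default openshift-storage namespace:

$ oc get clusterversion
$ oc get csv -n openshift-storage

Compare the reported versions against the Interoperability Matrix before proceeding with the update.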
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/updating_openshift_data_foundation/overview-of-the-openshift-data-foundation-update-process_rhodf
Chapter 6. Generic ephemeral volumes
Chapter 6. Generic ephemeral volumes 6.1. Overview Generic ephemeral volumes are a type of ephemeral volume that can be provided by all storage drivers that support persistent volumes and dynamic provisioning. Generic ephemeral volumes are similar to emptyDir volumes in that they provide a per-pod directory for scratch data, which is usually empty after provisioning. Generic ephemeral volumes are specified inline in the pod spec and follow the pod's lifecycle. They are created and deleted along with the pod. Generic ephemeral volumes have the following features: Storage can be local or network-attached. Volumes can have a fixed size that pods are not able to exceed. Volumes might have some initial data, depending on the driver and parameters. Typical operations on volumes are supported, assuming that the driver supports them, including snapshotting, cloning, resizing, and storage capacity tracking. Note Generic ephemeral volumes do not support offline snapshots and resize. Due to this limitation, the following Container Storage Interface (CSI) drivers do not support the following features for generic ephemeral volumes: Azure Disk CSI driver does not support resize. Cinder CSI driver does not support snapshot. 6.2. Lifecycle and persistent volume claims The parameters for a volume claim are allowed inside a volume source of a pod. Labels, annotations, and the whole set of fields for persistent volume claims (PVCs) are supported. When such a pod is created, the ephemeral volume controller then creates an actual PVC object (from the template shown in the Creating generic ephemeral volumes procedure) in the same namespace as the pod, and ensures that the PVC is deleted when the pod is deleted. This triggers volume binding and provisioning in one of two ways: Either immediately, if the storage class uses immediate volume binding. With immediate binding, the scheduler is forced to select a node that has access to the volume after it is available. When the pod is tentatively scheduled onto a node ( WaitForFirstConsumer volume binding mode). This volume binding option is recommended for generic ephemeral volumes because then the scheduler can choose a suitable node for the pod. In terms of resource ownership, a pod that has generic ephemeral storage is the owner of the PVCs that provide that ephemeral storage. When the pod is deleted, the Kubernetes garbage collector deletes the PVC, which then usually triggers deletion of the volume because the default reclaim policy of storage classes is to delete volumes. You can create quasi-ephemeral local storage by using a storage class with a reclaim policy of retain: the storage outlives the pod, and in this case, you must ensure that volume clean-up happens separately. While these PVCs exist, they can be used like any other PVC. In particular, they can be referenced as a data source in volume cloning or snapshotting. The PVC object also holds the current status of the volume. Additional resources Creating generic ephemeral volumes 6.3. Security You can enable the generic ephemeral volume feature to allow users who can create pods to also create persistent volume claims (PVCs) indirectly. This feature works even if these users do not have permission to create PVCs directly. Cluster administrators must be aware of this. If this does not fit your security model, use an admission webhook that rejects objects such as pods that have a generic ephemeral volume.
The normal namespace quota for PVCs still applies, so even if users are allowed to use this new mechanism, they cannot use it to circumvent other policies. 6.4. Persistent volume claim naming Automatically created persistent volume claims (PVCs) are named by a combination of the pod name and the volume name, with a hyphen (-) in the middle. This naming convention also introduces a potential conflict between different pods, and between pods and manually created PVCs. For example, pod-a with volume scratch and pod with volume a-scratch both end up with the same PVC name, pod-a-scratch . Such conflicts are detected, and a PVC is only used for an ephemeral volume if it was created for the pod. This check is based on the ownership relationship. An existing PVC is not overwritten or modified, but this does not resolve the conflict. Without the right PVC, a pod cannot start. Important Be careful when naming pods and volumes inside the same namespace so that naming conflicts do not occur. 6.5. Creating generic ephemeral volumes Procedure Create the pod object definition and save it to a file. Include the generic ephemeral volume information in the file. my-example-pod-with-generic-vols.yaml kind: Pod apiVersion: v1 metadata: name: my-app spec: containers: - name: my-frontend image: busybox:1.28 volumeMounts: - mountPath: "/mnt/storage" name: data command: [ "sleep", "1000000" ] volumes: - name: data 1 ephemeral: volumeClaimTemplate: metadata: labels: type: my-app-ephvol spec: accessModes: [ "ReadWriteOnce" ] storageClassName: "gp2-csi" resources: requests: storage: 1Gi 1 Generic ephemeral volume claim.
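Once the pod above is created, the ephemeral volume controller creates the backing PVC automatically, named from the pod and volume names as described earlier (my-app plus data). You can verify it with a command such as the following (output shortened; the volume name and capacity shown are illustrative):

$ oc get pvc my-app-data
NAME          STATUS   VOLUME        CAPACITY   ACCESS MODES   STORAGECLASS
my-app-data   Bound    pvc-0fc5...   1Gi        RWO            gp2-csi

Deleting the pod deletes this PVC through the ownership relationship.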
[ "kind: Pod apiVersion: v1 metadata: name: my-app spec: containers: - name: my-frontend image: busybox:1.28 volumeMounts: - mountPath: \"/mnt/storage\" name: data command: [ \"sleep\", \"1000000\" ] volumes: - name: data 1 ephemeral: volumeClaimTemplate: metadata: labels: type: my-app-ephvol spec: accessModes: [ \"ReadWriteOnce\" ] storageClassName: \"gp2-csi\" resources: requests: storage: 1Gi" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/storage/generic-ephemeral-volumes
Chapter 4. Using virtual list view control to request a contiguous subset of a large search result
Chapter 4. Using virtual list view control to request a contiguous subset of a large search result Directory Server supports the LDAP virtual list view control. This control enables an LDAP client to request a contiguous subset of a large search result. For example, you have stored an address book with 100,000 entries in Directory Server. By default, a query for all entries returns all entries at once. This is a resource-intensive and time-consuming operation, and clients often do not require the whole data set because, if the user scrolls through the results, only a partial set is visible. However, if the client uses the VLV control, the server only returns a subset and, for example, if the user scrolls in the client application, the server returns more entries. This reduces the load on the server, and the client does not need to store and process all data at once. VLV also improves the performance of server-sorted searches when all search parameters are fixed. Directory Server pre-computes the search results within the VLV index. Therefore, the VLV index is much more efficient than retrieving the results and sorting them afterwards. In Directory Server, the VLV control is always available. However, if you use it in a large directory, a VLV index, also called a browsing index, can significantly improve the speed. Directory Server does not maintain VLV indexes for attributes, such as for standard indexes. The server generates VLV indexes dynamically based on attributes set in entries and the location of those entries in the directory tree. Unlike standard entries, VLV entries are special entries in the database. 4.1. How the VLV control works in ldapsearch commands Typically, you use the virtual list view (VLV) feature in LDAP client applications. However, for example for testing purposes, you can use the ldapsearch utility to request only partial results. To use the VLV feature in ldapsearch commands, specify the -E option for both the sss (server-side sorting) and vlv search extensions: # ldapsearch ... -E 'sss= attribute_list ' -E 'vlv= query_options ' The sss search extension has the following syntax: [!]sss=[-]<attr[:OID]>[/[-]<attr[:OID]>...] The vlv search extension has the following syntax: [!]vlv=<before>/<after>(/<offset>/<count>|:<value>) before sets the number of entries returned before the targeted one. after sets the number of entries returned after the targeted one. offset , count , and value help to determine the target entry. If you set value , the target entry is the first one having its first sorting attribute starting with the value. Otherwise, you set count to 0 , and the target entry is determined by the offset value (starting from 1). If the count value is higher than 0 , the target entry is determined by the ratio offset * number of entries / count . Example 4.1. Output of an ldapsearch command with VLV search extension The following command searches in ou=People,dc=example,dc=com . The server then sorts the results by the cn attribute and returns the uid attributes of the 70th entry together with one entry before and two entries after the offset.
# ldapsearch -D " cn=Directory Manager " -W -H ldap://server.example.com -b "ou=People,dc=example,dc=com" -s one -x -E 'sss=cn' -E 'vlv=1/2/70/0' uid # user069, People, example.com dn: uid=user069,ou=People,dc=example,dc=com uid: user069 # user070, People, example.com dn: uid=user070,ou=People,dc=example,dc=com uid: user070 # user071, People, example.com dn: uid=user071,ou=People,dc=example,dc=com uid: user071 # user072, People, example.com dn: uid=user072,ou=People,dc=example,dc=com uid: user072 # search result search: 2 result: 0 Success control: 1.2.840.113556.1.4.474 false MIQAAAADCgEA sortResult: (0) Success control: 2.16.840.1.113730.3.4.10 false MIQAAAALAgFGAgMAnaQKAQA= vlvResult: pos=70 count=40356 context= (0) Success # numResponses: 5 # numEntries: 4 Press [before/after(/offset/count|:value)] Enter for the window. Additional resources The -E parameter description in the ldapsearch(1) man page. 4.2. Enabling unauthenticated users to use the VLV control By default, the access control instruction (ACI) in the oid=2.16.840.1.113730.3.4.9,cn=features,cn=config entry enables only authenticated users to use the VLV control. To enable also non-authenticated users to use the VLV control, update the ACI by changing userdn = "ldap:///all" to userdn = "ldap:///anyone" Procedure Update the ACI in oid=2.16.840.1.113730.3.4.9,cn=features,cn=config : # ldapmodify -D " cn=Directory Manager " -W -H ldap://server.example.com -x dn: oid=2.16.840.1.113730.3.4.9,cn=features,cn=config changetype: modify replace: aci aci: (targetattr != "aci")(version 3.0; acl "VLV Request Control"; allow( read, search, compare, proxy ) userdn = "ldap:///anyone";) Verification Perform a query with VLV control not specify a bind user: # ldapsearch -H ldap://server.example.com -b " ou=People,dc=example,dc=com " -s one -x -E 'sss= cn ' -E 'vlv= 1/2/70/0 ' uid This command requires that the server allows anonymous binds. If the command succeeds but returns no entries, run the query again with a bind user to ensure that the query works when using authentication. Additional resources Disabling anonymous binds 4.3. Creating a VLV index using the command line to improve the speed of VLV queries Follow this procedure to create a virtual list view (VLV) index, also called browsing index, for entries in ou=People,dc=example,dc=com that contain a mail attribute and have the objectClass attribute set to person . Prerequisites Your client applications use the VLV control. Client applications require to query a contiguous subset of a large search result. The directory contains a large number of entries. Procedure Create the VLV search entry: # dsconf -D " cn=Directory Manager " ldap://server.example.com backend vlv-index add-search --name " VLV People " --search-base " ou=People,dc=example,dc=com " --search-filter " (&(objectClass=person)(mail=*)) " --search-scope 2 userRoot This command uses the following options: --name sets the name of the search entry. This can be any name. --search-base sets the base DN for the VLV index. Directory Server creates the VLV index on this entry. --search-scope sets the scope of the search to run for entries in the VLV index. You can set this option to 0 (base search), 1 (one-level search), or 2 (subtree search). --search-filter sets the filter Directory Server applies when it creates the VLV index. Only entries that match this filter become part of the index. userRoot is the name of the database in which to create the entry. 
Create the index entry: # dsconf -D " cn=Directory Manager " ldap://server.example.com backend vlv-index add-index --index-name " VLV People - cn sn " --parent-name " VLV People " --sort " cn sn " --index-it userRoot This command uses the following options: --index-name sets the name of the index entry. This can be any name. --parent-name sets the name of the VLV search entry and must match the name you set in the previous step. --sort sets the attribute names and their sort order. Separate the attributes by space. --index-it causes Directory Server to automatically start an indexing task after the entry is created. userRoot is the name of the database in which to create the entry. Verification Verify the successful creation of the VLV index in the /var/log/dirsrv/slapd- instance_name /errors file: [26/Nov/2021:11:32:59.001988040 +0100] - INFO - bdb_db2index - userroot: Indexing VLV: VLV People - cn sn [26/Nov/2021:11:32:59.507092414 +0100] - INFO - bdb_db2index - userroot: Indexed 1000 entries (2%). ... [26/Nov/2021:11:33:21.450916820 +0100] - INFO - bdb_db2index - userroot: Indexed 40000 entries (98%). [26/Nov/2021:11:33:21.671564324 +0100] - INFO - bdb_db2index - userroot: Finished indexing. Use the VLV control in an ldapsearch command to query only specific records from the directory: # ldapsearch -D " cn=Directory Manager " -W -H ldap://server.example.com -b "ou=People,dc=example,dc=com" -s one -x -E 'sss=cn' -E 'vlv=1/2/70/0' uid # user069, People, example.com dn: uid=user069,ou=People,dc=example,dc=com uid: user069 # user070, People, example.com dn: uid=user070,ou=People,dc=example,dc=com uid: user070 # user071, People, example.com dn: uid=user071,ou=People,dc=example,dc=com uid: user071 # user072, People, example.com dn: uid=user072,ou=People,dc=example,dc=com uid: user072 This example assumes you have entries consecutively named uid=user001 to at least uid=user072 in ou=People,dc=example,dc=com . Additional resources The -E parameter description in the ldapsearch(1) man page. The VLV control in ldapsearch commands 4.4. Creating a VLV index using the web console to improve the speed of VLV queries Follow this procedure to create a virtual list view (VLV) index, also called a browsing index, for entries in ou=People,dc=example,dc=com that contain a mail attribute and have the objectClass attribute set to person . Prerequisites You are logged in to the instance in the web console. Your client applications use the VLV control. Client applications need to query a contiguous subset of a large search result. The directory contains a large number of entries. Procedure Navigate to Database Suffixes dc=example,dc=com VLV Indexes . Click Create VLV Index , and fill the fields: VLV Index Name : The name of the search entry. This can be any name. Search base : The base DN for the VLV index. Directory Server creates the VLV index on this entry. Search Filter : The filter Directory Server applies when it creates the VLV index. Only entries that match this filter become part of the index. Search Scope : The scope of the search to run for entries in the VLV index. Click Save VLV Index . Click Create Sort Index . Enter the attribute names, and select Reindex After Saving . Click Create Sort Index . Verification Navigate to Monitoring Logging Errors Log and verify the successful creation of the VLV index: [26/Nov/2021:11:32:59.001988040 +0100] - INFO - bdb_db2index - userroot: Indexing VLV: VLV People - cn sn [26/Nov/2021:11:32:59.507092414 +0100] - INFO - bdb_db2index - userroot: Indexed 1000 entries (2%). ...
[26/Nov/2021:11:33:21.450916820 +0100] - INFO - bdb_db2index - userroot: Indexed 40000 entries (98%). [26/Nov/2021:11:33:21.671564324 +0100] - INFO - bdb_db2index - userroot: Finished indexing. Use the VLV control in an ldapsearch command to query only specific records from the directory: # ldapsearch -D " cn=Directory Manager " -W -H ldap://server.example.com -b "ou=People,dc=example,dc=com" -s one -x -E 'sss=cn' -E 'vlv=1/2/70/0' uid # user069, People, example.com dn: uid=user069,ou=People,dc=example,dc=com uid: user069 # user070, People, example.com dn: uid=user070,ou=People,dc=example,dc=com uid: user070 # user071, People, example.com dn: uid=user071,ou=People,dc=example,dc=com uid: user071 # user072, People, example.com dn: uid=user072,ou=People,dc=example,dc=com uid: user072 This example assumes you have entries consecutively named uid=user001 to at least uid=user072 in ou=People,dc=example,dc=com . Additional resources The -E parameter description in the ldapsearch(1) man page. The VLV control in ldapsearch commands
[ "ldapsearch ... -E 'sss= attribute_list ' -E 'vlv= query_options '", "[!]sss=[-]<attr[:OID]>[/[-]<attr[:OID]>...]", "[!]vlv=<before>/<after>(/<offset>/<count>|:<value>)", "ldapsearch -D \" cn=Directory Manager \" -W -H ldap://server.example.com -b \"ou=People,dc=example,dc=com\" -s one -x -E 'sss=cn' -E 'vlv=1/2/70/0' uid user069, People, example.com dn: uid=user069,ou=People,dc=example,dc=com uid: user069 user070, People, example.com dn: uid=user070,ou=People,dc=example,dc=com uid: user070 user071, People, example.com dn: uid=user071,ou=People,dc=example,dc=com uid: user071 user072, People, example.com dn: uid=user072,ou=People,dc=example,dc=com uid: user072 search result search: 2 result: 0 Success control: 1.2.840.113556.1.4.474 false MIQAAAADCgEA sortResult: (0) Success control: 2.16.840.1.113730.3.4.10 false MIQAAAALAgFGAgMAnaQKAQA= vlvResult: pos=70 count=40356 context= (0) Success numResponses: 5 numEntries: 4 Press [before/after(/offset/count|:value)] Enter for the next window.", "ldapmodify -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x dn: oid=2.16.840.1.113730.3.4.9,cn=features,cn=config changetype: modify replace: aci aci: (targetattr != \"aci\")(version 3.0; acl \"VLV Request Control\"; allow( read, search, compare, proxy ) userdn = \"ldap:///anyone\";)", "ldapsearch -H ldap://server.example.com -b \" ou=People,dc=example,dc=com \" -s one -x -E 'sss= cn ' -E 'vlv= 1/2/70/0 ' uid", "dsconf -D \" cn=Directory Manager \" ldap://server.example.com backend vlv-index add-search --name \" VLV People \" --search-base \" ou=People,dc=example,dc=com \" --search-filter \" (&(objectClass=person)(mail=*)) \" --search-scope 2 userRoot", "dsconf -D \" cn=Directory Manager \" ldap://server.example.com backend vlv-index add-index --index-name \" VLV People - cn sn \" --parent-name \" VLV People \" --sort \" cn sn \" --index-it userRoot", "[26/Nov/2021:11:32:59.001988040 +0100] - INFO - bdb_db2index - userroot: Indexing VLV: VLV People - cn sn [26/Nov/2021:11:32:59.507092414 +0100] - INFO - bdb_db2index - userroot: Indexed 1000 entries (2%). [26/Nov/2021:11:33:21.450916820 +0100] - INFO - bdb_db2index - userroot: Indexed 40000 entries (98%). [26/Nov/2021:11:33:21.671564324 +0100] - INFO - bdb_db2index - userroot: Finished indexing.", "ldapsearch -D \" cn=Directory Manager \" -W -H ldap://server.example.com -b \"ou=People,dc=example,dc=com\" -s one -x -E 'sss=cn' -E 'vlv=1/2/70/0' uid user069, People, example.com dn: uid=user069,ou=People,dc=example,dc=com cn: user069 user070, People, example.com dn: uid=user070,ou=People,dc=example,dc=com cn: user070 user071, People, example.com dn: uid=user071,ou=People,dc=example,dc=com cn: user071 user072, People, example.com dn: uid=user072,ou=People,dc=example,dc=com cn: user072", "[26/Nov/2021:11:32:59.001988040 +0100] - INFO - bdb_db2index - userroot: Indexing VLV: VLV People - cn sn [26/Nov/2021:11:32:59.507092414 +0100] - INFO - bdb_db2index - userroot: Indexed 1000 entries (2%). [26/Nov/2021:11:33:21.450916820 +0100] - INFO - bdb_db2index - userroot: Indexed 40000 entries (98%). 
[26/Nov/2021:11:33:21.671564324 +0100] - INFO - bdb_db2index - userroot: Finished indexing.", "ldapsearch -D \" cn=Directory Manager \" -W -H ldap://server.example.com -b \"ou=People,dc=example,dc=com\" -s one -x -E 'sss=cn' -E 'vlv=1/2/70/0' uid user069, People, example.com dn: uid=user069,ou=People,dc=example,dc=com cn: user069 user070, People, example.com dn: uid=user070,ou=People,dc=example,dc=com cn: user070 user071, People, example.com dn: uid=user071,ou=People,dc=example,dc=com cn: user071 user072, People, example.com dn: uid=user072,ou=People,dc=example,dc=com cn: user072" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/managing_indexes/assembly_using-the-virtual-list-view-control-to-request-a-contiguous-subset-of-a-large-search-result_managing-indexes
Chapter 6. alarm
Chapter 6. alarm This chapter describes the commands under the alarm command. 6.1. alarm create Create an alarm Usage: Table 6.1. Command arguments Value Summary -h, --help Show this help message and exit --name <NAME> Name of the alarm -t <TYPE>, --type <TYPE> Type of alarm, should be one of: prometheus, event, composite, threshold, gnocchi_resources_threshold, gnocchi_aggregation_by_metrics_threshold, gnocchi_aggregation_by_resources_threshold, loadbalancer_member_health. --project-id <PROJECT_ID> Project to associate with alarm (configurable by admin users only) --user-id <USER_ID> User to associate with alarm (configurable by admin users only) --description <DESCRIPTION> Free text description of the alarm --state <STATE> State of the alarm, one of: [ ok , alarm , insufficient data ] --severity <SEVERITY> Severity of the alarm, one of: [ low , moderate , critical ] --enabled {True|False} True if alarm evaluation is enabled --alarm-action <Webhook URL> Url to invoke when state transitions to alarm. may be used multiple times --ok-action <Webhook URL> Url to invoke when state transitions to ok. may be used multiple times --insufficient-data-action <Webhook URL> Url to invoke when state transitions to insufficient data. May be used multiple times --time-constraint <Time Constraint> Only evaluate the alarm if the time at evaluation is within this time constraint. Start point(s) of the constraint are specified with a cron expression, whereas its duration is given in seconds. Can be specified multiple times for multiple time constraints, format is: name=<CONSTRAINT_NAME>;start=<CRON>;duration=<SECONDS>;[description=<DESCRIPTION>;[timezone=<IANA Timezone>]] --repeat-actions {True|False} True if actions should be repeatedly notified while alarm remains in target state Table 6.2. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 6.3. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 6.4. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 6.5. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. Table 6.6. common alarm rules Value Summary --query <QUERY> For alarms of type threshold or event: key[op]data_type::value; list. data_type is optional, but if supplied must be string, integer, float, or boolean. For alarms of type gnocchi_aggregation_by_resources_threshold: need to specify a complex query json string, like: {"and": [{"=": {"ended_at": null}}, ... ]}. For alarms of type prometheus this should be valid PromQL query. --comparison-operator <OPERATOR> Operator to compare with, one of: [ lt , le , eq , ne , ge , gt ] --evaluation-periods <EVAL_PERIODS> Number of periods to evaluate over --threshold <THRESHOLD> Threshold to evaluate against. Table 6.7. event alarm Value Summary --event-type <EVENT_TYPE> Event type to evaluate against Table 6.8.
threshold alarm Value Summary -m <METER NAME>, --meter-name <METER NAME> Meter to evaluate against --period <PERIOD> Length of each period (seconds) to evaluate over. --statistic <STATISTIC> Statistic to evaluate, one of: [ max , min , avg , sum , count ] Table 6.9. common gnocchi alarm rules Value Summary --granularity <GRANULARITY> The time range in seconds over which to query. --aggregation-method <AGGR_METHOD> The aggregation_method to compare to the threshold. --metric <METRIC>, --metrics <METRIC> The metric id or name depending on the alarm type Table 6.10. gnocchi resource threshold alarm Value Summary --resource-type <RESOURCE_TYPE> The type of resource. --resource-id <RESOURCE_ID> The id of a resource. Table 6.11. composite alarm Value Summary --composite-rule <COMPOSITE_RULE> Composite threshold rule with json format, the form can be a nested dict which combine threshold/gnocchi rules by "and", "or". For example, the form is like: {"or":[RULE1, RULE2, {"and": [RULE3, RULE4]}]}, The RULEx can be basic threshold rules but must include a "type" field, like this: {"threshold": 0.8,"meter_name":"cpu_util","type":"threshold"} Table 6.12. loadbalancer member health alarm Value Summary --stack-id <STACK_NAME_OR_ID> Name or id of the root / top level heat stack containing the loadbalancer pool and members. An update will be triggered on the root Stack if an unhealthy member is detected in the loadbalancer pool. --pool-id <LOADBALANCER_POOL_NAME_OR_ID> Name or id of the loadbalancer pool for which the health of each member will be evaluated. --autoscaling-group-id <AUTOSCALING_GROUP_NAME_OR_ID> Id of the heat autoscaling group that contains the loadbalancer members. Unhealthy members will be marked as such before an update is triggered on the root stack. 6.2. alarm delete Delete an alarm Usage: Table 6.13. Positional arguments Value Summary <ALARM ID or NAME> Id or name of an alarm. Table 6.14. Command arguments Value Summary -h, --help Show this help message and exit --name <NAME> Name of the alarm 6.3. alarm-history search Show history for all alarms based on query Usage: Table 6.15. Command arguments Value Summary -h, --help Show this help message and exit --query QUERY Rich query supported by aodh, e.g. project_id!=my-id user_id=foo or user_id=bar Table 6.16. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 6.17. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 6.18. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 6.19. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 6.4. alarm-history show Show history for an alarm Usage: Table 6.20.
Positional arguments Value Summary <alarm-id> Id of an alarm Table 6.21. Command arguments Value Summary -h, --help Show this help message and exit --limit <LIMIT> Number of resources to return (default is server default) --marker <MARKER> Last item of the listing. return the results after this value, the supported marker is event_id. --sort <SORT_KEY:SORT_DIR> Sort of resource attribute. e.g. timestamp:desc Table 6.22. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 6.23. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 6.24. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 6.25. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 6.5. alarm list List alarms Usage: Table 6.26. Command arguments Value Summary -h, --help Show this help message and exit --query QUERY Rich query supported by aodh, e.g. project_id!=my-id user_id=foo or user_id=bar --filter <KEY1=VALUE1;KEY2=VALUE2... > Filter parameters to apply on returned alarms. --limit <LIMIT> Number of resources to return (default is server default) --marker <MARKER> Last item of the listing. return the results after this value, the supported marker is alarm_id. --sort <SORT_KEY:SORT_DIR> Sort of resource attribute, e.g. name:asc Table 6.27. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 6.28. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 6.29. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 6.30. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 6.6. alarm quota set Set quota for a project Usage: Table 6.31. Positional arguments Value Summary project Project id. Table 6.32.
Command arguments Value Summary -h, --help Show this help message and exit --alarm ALARM New value for the alarm quota. value -1 means unlimited. Table 6.33. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 6.34. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 6.35. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 6.36. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 6.7. alarm quota show Show quota for a project Usage: Table 6.37. Command arguments Value Summary -h, --help Show this help message and exit --project PROJECT Project id. if not specified, get quota for the current project. Table 6.38. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 6.39. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 6.40. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 6.41. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 6.8. alarm show Show an alarm Usage: Table 6.42. Positional arguments Value Summary <ALARM ID or NAME> Id or name of an alarm. Table 6.43. Command arguments Value Summary -h, --help Show this help message and exit --name <NAME> Name of the alarm Table 6.44. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 6.45. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 6.46. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 6.47. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 6.9. alarm state get Get state of an alarm Usage: Table 6.48. Positional arguments Value Summary <ALARM ID or NAME> Id or name of an alarm. Table 6.49. Command arguments Value Summary -h, --help Show this help message and exit --name <NAME> Name of the alarm Table 6.50.
Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 6.51. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 6.52. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 6.53. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 6.10. alarm state set Set state of an alarm Usage: Table 6.54. Positional arguments Value Summary <ALARM ID or NAME> Id or name of an alarm. Table 6.55. Command arguments Value Summary -h, --help Show this help message and exit --name <NAME> Name of the alarm --state <STATE> State of the alarm, one of: [ ok , alarm , insufficient data ] Table 6.56. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 6.57. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 6.58. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 6.59. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 6.11. alarm update Update an alarm Usage: Table 6.60. Positional arguments Value Summary <ALARM ID or NAME> Id or name of an alarm. Table 6.61. Command arguments Value Summary -h, --help Show this help message and exit --name <NAME> Name of the alarm -t <TYPE>, --type <TYPE> Type of alarm, should be one of: prometheus, event, composite, threshold, gnocchi_resources_threshold, gnocchi_aggregation_by_metrics_threshold, gnocchi_aggregation_by_resources_threshold, loadbalancer_member_health. --project-id <PROJECT_ID> Project to associate with alarm (configurable by admin users only) --user-id <USER_ID> User to associate with alarm (configurable by admin users only) --description <DESCRIPTION> Free text description of the alarm --state <STATE> State of the alarm, one of: [ ok , alarm , insufficient data ] --severity <SEVERITY> Severity of the alarm, one of: [ low , moderate , critical ] --enabled {True|False} True if alarm evaluation is enabled --alarm-action <Webhook URL> Url to invoke when state transitions to alarm. may be used multiple times --ok-action <Webhook URL> Url to invoke when state transitions to ok. may be used multiple times --insufficient-data-action <Webhook URL> Url to invoke when state transitions to insufficient data. May be used multiple times --time-constraint <Time Constraint> Only evaluate the alarm if the time at evaluation is within this time constraint.
Start point(s) of the constraint are specified with a cron expression, whereas its duration is given in seconds. Can be specified multiple times for multiple time constraints, format is: name=<CONSTRAINT_NAME>;start=<CRON>;duration=<SECONDS>;[description=<DESCRIPTION>;[timezone=<IANA Timezone>]] --repeat-actions {True|False} True if actions should be repeatedly notified while alarm remains in target state Table 6.62. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 6.63. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 6.64. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 6.65. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. Table 6.66. common alarm rules Value Summary --query <QUERY> For alarms of type threshold or event: key[op]data_type::value; list. data_type is optional, but if supplied must be string, integer, float, or boolean. For alarms of type gnocchi_aggregation_by_resources_threshold: need to specify a complex query json string, like: {"and": [{"=": {"ended_at": null}}, ... ]}. For alarms of type prometheus this should be valid PromQL query. --comparison-operator <OPERATOR> Operator to compare with, one of: [ lt , le , eq , ne , ge , gt ] --evaluation-periods <EVAL_PERIODS> Number of periods to evaluate over --threshold <THRESHOLD> Threshold to evaluate against. Table 6.67. event alarm Value Summary --event-type <EVENT_TYPE> Event type to evaluate against Table 6.68. threshold alarm Value Summary -m <METER NAME>, --meter-name <METER NAME> Meter to evaluate against --period <PERIOD> Length of each period (seconds) to evaluate over. --statistic <STATISTIC> Statistic to evaluate, one of: [ max , min , avg , sum , count ] Table 6.69. common gnocchi alarm rules Value Summary --granularity <GRANULARITY> The time range in seconds over which to query. --aggregation-method <AGGR_METHOD> The aggregation_method to compare to the threshold. --metric <METRIC>, --metrics <METRIC> The metric id or name depending on the alarm type Table 6.70. gnocchi resource threshold alarm Value Summary --resource-type <RESOURCE_TYPE> The type of resource. --resource-id <RESOURCE_ID> The id of a resource. Table 6.71. composite alarm Value Summary --composite-rule <COMPOSITE_RULE> Composite threshold rule with json format, the form can be a nested dict which combine threshold/gnocchi rules by "and", "or". For example, the form is like: {"or":[RULE1, RULE2, {"and": [RULE3, RULE4]}]}, The RULEx can be basic threshold rules but must include a "type" field, like this: {"threshold": 0.8,"meter_name":"cpu_util","type":"threshold"} Table 6.72. loadbalancer member health alarm Value Summary --stack-id <STACK_NAME_OR_ID> Name or id of the root / top level heat stack containing the loadbalancer pool and members. An update will be triggered on the root Stack if an unhealthy member is detected in the loadbalancer pool.
--pool-id <LOADBALANCER_POOL_NAME_OR_ID> Name or id of the loadbalancer pool for which the health of each member will be evaluated. --autoscaling-group-id <AUTOSCALING_GROUP_NAME_OR_ID> Id of the heat autoscaling group that contains the loadbalancer members. Unhealthy members will be marked as such before an update is triggered on the root stack.
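For example, the following command is a minimal sketch that creates a threshold alarm using the options documented above; the alarm name, meter, threshold values, and webhook URL are hypothetical:
openstack alarm create --name cpu-high --type threshold -m cpu_util --threshold 80 --comparison-operator gt --evaluation-periods 3 --period 600 --statistic avg --alarm-action 'http://example.com/alarm-hook'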
[ "openstack alarm create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --name <NAME> -t <TYPE> [--project-id <PROJECT_ID>] [--user-id <USER_ID>] [--description <DESCRIPTION>] [--state <STATE>] [--severity <SEVERITY>] [--enabled {True|False}] [--alarm-action <Webhook URL>] [--ok-action <Webhook URL>] [--insufficient-data-action <Webhook URL>] [--time-constraint <Time Constraint>] [--repeat-actions {True|False}] [--query <QUERY>] [--comparison-operator <OPERATOR>] [--evaluation-periods <EVAL_PERIODS>] [--threshold <THRESHOLD>] [--event-type <EVENT_TYPE>] [-m <METER NAME>] [--period <PERIOD>] [--statistic <STATISTIC>] [--granularity <GRANULARITY>] [--aggregation-method <AGGR_METHOD>] [--metric <METRIC>] [--resource-type <RESOURCE_TYPE>] [--resource-id <RESOURCE_ID>] [--composite-rule <COMPOSITE_RULE>] [--stack-id <STACK_NAME_OR_ID>] [--pool-id <LOADBALANCER_POOL_NAME_OR_ID>] [--autoscaling-group-id <AUTOSCALING_GROUP_NAME_OR_ID>]", "openstack alarm delete [-h] [--name <NAME>] [<ALARM ID or NAME>]", "openstack alarm-history search [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--query QUERY]", "openstack alarm-history show [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--limit <LIMIT>] [--marker <MARKER>] [--sort <SORT_KEY:SORT_DIR>] <alarm-id>", "openstack alarm list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--query QUERY | --filter <KEY1=VALUE1;KEY2=VALUE2...>] [--limit <LIMIT>] [--marker <MARKER>] [--sort <SORT_KEY:SORT_DIR>]", "openstack alarm quota set [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--alarm ALARM] project", "openstack alarm quota show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--project PROJECT]", "openstack alarm show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--name <NAME>] [<ALARM ID or NAME>]", "openstack alarm state get [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--name <NAME>] [<ALARM ID or NAME>]", "openstack alarm state set [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--name <NAME>] --state <STATE> [<ALARM ID or NAME>]", "openstack alarm update [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--name <NAME>] [-t <TYPE>] [--project-id <PROJECT_ID>] [--user-id <USER_ID>] [--description <DESCRIPTION>] [--state <STATE>] [--severity <SEVERITY>] [--enabled {True|False}] [--alarm-action <Webhook URL>] [--ok-action <Webhook URL>] [--insufficient-data-action <Webhook URL>] [--time-constraint <Time Constraint>] 
[--repeat-actions {True|False}] [--query <QUERY>] [--comparison-operator <OPERATOR>] [--evaluation-periods <EVAL_PERIODS>] [--threshold <THRESHOLD>] [--event-type <EVENT_TYPE>] [-m <METER NAME>] [--period <PERIOD>] [--statistic <STATISTIC>] [--granularity <GRANULARITY>] [--aggregation-method <AGGR_METHOD>] [--metric <METRIC>] [--resource-type <RESOURCE_TYPE>] [--resource-id <RESOURCE_ID>] [--composite-rule <COMPOSITE_RULE>] [--stack-id <STACK_NAME_OR_ID>] [--pool-id <LOADBALANCER_POOL_NAME_OR_ID>] [--autoscaling-group-id <AUTOSCALING_GROUP_NAME_OR_ID>] [<ALARM ID or NAME>]" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/command_line_interface_reference/alarm
6.4. Using sets in nftables commands
6.4. Using sets in nftables commands The nftables framework natively supports sets. You can use sets, for example, if a rule should match multiple IP addresses, port numbers, interfaces, or any other match criteria. 6.4.1. Using anonymous sets in nftables An anonymous set contains comma-separated values enclosed in curly brackets, such as { 22, 80, 443 } , that you use directly in a rule. You can also use anonymous sets for IP addresses or any other match criteria. The drawback of anonymous sets is that if you want to change the set, you must replace the rule. For a dynamic solution, use named sets as described in Section 6.4.2, "Using named sets in nftables" . Prerequisites The example_chain chain and the example_table table in the inet family exist. Procedure 6.13. Using anonymous sets in nftables For example, to add a rule to example_chain in example_table that allows incoming traffic to ports 22 , 80 , and 443 : Optionally, display all chains and their rules in example_table : 6.4.2. Using named sets in nftables The nftables framework supports mutable named sets. A named set is a list or range of elements that you can use in multiple rules within a table. Another benefit over anonymous sets is that you can update a named set without replacing the rules that use the set. When you create a named set, you must specify the type of elements the set contains. You can set the following types: ipv4_addr for a set that contains IPv4 addresses or ranges, such as 192.0.2.1 or 192.0.2.0/24 . ipv6_addr for a set that contains IPv6 addresses or ranges, such as 2001:db8:1::1 or 2001:db8:1::1/64 . ether_addr for a set that contains a list of media access control ( MAC ) addresses, such as 52:54:00:6b:66:42 . inet_proto for a set that contains a list of Internet protocol types, such as tcp . inet_service for a set that contains a list of Internet services, such as ssh . mark for a set that contains a list of packet marks. Packet marks can be any positive 32-bit integer value ( 0 to 2147483647 ). Prerequisites The example_chain chain and the example_table table exist. Procedure 6.14. Using named sets in nftables Create an empty set. The following examples create a set for IPv4 addresses: To create a set that can store multiple individual IPv4 addresses: To create a set that can store IPv4 address ranges: Important To prevent the shell from interpreting the semicolons as the end of the command, you must escape the semicolons with a backslash. Optionally, create rules that use the set. For example, the following command adds a rule to the example_chain in the example_table that will drop all packets from IPv4 addresses in example_set . Because example_set is still empty, the rule currently has no effect. Add IPv4 addresses to example_set : If you create a set that stores individual IPv4 addresses, enter: If you create a set that stores IPv4 ranges, enter: When you specify an IP address range, you can alternatively use the Classless Inter-Domain Routing ( CIDR ) notation, such as 192.0.2.0/24 in the above example. 6.4.3. Related information For further details about sets, see the Sets section in the nft(8) man page.
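As a minimal sketch of maintaining a named set after the procedure above, you can display the contents of example_set and remove individual elements without replacing the rules that reference the set:
# nft list set inet example_table example_set
# nft delete element inet example_table example_set { 192.0.2.2 }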
[ "nft add rule inet example_table example_chain tcp dport { 22, 80, 443 } accept", "nft list table inet example_table table inet example_table { chain example_chain { type filter hook input priority filter; policy accept; tcp dport { ssh, http, https } accept } }", "nft add set inet example_table example_set { type ipv4_addr \\; }", "nft add set inet example_table example_set { type ipv4_addr \\; flags interval \\; }", "nft add rule inet example_table example_chain ip saddr @example_set drop", "nft add element inet example_table example_set { 192.0.2.1, 192.0.2.2 }", "nft add element inet example_table example_set { 192.0.2.0-192.0.2.255 }" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/security_guide/sec-Using_sets_in_nftables_commands
Appendix A. Ceph block device configuration reference
Appendix A. Ceph block device configuration reference As a storage administrator, you can fine tune the behavior of Ceph block devices through the various options that are available. You can use this reference for viewing such things as the default Ceph block device options, and Ceph block device caching options. Prerequisites A running Red Hat Ceph Storage cluster. A.1. Block device default options It is possible to override the default settings for creating an image. Ceph creates images with format 2 and no striping. rbd_default_format Description The default format ( 2 ) if no other format is specified. Format 1 is the original format for a new image, which is compatible with all versions of librbd and the kernel module, but does not support newer features like cloning. Format 2 is supported by librbd and the kernel module since version 3.11 (except for striping). Format 2 adds support for cloning and is more easily extensible to allow more features in the future. Type Integer Default 2 rbd_default_order Description The default order if no other order is specified. Type Integer Default 22 rbd_default_stripe_count Description The default stripe count if no other stripe count is specified. Changing the default value requires the striping v2 feature. Type 64-bit Unsigned Integer Default 0 rbd_default_stripe_unit Description The default stripe unit if no other stripe unit is specified. Changing the unit from 0 (that is, the object size) requires the striping v2 feature. Type 64-bit Unsigned Integer Default 0 rbd_default_features Description The default features enabled when creating a block device image. This setting only applies to format 2 images. The settings are: 1: Layering support. Layering enables you to use cloning. 2: Striping v2 support. Striping spreads data across multiple objects. Striping helps with parallelism for sequential read/write workloads. 4: Exclusive locking support. When enabled, it requires a client to get a lock on an object before making a write. 8: Object map support. Block devices are thin-provisioned, meaning they only store data that actually exists. Object map support helps track which objects actually exist (have data stored on a drive). Enabling object map support speeds up I/O operations for cloning, or importing and exporting a sparsely populated image. 16: Fast-diff support. Fast-diff support depends on object map support and exclusive lock support. It adds another property to the object map, which makes it much faster to generate diffs between snapshots of an image, and much faster to determine the actual data usage of a snapshot. 32: Deep-flatten support. Deep-flatten makes rbd flatten work on all the snapshots of an image, in addition to the image itself. Without it, snapshots of an image will still rely on the parent, so the parent will not be deletable until the snapshots are deleted. Deep-flatten makes a parent independent of its clones, even if they have snapshots. 64: Journaling support. Journaling records all modifications to an image in the order they occur. This ensures that a crash-consistent mirror of the remote image is available locally. The enabled features are the sum of the numeric settings. Type Integer Default 61 - layering, exclusive-lock, object-map, fast-diff, and deep-flatten are enabled Important The current default setting is not compatible with the RBD kernel driver nor older RBD clients. rbd_default_map_options Description Most of the options are useful mainly for debugging and benchmarking. See man rbd under Map Options for details.
Type String Default "" A.2. Block device general options rbd_op_threads Description The number of block device operation threads. Type Integer Default 1 Warning Do not change the default value of rbd_op_threads because setting it to a number higher than 1 might cause data corruption. rbd_op_thread_timeout Description The timeout (in seconds) for block device operation threads. Type Integer Default 60 rbd_non_blocking_aio Description If true , Ceph will process block device asynchronous I/O operations from a worker thread to prevent blocking. Type Boolean Default true rbd_concurrent_management_ops Description The maximum number of concurrent management operations in flight (for example, deleting or resizing an image). Type Integer Default 10 rbd_request_timed_out_seconds Description The number of seconds before a maintenance request times out. Type Integer Default 30 rbd_clone_copy_on_read Description When set to true , copy-on-read cloning is enabled. Type Boolean Default false rbd_enable_alloc_hint Description If true , allocation hinting is enabled, and the block device will issue a hint to the OSD back end to indicate the expected object size. Type Boolean Default true rbd_skip_partial_discard Description If true , the block device will skip zeroing a range when trying to discard a range inside an object. Type Boolean Default true rbd_tracing Description Set this option to true to enable the Linux Trace Toolkit next generation User Space Tracer (LTTng-UST) tracepoints. See Tracing RADOS Block Device (RBD) Workloads with the RBD Replay Feature for details. Type Boolean Default false rbd_validate_pool Description Set this option to true to validate empty pools for RBD compatibility. Type Boolean Default true rbd_validate_names Description Set this option to true to validate image specifications. Type Boolean Default true A.3. Block device caching options The user space implementation of the Ceph block device, that is, librbd , cannot take advantage of the Linux page cache, so it includes its own in-memory caching, called RBD caching . Ceph block device caching behaves just like well-behaved hard disk caching. When the operating system sends a barrier or a flush request, all dirty data is written to the Ceph OSDs. This means that using write-back caching is just as safe as using a well-behaved physical hard disk with a virtual machine that properly sends flushes, that is, Linux kernel version 2.6.32 or higher. The cache uses a Least Recently Used (LRU) algorithm, and in write-back mode it can coalesce contiguous requests for better throughput. Ceph block devices support write-back caching. To enable write-back caching, set rbd_cache = true in the [client] section of the Ceph configuration file. By default, librbd does not perform any caching. Writes and reads go directly to the storage cluster, and writes return only when the data is on disk on all replicas. With caching enabled, writes return immediately, unless there are more than rbd_cache_max_dirty unflushed bytes. In this case, the write triggers write-back and blocks until enough bytes are flushed. Ceph block devices support write-through caching. You can set the size of the cache, and you can set targets and limits to switch from write-back caching to write-through caching. To enable write-through mode, set rbd_cache_max_dirty to 0. This means writes return only when the data is on disk on all replicas, but reads may come from the cache. The cache is in memory on the client, and each Ceph block device image has its own.
Since the cache is local to the client, there is no coherency if other clients are accessing the image. Running other file systems, such as GFS or OCFS, on top of Ceph block devices will not work with caching enabled. The Ceph configuration settings for Ceph block devices must be set in the [client] section of the Ceph configuration file, by default, /etc/ceph/ceph.conf . The settings include: rbd_cache Description Enable caching for RADOS Block Device (RBD). Type Boolean Required No Default true rbd_cache_size Description The RBD cache size in bytes. Type 64-bit Integer Required No Default 32 MiB rbd_cache_max_dirty Description The dirty limit in bytes at which the cache triggers write-back. If 0 , uses write-through caching. Type 64-bit Integer Required No Constraint Must be less than rbd cache size . Default 24 MiB rbd_cache_target_dirty Description The dirty target before the cache begins writing data to the data storage. Does not block writes to the cache. Type 64-bit Integer Required No Constraint Must be less than rbd cache max dirty . Default 16 MiB rbd_cache_max_dirty_age Description The number of seconds dirty data is in the cache before writeback starts. Type Float Required No Default 1.0 rbd_cache_max_dirty_object Description The dirty limit for objects - set to 0 to calculate automatically from rbd_cache_size . Type Integer Default 0 rbd_cache_block_writes_upfront Description If true , it will block writes to the cache before the aio_write call completes. If false , it will block before the aio_completion is called. Type Boolean Default false rbd_cache_writethrough_until_flush Description Start out in write-through mode, and switch to write-back after the first flush request is received. Enabling this is a conservative but safe setting in case VMs running on rbd are too old to send flushes, like the virtio driver in Linux before 2.6.32. Type Boolean Required No Default true A.4. Block device parent and child read options rbd_balance_snap_reads Description Ceph typically reads objects from the primary OSD. Since reads are immutable, you may enable this feature to balance snap reads between the primary OSD and the replicas. Type Boolean Default false rbd_localize_snap_reads Description Whereas rbd_balance_snap_reads randomizes the replica used for reading a snapshot, if you enable rbd_localize_snap_reads , the block device looks to the CRUSH map to find the closest or local OSD for reading the snapshot. Type Boolean Default false rbd_balance_parent_reads Description Ceph typically reads objects from the primary OSD. Since reads are immutable, you may enable this feature to balance parent reads between the primary OSD and the replicas. Type Boolean Default false rbd_localize_parent_reads Description Whereas rbd_balance_parent_reads randomizes the replica used for reading a parent, if you enable rbd_localize_parent_reads , the block device looks to the CRUSH map to find the closest or local OSD for reading the parent. Type Boolean Default true A.5. Block device read ahead options RBD supports read-ahead/prefetching to optimize small, sequential reads. This should normally be handled by the guest OS in the case of a VM, but boot loaders may not issue efficient reads. Read-ahead is automatically disabled if caching is disabled. rbd_readahead_trigger_requests Description Number of sequential read requests necessary to trigger read-ahead. Type Integer Required No Default 10 rbd_readahead_max_bytes Description Maximum size of a read-ahead request. If zero, read-ahead is disabled.
Type 64-bit Integer Required No Default 512 KiB rbd_readahead_disable_after_bytes Description After this many bytes have been read from an RBD image, read-ahead is disabled for that image until it is closed. This allows the guest OS to take over read-ahead once it is booted. If zero, read-ahead stays enabled. Type 64-bit Integer Required No Default 50 MiB A.6. Block device blocklist options rbd_blocklist_on_break_lock Description Whether to blocklist clients whose lock was broken. Type Boolean Default true rbd_blocklist_expire_seconds Description The number of seconds to blocklist - set to 0 to use the OSD default. Type Integer Default 0 A.7. Block device journal options rbd_journal_order Description The number of bits to shift to compute the journal object maximum size. The value is between 12 and 64 . Type 32-bit Unsigned Integer Default 24 rbd_journal_splay_width Description The number of active journal objects. Type 32-bit Unsigned Integer Default 4 rbd_journal_commit_age Description The commit time interval in seconds. Type Double Precision Floating Point Number Default 5 rbd_journal_object_flush_interval Description The maximum number of pending commits per journal object. Type Integer Default 0 rbd_journal_object_flush_bytes Description The maximum number of pending bytes per journal object. Type Integer Default 0 rbd_journal_object_flush_age Description The maximum time interval in seconds for pending commits. Type Double Precision Floating Point Number Default 0 rbd_journal_pool Description Specifies a pool for journal objects. Type String Default "" A.8. Block device configuration override options Block device configuration override options for global and pool levels. Global level Available keys rbd_qos_bps_burst Description The desired burst limit of IO bytes. Type Integer Default 0 rbd_qos_bps_limit Description The desired limit of IO bytes per second. Type Integer Default 0 rbd_qos_iops_burst Description The desired burst limit of IO operations. Type Integer Default 0 rbd_qos_iops_limit Description The desired limit of IO operations per second. Type Integer Default 0 rbd_qos_read_bps_burst Description The desired burst limit of read bytes. Type Integer Default 0 rbd_qos_read_bps_limit Description The desired limit of read bytes per second. Type Integer Default 0 rbd_qos_read_iops_burst Description The desired burst limit of read operations. Type Integer Default 0 rbd_qos_read_iops_limit Description The desired limit of read operations per second. Type Integer Default 0 rbd_qos_write_bps_burst Description The desired burst limit of write bytes. Type Integer Default 0 rbd_qos_write_bps_limit Description The desired limit of write bytes per second. Type Integer Default 0 rbd_qos_write_iops_burst Description The desired burst limit of write operations. Type Integer Default 0 rbd_qos_write_iops_limit Description The desired limit of write operations per second. Type Integer Default 0 The above keys can be used for the following: rbd config global set CONFIG_ENTITY KEY VALUE Description Set a global level configuration override. rbd config global get CONFIG_ENTITY KEY Description Get a global level configuration override. rbd config global list CONFIG_ENTITY Description List the global level configuration overrides. rbd config global remove CONFIG_ENTITY KEY Description Remove a global level configuration override. Pool level rbd config pool set POOL_NAME KEY VALUE Description Set a pool level configuration override.
rbd config pool get POOL_NAME KEY Description Get a pool level configuration override. rbd config pool list POOL_NAME Description List the pool level configuration overrides. rbd config pool remove POOL_NAME KEY Description Remove a pool level configuration override. Note CONFIG_ENTITY is global, client or client id. KEY is the config key. VALUE is the config value. POOL_NAME is the name of the pool. A.9. Block device input and output options General input and output options for Red Hat Ceph Storage. rbd_compression_hint Description Hint to send to the OSDs on write operations. If set to compressible and the OSD bluestore_compression_mode setting is passive , the OSD attempts to compress data. If set to incompressible and the OSD bluestore_compression_mode setting is aggressive , the OSD will not attempt to compress data. Type Enum Required No Default none Values none , compressible , incompressible rbd_read_from_replica_policy Description Policy for determining which OSD receives read operations. If set to default , each PG's primary OSD will always be used for read operations. If set to balance , read operations will be sent to a randomly selected OSD within the replica set. If set to localize , read operations will be sent to the closest OSD as determined by the CRUSH map and the crush_location configuration option, where the crush_location is denoted using key=value . The key aligns with the CRUSH map keys. Note This feature requires the storage cluster to be configured with a minimum compatible OSD release of the latest version of Red Hat Ceph Storage. Type Enum Required No Default default Values default , balance , localize
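For example, the following commands are a minimal sketch that throttles a pool with one of the QoS keys documented in the override options above; the pool name mypool is a hypothetical value:
rbd config pool set mypool rbd_qos_iops_limit 1000
rbd config pool get mypool rbd_qos_iops_limit
rbd config pool remove mypool rbd_qos_iops_limit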
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/block_device_guide/ceph-block-device-configuration-reference
Chapter 10. Configuring a deployment
Chapter 10. Configuring a deployment Configure and manage a Streams for Apache Kafka deployment to your precise needs using Streams for Apache Kafka custom resources. Streams for Apache Kafka provides example custom resources with each release, allowing you to configure and create instances of supported Kafka components. Fine-tune your deployment by configuring custom resources to include additional features according to your specific requirements. Use custom resources to configure and create instances of the following components: Kafka clusters Kafka Connect clusters Kafka MirrorMaker Kafka Bridge Cruise Control You can use configuration to manage your instances or modify your deployment to introduce additional features. New features are sometimes introduced through feature gates, which are controlled through operator configuration. The Streams for Apache Kafka Custom Resource API Reference describes the properties you can use in your configuration. Important Kafka configuration options Through configuration of the Kafka resource, you can introduce the following: Data storage Rack awareness Listeners for authenticated client access to the Kafka cluster Topic Operator for managing Kafka topics User Operator for managing Kafka users (clients) Cruise Control for cluster rebalancing Kafka Exporter for collecting lag metrics Use KafkaNodePool resources to configure distinct groups of nodes within a Kafka cluster. Common configuration Common configuration is configured independently for each component, such as the following: Bootstrap servers for host/port connection to a Kafka cluster Metrics configuration Healthchecks and liveness probes Resource limits and requests (CPU/Memory) Logging frequency JVM options for maximum and minimum memory allocation Adding additional volumes and volume mounts Config maps to centralize configuration For specific areas of configuration, namely metrics, logging, and external configuration for Kafka Connect connectors, you can also use ConfigMap resources. By using a ConfigMap resource to incorporate configuration, you centralize maintenance. You can also use configuration providers to load configuration from external sources, which we recommend for supplying the credentials for Kafka Connect connector configuration. TLS certificate management When deploying Kafka, the Cluster Operator automatically sets up and renews TLS certificates to enable encryption and authentication within your cluster. If required, you can manually renew the cluster and clients CA certificates before their renewal period starts. You can also replace the keys used by the cluster and clients CA certificates. For more information, see Renewing CA certificates manually and Replacing private keys . Applying changes to a custom resource configuration file You add configuration to a custom resource using spec properties. After adding the configuration, you can use oc to apply the changes to a custom resource configuration file: Applying changes to a resource configuration file oc apply -f <kafka_configuration_file> Note Labels applied to a custom resource are also applied to the OpenShift resources making up its cluster. This provides a convenient mechanism for resources to be labeled as required. 10.1. Using example configuration files Further enhance your deployment by incorporating additional supported configuration. Example configuration files are included in the Streams for Apache Kafka deployment files . 
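For example, after downloading the release artifacts, you can apply one of the Kafka cluster examples directly; the file path shown here is an assumed example location within the examples directory and depends on the release:
oc apply -f examples/kafka/kafka-persistent.yaml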
The example files include only the essential properties and values for custom resources by default. You can download and apply the examples using the oc command-line tool. The examples can serve as a starting point when building your own Kafka component configuration for deployment.

Note If you installed Streams for Apache Kafka using the Operator, you can still download the example files and use them to upload configuration.

The release artifacts include an examples directory that contains the configuration examples.

Example configuration files provided with Streams for Apache Kafka:

1 KafkaUser custom resource configuration, which is managed by the User Operator.
2 KafkaTopic custom resource configuration, which is managed by the Topic Operator.
3 Authentication and authorization configuration for Kafka components. Includes example configuration for TLS and SCRAM-SHA-512 authentication. The Red Hat build of Keycloak example includes Kafka custom resource configuration and a Red Hat build of Keycloak realm specification. You can use the example to try Red Hat build of Keycloak authorization services. There is also an example with OAuth authentication and Keycloak authorization metrics enabled.
4 KafkaMirrorMaker and KafkaMirrorMaker2 custom resource configurations for a deployment of MirrorMaker. Includes example configuration for replication policy and synchronization frequency.
5 Metrics configuration, including Prometheus installation and Grafana dashboard files.
6 Kafka and KafkaNodePool custom resource configurations for a deployment of Kafka clusters that use ZooKeeper mode. Includes example configuration for an ephemeral or persistent single or multi-node deployment.
7 Kafka and KafkaNodePool configurations for a deployment of Kafka clusters that use KRaft (Kafka Raft metadata) mode.
8 Kafka and KafkaRebalance configurations for deploying and using Cruise Control to manage clusters. Kafka configuration examples enable auto-rebalancing on scaling events and set default optimization goals. KafkaRebalance configuration examples set proposal-specific optimization goals and generate optimization proposals in various supported modes.
9 KafkaConnect and KafkaConnector custom resource configuration for a deployment of Kafka Connect. Includes example configurations for a single or multi-node deployment.
10 KafkaBridge custom resource configuration for a deployment of Kafka Bridge.

10.2. Configuring Kafka in KRaft mode

Update the spec properties of the Kafka custom resource to configure your deployment of Kafka in KRaft mode. As well as configuring Kafka, you can add configuration for Streams for Apache Kafka operators. The KRaft metadata version ( spec.kafka.metadataVersion ) must be a version supported by the Kafka version ( spec.kafka.version ). If the metadata version is not set in the configuration, the Cluster Operator updates the version to the default for the Kafka version used.

Note The oldest supported metadata version is 3.3. Using a metadata version that is older than the Kafka version might cause some features to be disabled.

Kafka clusters operating in KRaft mode also use node pools. The following must be specified in the node pool configuration:

Roles assigned to each node within the Kafka cluster
Number of replica nodes used
Storage specification for the nodes

Other optional properties may also be set in node pools. For a deeper understanding of the Kafka cluster configuration options, refer to the Streams for Apache Kafka Custom Resource API Reference.
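As a quick illustration of those mandatory node pool properties, a minimal sketch of a KafkaNodePool resource; the names and sizes are placeholders, and a fuller example appears in Section 10.4:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: dual-role                    # placeholder name
  labels:
    strimzi.io/cluster: my-cluster   # must match the name of the Kafka resource
spec:
  replicas: 3                        # number of replica nodes (required)
  roles:                             # roles assigned to each node (required)
    - controller
    - broker
  storage:                           # storage specification (required)
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        deleteClaim: false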
Example Kafka custom resource configuration # Basic configuration (required) apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster # Deployment specifications spec: kafka: # Listener configuration (required) listeners: 1 - name: plain 2 port: 9092 3 type: internal 4 tls: false 5 configuration: useServiceDnsDomain: true 6 - name: tls port: 9093 type: internal tls: true authentication: 7 type: tls - name: external1 8 port: 9094 type: route tls: true configuration: brokerCertChainAndKey: 9 secretName: my-secret certificate: my-certificate.crt key: my-key.key # Kafka version (recommended) version: 3.9.0 10 # KRaft metadata version (recommended) metadataVersion: 3.9 11 # Kafka configuration (recommended) config: 12 auto.create.topics.enable: "false" offsets.topic.replication.factor: 3 transaction.state.log.replication.factor: 3 transaction.state.log.min.isr: 2 default.replication.factor: 3 min.insync.replicas: 2 # Resources requests and limits (recommended) resources: 13 requests: memory: 64Gi cpu: "8" limits: memory: 64Gi cpu: "12" # Logging configuration (optional) logging: 14 type: inline loggers: kafka.root.logger.level: INFO # Readiness probe (optional) readinessProbe: 15 initialDelaySeconds: 15 timeoutSeconds: 5 # Liveness probe (optional) livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 # JVM options (optional) jvmOptions: 16 -Xms: 8192m -Xmx: 8192m # Custom image (optional) image: my-org/my-image:latest 17 # Authorization (optional) authorization: 18 type: simple # Rack awareness (optional) rack: 19 topologyKey: topology.kubernetes.io/zone # Metrics configuration (optional) metricsConfig: 20 type: jmxPrometheusExporter valueFrom: configMapKeyRef: 21 name: my-config-map key: my-key # Entity Operator (recommended) entityOperator: 22 topicOperator: watchedNamespace: my-topic-namespace reconciliationIntervalMs: 60000 # Resources requests and limits (recommended) resources: requests: memory: 512Mi cpu: "1" limits: memory: 512Mi cpu: "1" # Logging configuration (optional) logging: 23 type: inline loggers: rootLogger.level: INFO userOperator: watchedNamespace: my-topic-namespace reconciliationIntervalMs: 60000 # Resources requests and limits (recommended) resources: requests: memory: 512Mi cpu: "1" limits: memory: 512Mi cpu: "1" # Logging configuration (optional) logging: 24 type: inline loggers: rootLogger.level: INFO # Kafka Exporter (optional) kafkaExporter: 25 # ... # Cruise Control (optional) cruiseControl: 26 # ... 1 Listeners configure how clients connect to the Kafka cluster via bootstrap addresses. Listeners are configured as internal or external listeners for connection from inside or outside the OpenShift cluster. 2 Name to identify the listener. Must be unique within the Kafka cluster. 3 Port number used by the listener inside Kafka. The port number has to be unique within a given Kafka cluster. Allowed port numbers are 9092 and higher with the exception of ports 9404 and 9999, which are already used for Prometheus and JMX. Depending on the listener type, the port number might not be the same as the port number that connects Kafka clients. 4 Listener type specified as internal or cluster-ip (to expose Kafka using per-broker ClusterIP services), or for external listeners, as route (OpenShift only), loadbalancer , nodeport or ingress (Kubernetes only). 5 Enables or disables TLS encryption for each listener. For route and ingress type listeners, TLS encryption must always be enabled by setting it to true . 
6 Defines whether the fully-qualified DNS names including the cluster service suffix (usually .cluster.local ) are assigned.
7 Listener authentication mechanism specified as mTLS, SCRAM-SHA-512, or token-based OAuth 2.0.
8 External listener configuration specifies how the Kafka cluster is exposed outside OpenShift, such as through a route , loadbalancer or nodeport .
9 Optional configuration for a Kafka listener certificate managed by an external CA (certificate authority). The brokerCertChainAndKey specifies a Secret that contains a server certificate and a private key. You can configure Kafka listener certificates on any listener with enabled TLS encryption.
10 Kafka version, which can be changed to a supported version by following the upgrade procedure.
11 Kafka metadata version, which can be changed to a supported version by following the upgrade procedure.
12 Broker configuration. Standard Apache Kafka configuration may be provided, restricted to those properties not managed directly by Streams for Apache Kafka.
13 Requests for reservation of supported resources, currently cpu and memory , and limits to specify the maximum resources that can be consumed.
14 Kafka loggers and log levels added directly ( inline ) or indirectly ( external ) through a ConfigMap. A custom Log4j configuration must be placed under the log4j.properties key in the ConfigMap. For the Kafka kafka.root.logger.level logger, you can set the log level to INFO, ERROR, WARN, TRACE, DEBUG, FATAL, or OFF.
15 Healthchecks to know when to restart a container (liveness) and when a container can accept traffic (readiness).
16 JVM configuration options to optimize performance for the Virtual Machine (VM) running Kafka.
17 ADVANCED OPTION: Container image configuration, which is recommended only in special situations.
18 Authorization enables simple, OAuth 2.0, or OPA authorization on the Kafka broker. Simple authorization uses the AclAuthorizer and StandardAuthorizer Kafka plugins.
19 Rack awareness configuration to spread replicas across different racks, data centers, or availability zones. The topologyKey must match a node label containing the rack ID. The example used in this configuration specifies a zone using the standard topology.kubernetes.io/zone label.
20 Prometheus metrics enabled. In this example, metrics are configured for the Prometheus JMX Exporter (the default metrics exporter).
21 Rules for exporting metrics in Prometheus format to a Grafana dashboard through the Prometheus JMX Exporter, which are enabled by referencing a ConfigMap containing configuration for the Prometheus JMX exporter. You can enable metrics without further configuration using a reference to a ConfigMap containing an empty file under metricsConfig.valueFrom.configMapKeyRef.key .
22 Entity Operator configuration, which specifies the configuration for the Topic Operator and User Operator.
23 Specified Topic Operator loggers and log levels. This example uses inline logging.
24 Specified User Operator loggers and log levels.
25 Kafka Exporter configuration. Kafka Exporter is an optional component for extracting metrics data from Kafka brokers, in particular consumer lag data. For Kafka Exporter to be able to work properly, consumer groups need to be in use.
26 Optional configuration for Cruise Control, which is used to rebalance the Kafka cluster.
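To illustrate callouts 20 and 21, a minimal sketch of a ConfigMap that the metricsConfig reference could point to; the key name matches the example above, and the rules shown are placeholders (an empty value under the key also works and enables the default metrics behavior):

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config-map
data:
  my-key: |
    # Placeholder Prometheus JMX Exporter configuration
    lowercaseOutputName: true
    rules:
      - pattern: "kafka.server<type=(.+), name=(.+)><>Value"
        name: kafka_server_$1_$2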
10.2.1. Setting throughput and storage limits on brokers

This procedure describes how to set throughput and storage limits on brokers in your Kafka cluster. Enable a quota plugin and configure limits using quotas properties in the Kafka resource. There are two types of quota plugins available:

The strimzi type enables the Strimzi Quotas plugin.
The kafka type enables the built-in Kafka plugin.

Only one quota plugin can be enabled at a time. The built-in kafka plugin is enabled by default. Enabling the strimzi plugin automatically disables the built-in plugin.

strimzi plugin

The strimzi plugin provides storage utilization quotas and dynamic distribution of throughput limits. Storage quotas throttle Kafka producers based on disk storage utilization. Limits can be specified in bytes ( minAvailableBytesPerVolume ) or as a percentage ( minAvailableRatioPerVolume ) of available disk space, applying to each disk individually. When any broker in the cluster exceeds the configured disk threshold, clients are throttled to prevent disks from filling up too quickly and exceeding capacity. A total throughput limit is distributed dynamically across all clients. For example, if you set a 40 MBps producer byte-rate threshold, the distribution across two producers is not static. If one producer is using 10 MBps, the other can use up to 30 MBps. Specific users (clients) can be excluded from the restrictions.

Note With the strimzi plugin, you see only aggregated quota metrics, not per-client metrics.

kafka plugin

The kafka plugin applies throughput limits on a per-user, per-broker basis and includes additional CPU and operation rate limits. For example, setting a 20 MBps producer byte-rate threshold limits each user to 20 MBps on a per-broker basis across all producer connections for that user. There is no total throughput limit as there is in the strimzi plugin. Limits can be overridden by user-specific quota configurations. CPU utilization limits for each client can be set as a percentage of the network threads and I/O threads on a per-broker basis. The number of concurrent partition creation and deletion operations (mutations) allowed per second can be set on a per-broker basis.

When using the default Kafka quotas plugin, the default quotas (if set) are applied to all users. This includes internal users such as the Topic Operator and Cruise Control, which may impact their operations. To avoid unduly limiting internal users, set the quotas carefully. For example, a quota automatically applied to the Topic Operator by the Kafka quotas plugin could constrain the controller mutation rate, potentially throttling topic creation or deletion operations. Therefore, it is important to understand the minimal quotas required by the Topic Operator to function correctly and explicitly set appropriate quotas to avoid such issues. Monitoring relevant controller and broker metrics can help track and optimize the rate of operations on topics. Cruise Control and its metrics reporter also require sufficient produce and fetch rates to conduct rebalances, depending on the scale and configuration of the Kafka cluster. To prevent issues for Cruise Control, you might start with a rate of at least 1 KB/s for its producers and consumers in small clusters, such as three brokers with moderate traffic, and adjust as needed for larger or more active clusters.

Prerequisites

The Cluster Operator that manages the Kafka cluster is running.

Procedure

Add the plugin configuration to the quotas section of the Kafka resource.
Example strimzi plugin configuration

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    quotas:
      type: strimzi
      producerByteRate: 1000000 1
      consumerByteRate: 1000000 2
      minAvailableBytesPerVolume: 500000000000 3
      excludedPrincipals: 4
        - my-user

1 Sets a producer byte-rate threshold of 1 MBps.
2 Sets a consumer byte-rate threshold of 1 MBps.
3 Sets an available bytes limit for storage of 500 GB.
4 Excludes my-user from the restrictions.

minAvailableBytesPerVolume and minAvailableRatioPerVolume are mutually exclusive. Only configure one of these parameters.

Example kafka plugin configuration

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    quotas:
      type: kafka
      producerByteRate: 1000000
      consumerByteRate: 1000000
      requestPercentage: 55 1
      controllerMutationRate: 50 2

1 Sets the CPU utilization limit to 55%.
2 Sets the controller mutation rate to 50 operations per second.

Apply the changes to the Kafka configuration.

Note Additional options can be configured in the spec.kafka.config section. The full list of supported options can be found in the plugin documentation.

Additional resources

QuotasPluginStrimzi schema reference
QuotasPluginKafka schema reference

10.2.2. Deleting Kafka nodes using annotations

This procedure describes how to delete an existing Kafka node by using an OpenShift annotation. Deleting a Kafka node consists of deleting both the Pod on which the Kafka broker is running and the related PersistentVolumeClaim (if the cluster was deployed with persistent storage). After deletion, the Pod and its related PersistentVolumeClaim are recreated automatically.

Warning Deleting a PersistentVolumeClaim can cause permanent data loss and the availability of your cluster cannot be guaranteed. The following procedure should only be performed if you have encountered storage issues.

Prerequisites

A running Cluster Operator

Procedure

Find the name of the Pod that you want to delete. Kafka broker pods are named <cluster_name>-kafka-<index_number> , where <index_number> starts at zero and ends at the total number of replicas minus one. For example, my-cluster-kafka-0 .

Use oc annotate to annotate the Pod resource in OpenShift:

oc annotate pod <cluster_name>-kafka-<index_number> strimzi.io/delete-pod-and-pvc="true"

Wait for the next reconciliation, when the annotated pod and its underlying persistent volume claim are deleted and then recreated.

10.3. Configuring Kafka with ZooKeeper

Update the spec properties of the Kafka custom resource to configure your deployment of Kafka with ZooKeeper. As well as configuring Kafka, you can add configuration for ZooKeeper and the Streams for Apache Kafka operators. The configuration options for Kafka and the Streams for Apache Kafka operators are the same as when using Kafka in KRaft mode. For descriptions of the properties, see Section 10.2, "Configuring Kafka in KRaft mode". The inter-broker protocol version ( inter.broker.protocol.version ) must be a version supported by the Kafka version ( spec.kafka.version ). If the inter-broker protocol version is not set in the configuration, the Cluster Operator updates the version to the default for the Kafka version used.
If you are also using node pools, the following must be specified in the node pool configuration : Roles assigned to each node within the Kafka cluster Number of replica nodes used Storage specification for the nodes If set in the node pool configuration, the equivalent configuration in the Kafka resource, such as spec.kafka.replicas , is not required. Other optional properties may also be set in node pools. For a deeper understanding of the ZooKeeper cluster configuration options, refer to the Streams for Apache Kafka Custom Resource API Reference . Example Kafka custom resource configuration when using ZooKeeper # Basic configuration (required) apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster # Deployment specifications spec: # Kafka configuration (required) kafka: # Replicas (required) replicas: 3 # Listener configuration (required) listeners: - name: plain port: 9092 type: internal tls: false configuration: useServiceDnsDomain: true - name: tls port: 9093 type: internal tls: true authentication: type: tls - name: external1 port: 9094 type: route tls: true configuration: brokerCertChainAndKey: secretName: my-secret certificate: my-certificate.crt key: my-key.key # Storage configuration (required) storage: type: persistent-claim size: 10000Gi # Kafka version (recommended) version: 3.9.0 # Kafka configuration (recommended) config: auto.create.topics.enable: "false" offsets.topic.replication.factor: 3 transaction.state.log.replication.factor: 3 transaction.state.log.min.isr: 2 default.replication.factor: 3 min.insync.replicas: 2 inter.broker.protocol.version: "3.9" # Resources requests and limits (recommended) resources: requests: memory: 64Gi cpu: "8" limits: memory: 64Gi cpu: "12" # Logging configuration (optional) logging: type: inline loggers: kafka.root.logger.level: INFO # Readiness probe (optional) readinessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 # Liveness probe (optional) livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 # JVM options (optional) jvmOptions: -Xms: 8192m -Xmx: 8192m # Custom image (optional) image: my-org/my-image:latest # Authorization (optional) authorization: type: simple # Rack awareness (optional) rack: topologyKey: topology.kubernetes.io/zone # Metrics configuration (optional) metricsConfig: type: jmxPrometheusExporter valueFrom: configMapKeyRef: name: my-config-map key: my-key # ... # ZooKeeper configuration (required) zookeeper: 1 # Replicas (required) replicas: 3 2 # Storage configuration (required) storage: 3 type: persistent-claim size: 1000Gi # Resources requests and limits (recommended) resources: 4 requests: memory: 8Gi cpu: "2" limits: memory: 8Gi cpu: "2" # Logging configuration (optional) logging: 5 type: inline loggers: zookeeper.root.logger: INFO # JVM options (optional) jvmOptions: 6 -Xms: 4096m -Xmx: 4096m # Metrics configuration (optional) metricsConfig: 7 type: jmxPrometheusExporter valueFrom: configMapKeyRef: 8 name: my-config-map key: my-key # ... 
# Entity operator (recommended) entityOperator: topicOperator: # Resources requests and limits (recommended) resources: requests: memory: 512Mi cpu: "1" limits: memory: 512Mi cpu: "1" # Logging configuration (optional) logging: type: inline loggers: rootLogger.level: INFO watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 userOperator: # Resources requests and limits (recommended) resources: requests: memory: 512Mi cpu: "1" limits: memory: 512Mi cpu: "1" # Logging configuration (optional) logging: type: inline loggers: rootLogger.level: INFO watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 # Kafka Exporter (optional) kafkaExporter: # ... # Cruise Control (optional) cruiseControl: # ...

1 ZooKeeper-specific configuration contains properties similar to the Kafka configuration.
2 The number of ZooKeeper nodes. ZooKeeper clusters or ensembles usually run with an odd number of nodes, typically three, five, or seven. The majority of nodes must be available in order to maintain an effective quorum. If the ZooKeeper cluster loses its quorum, it will stop responding to clients and the Kafka brokers will stop working. Having a stable and highly available ZooKeeper cluster is crucial for Streams for Apache Kafka.
3 Storage size for persistent volumes may be increased and additional volumes may be added to JBOD storage.
4 Requests for reservation of supported resources, currently cpu and memory , and limits to specify the maximum resources that can be consumed.
5 ZooKeeper loggers and log levels.
6 JVM configuration options to optimize performance for the Virtual Machine (VM) running ZooKeeper.
7 Prometheus metrics enabled. In this example, metrics are configured for the Prometheus JMX Exporter (the default metrics exporter).
8 Rules for exporting metrics in Prometheus format to a Grafana dashboard through the Prometheus JMX Exporter, which are enabled by referencing a ConfigMap containing configuration for the Prometheus JMX exporter. You can enable metrics without further configuration using a reference to a ConfigMap containing an empty file under metricsConfig.valueFrom.configMapKeyRef.key .

10.3.1. Default ZooKeeper configuration values

When deploying ZooKeeper with Streams for Apache Kafka, some of the default configuration set by Streams for Apache Kafka differs from the standard ZooKeeper defaults. This is because Streams for Apache Kafka sets a number of ZooKeeper properties with values that are optimized for running ZooKeeper within an OpenShift environment.

Table 10.1. Default ZooKeeper properties in Streams for Apache Kafka

tickTime (default: 2000): The length of a single tick in milliseconds, which determines the length of a session timeout.
initLimit (default: 5): The maximum number of ticks that a follower is allowed to fall behind the leader in a ZooKeeper cluster.
syncLimit (default: 2): The maximum number of ticks that a follower is allowed to be out of sync with the leader in a ZooKeeper cluster.
autopurge.purgeInterval (default: 1): Enables the autopurge feature and sets the time interval in hours for purging the server-side ZooKeeper transaction log.
admin.enableServer (default: false): Flag to disable the ZooKeeper admin server. The admin server is not used by Streams for Apache Kafka.

Important Modifying these default values through zookeeper.config in the Kafka custom resource may impact the behavior and performance of your ZooKeeper cluster.
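As an illustration of the note above, a minimal sketch of overriding one of these defaults through zookeeper.config; the value shown is an example only:

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 1000Gi
    config:
      autopurge.purgeInterval: 2   # example override of the Streams for Apache Kafka default (1)
  # ...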
10.3.2. Deleting ZooKeeper nodes using annotations

This procedure describes how to delete an existing ZooKeeper node by using an OpenShift annotation. Deleting a ZooKeeper node consists of deleting both the Pod on which ZooKeeper is running and the related PersistentVolumeClaim (if the cluster was deployed with persistent storage). After deletion, the Pod and its related PersistentVolumeClaim are recreated automatically.

Warning Deleting a PersistentVolumeClaim can cause permanent data loss and the availability of your cluster cannot be guaranteed. The following procedure should only be performed if you have encountered storage issues.

Prerequisites

A running Cluster Operator

Procedure

Find the name of the Pod that you want to delete. ZooKeeper pods are named <cluster_name>-zookeeper-<index_number> , where <index_number> starts at zero and ends at the total number of replicas minus one. For example, my-cluster-zookeeper-0 .

Use oc annotate to annotate the Pod resource in OpenShift:

oc annotate pod <cluster_name>-zookeeper-<index_number> strimzi.io/delete-pod-and-pvc="true"

Wait for the next reconciliation, when the annotated pod and its underlying persistent volume claim are deleted and then recreated.

10.4. Configuring node pools

Update the spec properties of the KafkaNodePool custom resource to configure a node pool deployment. A node pool refers to a distinct group of Kafka nodes within a Kafka cluster. Each pool has its own unique configuration, which includes mandatory settings for the number of replicas, roles, and storage allocation. Optionally, you can also specify values for the following properties:

resources to specify memory and cpu requests and limits
template to specify custom configuration for pods and other OpenShift resources
jvmOptions to specify custom JVM configuration for heap size, runtime and other options

The relationship between Kafka and KafkaNodePool resources is as follows:

Kafka resources represent the configuration for all nodes in a Kafka cluster.
KafkaNodePool resources represent the configuration for nodes only in the node pool.

If a configuration property is not specified in KafkaNodePool , it is inherited from the Kafka resource. Configuration specified in the KafkaNodePool resource takes precedence if set in both resources. For example, if both the node pool and Kafka configuration include jvmOptions , the values specified in the node pool configuration are used. When -Xmx: 1024m is set in KafkaNodePool.spec.jvmOptions and -Xms: 512m is set in Kafka.spec.kafka.jvmOptions , the node uses the value from its node pool configuration. Properties from the Kafka and KafkaNodePool schemas are not combined. To clarify, if KafkaNodePool.spec.template includes only podSet.metadata.labels , and Kafka.spec.kafka.template includes podSet.metadata.annotations and pod.metadata.labels , the template values from the Kafka configuration are ignored because there is a template value in the node pool configuration, as illustrated in the sketch below. For a deeper understanding of the node pool configuration options, refer to the Streams for Apache Kafka Custom Resource API Reference.
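A minimal sketch of the jvmOptions precedence rule described above; the values are placeholders. Because the node pool defines its own jvmOptions, the Kafka-level jvmOptions is ignored entirely for nodes in that pool rather than merged:

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    jvmOptions:
      -Xms: 512m       # not inherited by pool-a, which sets its own jvmOptions
    # ...
---
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: pool-a
  labels:
    strimzi.io/cluster: my-cluster
spec:
  replicas: 3
  roles:
    - broker
  storage:
    type: ephemeral
  jvmOptions:
    -Xmx: 1024m        # takes precedence; properties are not merged across the two resources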
Example configuration for a node pool in a cluster using KRaft mode

# Basic configuration (required)
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: kraft-dual-role 1
  labels:
    strimzi.io/cluster: my-cluster 2
# Node pool specifications
spec:
  # Replicas (required)
  replicas: 3 3
  # Roles (required)
  roles: 4
    - controller
    - broker
  # Storage configuration (required)
  storage: 5
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        deleteClaim: false
  # Resources requests and limits (recommended)
  resources: 6
    requests:
      memory: 64Gi
      cpu: "8"
    limits:
      memory: 64Gi
      cpu: "12"

1 Unique name for the node pool.
2 The Kafka cluster the node pool belongs to. A node pool can only belong to a single cluster.
3 Number of replicas for the nodes.
4 Roles for the nodes in the node pool. In this example, the nodes have dual roles as controllers and brokers.
5 Storage specification for the nodes.
6 Requests for reservation of supported resources, currently cpu and memory , and limits to specify the maximum resources that can be consumed.

Note The configuration for the Kafka resource must be suitable for KRaft mode. Currently, KRaft mode has a number of limitations.

Example configuration for a node pool in a cluster using ZooKeeper

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: pool-a
  labels:
    strimzi.io/cluster: my-cluster
spec:
  replicas: 3
  roles:
    - broker 1
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        deleteClaim: false
  resources:
    requests:
      memory: 64Gi
      cpu: "8"
    limits:
      memory: 64Gi
      cpu: "12"

1 Roles for the nodes in the node pool, which can only be broker when using Kafka with ZooKeeper.

10.4.1. Assigning IDs to node pools for scaling operations

This procedure describes how to use annotations for advanced node ID handling by the Cluster Operator when performing scaling operations on node pools. You specify the node IDs to use, rather than the Cluster Operator using the next ID in sequence. Management of node IDs in this way gives greater control.

To add a range of IDs, you assign the following annotations to the KafkaNodePool resource:

strimzi.io/next-node-ids to add a range of IDs that are used for new brokers
strimzi.io/remove-node-ids to add a range of IDs for removing existing brokers

You can specify an array of individual node IDs, ID ranges, or a combination of both. For example, you can specify the following range of IDs: [0, 1, 2, 10-20, 30] for scaling up the Kafka node pool. This format allows you to specify a combination of individual node IDs ( 0 , 1 , 2 , 30 ) as well as a range of IDs ( 10-20 ). In a typical scenario, you might specify a range of IDs for scaling up and a single node ID to remove a specific node when scaling down.

In this procedure, we add the scaling annotations to node pools as follows:

pool-a is assigned a range of IDs for scaling up
pool-b is assigned a range of IDs for scaling down

During the scaling operation, IDs are used as follows:

Scale up picks up the lowest available ID in the range for the new node.
Scale down removes the node with the highest available ID in the range.

If there are gaps in the sequence of node IDs assigned in the node pool, the node to be added is assigned an ID that fills the gap. The annotations don't need to be updated after every scaling operation. Any unused IDs are still valid for the next scaling event. The Cluster Operator allows you to specify a range of IDs in either ascending or descending order, so you can define them in the order the nodes are scaled.
For example, when scaling up, you can specify a range such as [1000-1999] , and the new nodes are assigned the lowest IDs: 1000 , 1001 , 1002 , 1003 , and so on. Conversely, when scaling down, you can specify a range like [1999-1000] , ensuring that nodes with the highest IDs are removed: 1003 , 1002 , 1001 , 1000 , and so on.

If you don't specify an ID range using the annotations, the Cluster Operator follows its default behavior for handling IDs during scaling operations. Node IDs start at 0 (zero) and run sequentially across the Kafka cluster. The lowest ID is assigned to a new node. Gaps in node IDs are filled across the cluster. This means that they might not run sequentially within a node pool. The default behavior for scaling up is to add the lowest available node ID across the cluster; and for scaling down, it is to remove the node in the node pool with the highest available node ID. The default approach is also applied if the assigned range of IDs is misformatted, the scaling up range runs out of IDs, or the scaling down range does not apply to any in-use nodes.

Prerequisites

The Cluster Operator must be deployed.

(Optional) Use the reserved.broker.max.id configuration property to extend the allowable range for node IDs within your node pools. By default, Apache Kafka restricts node IDs to numbers ranging from 0 to 999. To use node ID values greater than 999, add the reserved.broker.max.id configuration property to the Kafka custom resource and specify the required maximum node ID value. In this example, the maximum node ID is set at 10000. Node IDs can then be assigned up to that value.

Example configuration for the maximum node ID number

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    config:
      reserved.broker.max.id: 10000
    # ...

Procedure

Annotate the node pool with the IDs to use when scaling up or scaling down, as shown in the following examples.

IDs for scaling up are assigned to node pool pool-a :

Assigning IDs for scaling up

oc annotate kafkanodepool pool-a strimzi.io/next-node-ids="[0,1,2,10-20,30]"

The lowest available ID from this range is used when adding a node to pool-a .

IDs for scaling down are assigned to node pool pool-b :

Assigning IDs for scaling down

oc annotate kafkanodepool pool-b strimzi.io/remove-node-ids="[60-50,9,8,7]"

The highest available ID from this range is removed when scaling down pool-b .

Note If you want to remove a specific node, you can assign a single node ID to the scaling down annotation: oc annotate kafkanodepool pool-b strimzi.io/remove-node-ids="[3]" .

You can now scale the node pool. For more information, see the following:

Section 10.4.3, "Adding nodes to a node pool"
Section 10.4.4, "Removing nodes from a node pool"
Section 10.4.5, "Moving nodes between node pools"

On reconciliation, a warning is given if the annotations are misformatted. After you have performed the scaling operation, you can remove the annotation if it's no longer needed.

Removing the annotation for scaling up

oc annotate kafkanodepool pool-a strimzi.io/next-node-ids-

Removing the annotation for scaling down

oc annotate kafkanodepool pool-b strimzi.io/remove-node-ids-

10.4.2. Impact on racks when moving nodes from node pools

If rack awareness is enabled on a Kafka cluster, replicas can be spread across different racks, data centers, or availability zones. When moving nodes from node pools, consider the implications on the cluster topology, particularly regarding rack awareness.
Removing specific pods from node pools, especially out of order, may break the cluster topology or cause an imbalance in distribution across racks. An imbalance can impact both the distribution of nodes themselves and the partition replicas within the cluster. An uneven distribution of nodes and partitions across racks can affect the performance and resilience of the Kafka cluster. Plan the removal of nodes strategically to maintain the required balance and resilience across racks. Use the strimzi.io/remove-node-ids annotation to move nodes with specific IDs with caution. Ensure that configuration to spread partition replicas across racks and for clients to consume from the closest replicas is not broken. Tip Use Cruise Control and the KafkaRebalance resource with the RackAwareGoal to make sure that replicas remain distributed across different racks. 10.4.3. Adding nodes to a node pool This procedure describes how to scale up a node pool to add new nodes. Currently, scale up is only possible for broker-only node pools containing nodes that run as dedicated brokers. In this procedure, we start with three nodes for node pool pool-a : Kafka nodes in the node pool NAME READY STATUS RESTARTS my-cluster-pool-a-0 1/1 Running 0 my-cluster-pool-a-1 1/1 Running 0 my-cluster-pool-a-2 1/1 Running 0 Node IDs are appended to the name of the node on creation. We add node my-cluster-pool-a-3 , which has a node ID of 3 . Note During this process, the ID of the node that holds the partition replicas changes. Consider any dependencies that reference the node ID. Prerequisites The Cluster Operator must be deployed. Cruise Control is deployed with Kafka. (Optional) Auto-rebalancing is enabled . If auto-rebalancing is enabled, partition reassignment happens automatically during the node scaling process, so you don't need to manually initiate the reassignment through Cruise Control. (Optional) For scale up operations, you can specify the node IDs to use in the operation . If you have assigned a range of node IDs for the operation, the ID of the node being added is determined by the sequence of nodes given. If you have assigned a single node ID, a node is added with the specified ID. Otherwise, the lowest available node ID across the cluster is used. Procedure Create a new node in the node pool. For example, node pool pool-a has three replicas. We add a node by increasing the number of replicas: oc scale kafkanodepool pool-a --replicas=4 Check the status of the deployment and wait for the pods in the node pool to be created and ready ( 1/1 ). oc get pods -n <my_cluster_operator_namespace> Output shows four Kafka nodes in the node pool NAME READY STATUS RESTARTS my-cluster-pool-a-0 1/1 Running 0 my-cluster-pool-a-1 1/1 Running 0 my-cluster-pool-a-2 1/1 Running 0 my-cluster-pool-a-3 1/1 Running 0 Reassign the partitions after increasing the number of nodes in the node pool. If auto-rebalancing is enabled, partitions are reassigned to new nodes automatically, so you can skip this step. If auto-rebalancing is not enabled, use the Cruise Control add-brokers mode to move partition replicas from existing brokers to the newly added brokers. Using Cruise Control to reassign partition replicas apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: # ... spec: mode: add-brokers brokers: [3] We are reassigning partitions to node my-cluster-pool-a-3 . The reassignment can take some time depending on the number of topics and partitions in the cluster. 10.4.4. 
Removing nodes from a node pool

This procedure describes how to scale down a node pool to remove nodes. Currently, scale down is only possible for broker-only node pools containing nodes that run as dedicated brokers. In this procedure, we start with four nodes for node pool pool-a :

Kafka nodes in the node pool

NAME                 READY  STATUS   RESTARTS
my-cluster-pool-a-0  1/1    Running  0
my-cluster-pool-a-1  1/1    Running  0
my-cluster-pool-a-2  1/1    Running  0
my-cluster-pool-a-3  1/1    Running  0

Node IDs are appended to the name of the node on creation. We remove node my-cluster-pool-a-3 , which has a node ID of 3 .

Note During this process, the ID of the node that holds the partition replicas changes. Consider any dependencies that reference the node ID.

Prerequisites

The Cluster Operator must be deployed.
Cruise Control is deployed with Kafka.
(Optional) Auto-rebalancing is enabled. If auto-rebalancing is enabled, partition reassignment happens automatically during the node scaling process, so you don't need to manually initiate the reassignment through Cruise Control.
(Optional) For scale down operations, you can specify the node IDs to use in the operation. If you have assigned a range of node IDs for the operation, the ID of the node being removed is determined by the sequence of nodes given. If you have assigned a single node ID, the node with the specified ID is removed. Otherwise, the node with the highest available ID in the node pool is removed.

Procedure

Reassign the partitions before decreasing the number of nodes in the node pool. If auto-rebalancing is enabled, partitions are moved off brokers that are going to be removed automatically, so you can skip this step. If auto-rebalancing is not enabled, use the Cruise Control remove-brokers mode to move partition replicas off the brokers that are going to be removed.

Using Cruise Control to reassign partition replicas

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaRebalance
metadata:
  # ...
spec:
  mode: remove-brokers
  brokers: [3]

We are reassigning partitions from node my-cluster-pool-a-3 . The reassignment can take some time depending on the number of topics and partitions in the cluster.

After the reassignment process is complete, and the node being removed has no live partitions, reduce the number of Kafka nodes in the node pool. For example, node pool pool-a has four replicas. We remove a node by decreasing the number of replicas:

oc scale kafkanodepool pool-a --replicas=3

Output shows three Kafka nodes in the node pool

NAME                 READY  STATUS   RESTARTS
my-cluster-pool-a-0  1/1    Running  0
my-cluster-pool-a-1  1/1    Running  0
my-cluster-pool-a-2  1/1    Running  0
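If you want to verify that the broker no longer hosts any partition replicas before reducing the replica count, you can inspect the topic descriptions. A minimal sketch, assuming a plain (non-TLS) internal listener on port 9092 and the standard /opt/kafka tools path in the broker image; the grep pattern is a rough heuristic:

# Hypothetical check: list topic partitions that still have a replica on broker 3
oc exec -n <namespace> my-cluster-pool-a-0 -- /opt/kafka/bin/kafka-topics.sh \
  --bootstrap-server localhost:9092 --describe \
  | grep -E 'Replicas: [0-9,]*\b3\b' || echo "No replicas remain on broker 3"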
10.4.5. Moving nodes between node pools

This procedure describes how to move nodes between source and target Kafka node pools without downtime. You create a new node on the target node pool and reassign partitions to move data from the old node on the source node pool. When the replicas on the new node are in-sync, you can delete the old node.

In this procedure, we start with two node pools:

pool-a with three replicas is the target node pool
pool-b with four replicas is the source node pool

We scale up pool-a , and reassign partitions and scale down pool-b , which results in the following:

pool-a with four replicas
pool-b with three replicas

Currently, scaling is only possible for broker-only node pools containing nodes that run as dedicated brokers.

Note During this process, the ID of the node that holds the partition replicas changes. Consider any dependencies that reference the node ID.

Prerequisites

The Cluster Operator must be deployed.
Cruise Control is deployed with Kafka.
(Optional) Auto-rebalancing is enabled. If auto-rebalancing is enabled, partition reassignment happens automatically during the node scaling process, so you don't need to manually initiate the reassignment through Cruise Control.
(Optional) For scale up and scale down operations, you can specify the range of node IDs to use. If you have assigned node IDs for the operation, the ID of the node being added or removed is determined by the sequence of nodes given. Otherwise, the lowest available node ID across the cluster is used when adding nodes; and the node with the highest available ID in the node pool is removed.

Procedure

Create a new node in the target node pool. For example, node pool pool-a has three replicas. We add a node by increasing the number of replicas:

oc scale kafkanodepool pool-a --replicas=4

Check the status of the deployment and wait for the pods in the node pool to be created and ready ( 1/1 ).

oc get pods -n <my_cluster_operator_namespace>

Output shows four Kafka nodes in the source and target node pools

NAME                 READY  STATUS   RESTARTS
my-cluster-pool-a-0  1/1    Running  0
my-cluster-pool-a-1  1/1    Running  0
my-cluster-pool-a-4  1/1    Running  0
my-cluster-pool-a-7  1/1    Running  0
my-cluster-pool-b-2  1/1    Running  0
my-cluster-pool-b-3  1/1    Running  0
my-cluster-pool-b-5  1/1    Running  0
my-cluster-pool-b-6  1/1    Running  0

Node IDs are appended to the name of the node on creation. We add node my-cluster-pool-a-7 , which has a node ID of 7 .

If auto-rebalancing is enabled, partitions are reassigned to new nodes and moved off brokers that are going to be removed automatically, so you can skip the next step. If auto-rebalancing is not enabled, reassign partitions before decreasing the number of nodes in the source node pool. Use the Cruise Control remove-brokers mode to move partition replicas off the brokers that are going to be removed.

Using Cruise Control to reassign partition replicas

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaRebalance
metadata:
  # ...
spec:
  mode: remove-brokers
  brokers: [6]

We are reassigning partitions from node my-cluster-pool-b-6 . The reassignment can take some time depending on the number of topics and partitions in the cluster.

After the reassignment process is complete, reduce the number of Kafka nodes in the source node pool. For example, node pool pool-b has four replicas. We remove a node by decreasing the number of replicas:

oc scale kafkanodepool pool-b --replicas=3

The node with the highest ID ( 6 ) within the pool is removed.

Output shows three Kafka nodes in the source node pool

NAME                 READY  STATUS   RESTARTS
my-cluster-pool-b-2  1/1    Running  0
my-cluster-pool-b-3  1/1    Running  0
my-cluster-pool-b-5  1/1    Running  0

10.4.6. Changing node pool roles

Node pools can be used with Kafka clusters that operate in KRaft mode (using Kafka Raft metadata) or use ZooKeeper for metadata management. If you are using KRaft mode, you can specify roles for all nodes in the node pool to operate as brokers, controllers, or both. If you are using ZooKeeper, nodes must be set as brokers only. In certain circumstances you might want to change the roles assigned to a node pool. For example, you may have a node pool that contains nodes that perform dual broker and controller roles, and then decide to split the roles between two node pools.
In this case, you create a new node pool with nodes that act only as brokers, and then reassign partitions from the dual-role nodes to the new brokers. You can then switch the old node pool to a controller-only role. You can also perform the reverse operation by moving from node pools with controller-only and broker-only roles to a node pool that contains nodes that perform dual broker and controller roles. In this case, you add the broker role to the existing controller-only node pool, reassign partitions from the broker-only nodes to the dual-role nodes, and then delete the broker-only node pool. When removing broker roles in the node pool configuration, keep in mind that Kafka does not automatically reassign partitions. Before removing the broker role, ensure that nodes changing to controller-only roles do not have any assigned partitions. If partitions are assigned, the change is prevented. No replicas must be left on the node before removing the broker role. The best way to reassign partitions before changing roles is to apply a Cruise Control optimization proposal in remove-brokers mode. For more information, see Section 21.3, "Generating optimization proposals" . 10.4.7. Transitioning to separate broker and controller roles This procedure describes how to transition to using node pools with separate roles. If your Kafka cluster is using a node pool with combined controller and broker roles, you can transition to using two node pools with separate roles. To do this, rebalance the cluster to move partition replicas to a node pool with a broker-only role, and then switch the old node pool to a controller-only role. In this procedure, we start with node pool pool-a , which has controller and broker roles: Dual-role node pool apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: pool-a labels: strimzi.io/cluster: my-cluster spec: replicas: 3 roles: - controller - broker storage: type: jbod volumes: - id: 0 type: persistent-claim size: 20Gi deleteClaim: false # ... The node pool has three nodes: Kafka nodes in the node pool NAME READY STATUS RESTARTS my-cluster-pool-a-0 1/1 Running 0 my-cluster-pool-a-1 1/1 Running 0 my-cluster-pool-a-2 1/1 Running 0 Each node performs a combined role of broker and controller. We create a second node pool called pool-b , with three nodes that act as brokers only. Note During this process, the ID of the node that holds the partition replicas changes. Consider any dependencies that reference the node ID. Prerequisites The Cluster Operator must be deployed. Cruise Control is deployed with Kafka. Procedure Create a node pool with a broker role. Example node pool configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: pool-b labels: strimzi.io/cluster: my-cluster spec: replicas: 3 roles: - broker storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false # ... The new node pool also has three nodes. If you already have a broker-only node pool, you can skip this step. Apply the new KafkaNodePool resource to create the brokers. Check the status of the deployment and wait for the pods in the node pool to be created and ready ( 1/1 ). 
oc get pods -n <my_cluster_operator_namespace> Output shows pods running in two node pools NAME READY STATUS RESTARTS my-cluster-pool-a-0 1/1 Running 0 my-cluster-pool-a-1 1/1 Running 0 my-cluster-pool-a-2 1/1 Running 0 my-cluster-pool-b-3 1/1 Running 0 my-cluster-pool-b-4 1/1 Running 0 my-cluster-pool-b-5 1/1 Running 0 Node IDs are appended to the name of the node on creation. Use the Cruise Control remove-brokers mode to reassign partition replicas from the dual-role nodes to the newly added brokers. Using Cruise Control to reassign partition replicas apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: # ... spec: mode: remove-brokers brokers: [0, 1, 2] The reassignment can take some time depending on the number of topics and partitions in the cluster. Note If nodes changing to controller-only roles have any assigned partitions, the change is prevented. The status.conditions of the Kafka resource provide details of events preventing the change. Remove the broker role from the node pool that originally had a combined role. Dual-role nodes switched to controllers apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: pool-a labels: strimzi.io/cluster: my-cluster spec: replicas: 3 roles: - controller storage: type: jbod volumes: - id: 0 type: persistent-claim size: 20Gi deleteClaim: false # ... Apply the configuration change so that the node pool switches to a controller-only role. 10.4.8. Transitioning to dual-role nodes This procedure describes how to transition from separate node pools with broker-only and controller-only roles to using a dual-role node pool. If your Kafka cluster is using node pools with dedicated controller and broker nodes, you can transition to using a single node pool with both roles. To do this, add the broker role to the controller-only node pool, rebalance the cluster to move partition replicas to the dual-role node pool, and then delete the old broker-only node pool. In this procedure, we start with two node pools pool-a , which has only the controller role and pool-b which has only the broker role: Single role node pools apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: pool-a labels: strimzi.io/cluster: my-cluster spec: replicas: 3 roles: - controller storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false # ... --- apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: pool-b labels: strimzi.io/cluster: my-cluster spec: replicas: 3 roles: - broker storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false # ... The Kafka cluster has six nodes: Kafka nodes in the node pools NAME READY STATUS RESTARTS my-cluster-pool-a-0 1/1 Running 0 my-cluster-pool-a-1 1/1 Running 0 my-cluster-pool-a-2 1/1 Running 0 my-cluster-pool-b-3 1/1 Running 0 my-cluster-pool-b-4 1/1 Running 0 my-cluster-pool-b-5 1/1 Running 0 The pool-a nodes perform the role of controller. The pool-b nodes perform the role of broker. Note During this process, the ID of the node that holds the partition replicas changes. Consider any dependencies that reference the node ID. Prerequisites The Cluster Operator must be deployed. Cruise Control is deployed with Kafka. Procedure Edit the node pool pool-a and add the broker role to it. 
Example node pool configuration

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: pool-a
  labels:
    strimzi.io/cluster: my-cluster
spec:
  replicas: 3
  roles:
    - controller
    - broker
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        deleteClaim: false
  # ...

Check the status and wait for the pods in the node pool to be restarted and ready ( 1/1 ).

oc get pods -n <my_cluster_operator_namespace>

Output shows pods running in two node pools

NAME                 READY  STATUS   RESTARTS
my-cluster-pool-a-0  1/1    Running  0
my-cluster-pool-a-1  1/1    Running  0
my-cluster-pool-a-2  1/1    Running  0
my-cluster-pool-b-3  1/1    Running  0
my-cluster-pool-b-4  1/1    Running  0
my-cluster-pool-b-5  1/1    Running  0

Node IDs are appended to the name of the node on creation.

Use the Cruise Control remove-brokers mode to reassign partition replicas from the broker-only nodes to the dual-role nodes.

Using Cruise Control to reassign partition replicas

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaRebalance
metadata:
  # ...
spec:
  mode: remove-brokers
  brokers: [3, 4, 5]

The reassignment can take some time depending on the number of topics and partitions in the cluster.

Remove the pool-b node pool that has the old broker-only nodes.

oc delete kafkanodepool pool-b -n <my_cluster_operator_namespace>

10.4.9. Migrating existing Kafka clusters to use Kafka node pools

This procedure describes how to migrate existing Kafka clusters to use Kafka node pools. After you have updated the Kafka cluster, you can use the node pools to manage the configuration of nodes within each pool.

Note Currently, replica and storage configuration in the KafkaNodePool resource must also be present in the Kafka resource. The configuration is ignored when node pools are being used.

Prerequisites

The Cluster Operator must be deployed.

Procedure

Create a new KafkaNodePool resource.

Name the resource kafka .
Point a strimzi.io/cluster label to your existing Kafka resource.
Set the replica count and storage configuration to match your current Kafka cluster.
Set the roles to broker .

Example configuration for a node pool used in migrating a Kafka cluster

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: kafka
  labels:
    strimzi.io/cluster: my-cluster
spec:
  replicas: 3
  roles:
    - broker
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        deleteClaim: false

Warning To preserve cluster data and the names of its nodes and resources, the node pool name must be kafka , and the strimzi.io/cluster label must match the Kafka resource name. Otherwise, nodes and resources are created with new names, including the persistent volume storage used by the nodes. Consequently, your data may not be available.

Apply the KafkaNodePool resource:

oc apply -f <node_pool_configuration_file>

By applying this resource, you switch Kafka to using node pools. There is no change or rolling update and resources are identical to how they were before.

Enable support for node pools in the Kafka resource using the strimzi.io/node-pools: enabled annotation.

Example Kafka configuration with node pools enabled

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
  annotations:
    strimzi.io/node-pools: enabled
spec:
  kafka:
    # ...
  zookeeper:
    # ...

Apply the Kafka resource:

oc apply -f <kafka_configuration_file>

There is no change or rolling update. The resources remain identical to how they were before.
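To confirm that enabling node pools did not trigger a rolling update, you can check that pod ages and restart counts are unchanged after applying the annotation. A sketch; the label selector is an assumption based on the labels Streams for Apache Kafka applies to cluster pods:

oc get pods -n <my_cluster_operator_namespace> -l strimzi.io/cluster=my-cluster
# The AGE and RESTARTS columns should be unchanged after the annotation is applied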
Remove the replicated properties from the Kafka custom resource. When the KafkaNodePool resource is in use, you can remove the properties that you copied to the KafkaNodePool resource, such as the .spec.kafka.replicas and .spec.kafka.storage properties.

Reversing the migration

To revert to managing Kafka nodes using only Kafka custom resources:

If you have multiple node pools, consolidate them into a single KafkaNodePool named kafka with node IDs from 0 to N (where N is the number of replicas).
Ensure that the .spec.kafka configuration in the Kafka resource matches the KafkaNodePool configuration, including storage, resources, and replicas.
Disable support for node pools in the Kafka resource using the strimzi.io/node-pools: disabled annotation.
Delete the Kafka node pool named kafka .

10.5. Configuring Kafka storage

Streams for Apache Kafka supports different Kafka storage options. You can choose between the following basic types:

Ephemeral storage

Ephemeral storage is temporary and only persists while a pod is running. When a pod is deleted, the data is lost, though data can be recovered from replicas in a highly available environment. Due to its transient nature, ephemeral storage is only recommended for development and testing environments.

Persistent storage

Persistent storage retains data across pod restarts and system disruptions, making it ideal for production environments.

JBOD storage (multiple volumes)

JBOD (Just a Bunch of Disks) storage allows you to configure your Kafka cluster to use multiple disks or volumes as ephemeral or persistent storage. When specifying JBOD storage, you must still decide between using ephemeral or persistent volumes for each disk. Even if you start with only one volume, JBOD allows for future scaling by adding more volumes as needed, which is why using JBOD is recommended even for single-volume deployments.

Note Persistent, ephemeral, and JBOD storage types cannot be changed after a Kafka cluster is deployed. However, you can add or remove volumes of different types from the JBOD storage. You can also create and migrate to node pools with new storage specifications.

Tiered storage (advanced)

Tiered storage, currently available as an early access feature, provides additional flexibility for managing Kafka data by combining different storage types with varying performance and cost characteristics. It allows Kafka to offload older data to cheaper, long-term storage (such as object storage) while keeping recent, frequently accessed data on faster, more expensive storage (such as block storage). Tiered storage is an add-on capability. After configuring storage (ephemeral, persistent, or JBOD) for Kafka nodes, you can configure tiered storage at the cluster level and enable it for specific topics using the remote.storage.enable topic-level configuration.

10.5.1. Storage considerations

Efficient data storage is essential for Streams for Apache Kafka to operate effectively, and block storage is strongly recommended. Streams for Apache Kafka has been tested only with block storage, and file storage solutions like NFS are not guaranteed to work. Common block storage types supported by OpenShift include:

Cloud-based block storage solutions: Amazon EBS (for AWS), Azure Disk Storage (for Microsoft Azure), and Persistent Disk (for Google Cloud)
Persistent storage (for bare metal deployments) using local persistent volumes
Storage Area Network (SAN) volumes accessed by protocols like Fibre Channel or iSCSI

Note Streams for Apache Kafka does not require OpenShift raw block volumes.

10.5.1.1. File systems

Kafka uses a file system for storing messages.
Streams for Apache Kafka is compatible with the XFS and ext4 file systems, which are commonly used with Kafka. Consider the underlying architecture and requirements of your deployment when choosing and setting up your file system. For more information, refer to Filesystem Selection in the Kafka documentation.

10.5.1.2. Disk usage

Solid-state drives (SSDs), though not essential, can improve the performance of Kafka in large clusters where data is sent to and received from multiple topics asynchronously.

Note
Replicated storage is not required, as Kafka provides built-in data replication.

10.5.2. Configuring Kafka storage in KRaft mode

Use the storage properties of the KafkaNodePool custom resource to configure storage for a deployment of Kafka in KRaft mode.

10.5.2.1. Configuring ephemeral storage

To use ephemeral storage, specify ephemeral as the storage type.

Example configuration for ephemeral storage

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: my-node-pool
  labels:
    strimzi.io/cluster: my-cluster
spec:
  replicas: 3
  roles:
    - broker
  storage:
    type: ephemeral
  # ...

Ephemeral storage uses emptyDir volumes, which are created when a pod is assigned to a node. You can limit the size of the emptyDir volume with the sizeLimit property. The ephemeral volume used by Kafka brokers for log directories is mounted at /var/lib/kafka/data/kafka-log<pod_id>.

Important
Ephemeral storage is not suitable for Kafka topics with a replication factor of 1.

For more information on ephemeral storage configuration options, see the EphemeralStorage schema reference.

10.5.2.2. Configuring persistent storage

To use persistent storage, specify one of the following as the storage type:

persistent-claim for a single persistent volume
jbod for multiple persistent volumes in a Kafka cluster (Recommended for Kafka in a production environment)

Example configuration for persistent storage

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: my-node-pool
  labels:
    strimzi.io/cluster: my-cluster
spec:
  replicas: 3
  roles:
    - broker
  storage:
    type: persistent-claim
    size: 500Gi
    deleteClaim: true
  # ...

Streams for Apache Kafka uses Persistent Volume Claims (PVCs) to request storage on persistent volumes (PVs). The PVC binds to a PV that meets the requested storage criteria, without needing to know the underlying storage infrastructure. PVCs created for Kafka pods follow the naming convention data-<kafka_cluster_name>-<pool_name>-<pod_id>, and the persistent volumes for Kafka logs are mounted at /var/lib/kafka/data/kafka-log<pod_id>.

You can also specify custom storage classes (StorageClass) and volume selectors in the storage configuration.

Example class and selector configuration

# ...
storage:
  type: persistent-claim
  size: 500Gi
  class: my-storage-class
  selector:
    hdd-type: ssd
  deleteClaim: true
# ...

Storage classes define storage profiles and dynamically provision persistent volumes (PVs) based on those profiles. This is useful, for example, when storage classes are restricted to different availability zones or data centers. If a storage class is not specified, the default storage class in the OpenShift cluster is used. Selectors specify persistent volumes that offer specific features, such as solid-state drive (SSD) volumes.

For more information on persistent storage configuration options, see the PersistentClaimStorage schema reference.
10.5.2.3. Resizing persistent volumes

Persistent volumes can be resized by changing the size storage property without any risk of data loss, as long as the storage infrastructure supports it. Following a configuration update to change the size of the storage, Streams for Apache Kafka instructs the storage infrastructure to make the change. Storage expansion is supported in Streams for Apache Kafka clusters that use persistent-claim volumes. Decreasing the size of persistent volumes is not supported in OpenShift. For more information about resizing persistent volumes in OpenShift, see Resizing Persistent Volumes using Kubernetes.

After increasing the value of the size property, OpenShift increases the capacity of the selected persistent volumes in response to a request from the Cluster Operator. When the resizing is complete, the Cluster Operator restarts all pods that use the resized persistent volumes. This happens automatically. In this example, the volumes are increased to 2000Gi.

Kafka configuration to increase volume size to 2000Gi

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: my-node-pool
  labels:
    strimzi.io/cluster: my-cluster
spec:
  replicas: 3
  roles:
    - broker
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 2000Gi
        deleteClaim: false
      - id: 1
        type: persistent-claim
        size: 2000Gi
        deleteClaim: false
      - id: 2
        type: persistent-claim
        size: 2000Gi
        deleteClaim: false
  # ...

Returning information on the PVs verifies the changes:

oc get pv

Storage capacity of PVs

NAME             CAPACITY  CLAIM
pvc-0ca459ce-... 2000Gi    my-project/data-my-cluster-my-node-pool-2
pvc-6e1810be-... 2000Gi    my-project/data-my-cluster-my-node-pool-0
pvc-82dc78c9-... 2000Gi    my-project/data-my-cluster-my-node-pool-1

The output shows the names of each PVC associated with a broker pod.

Note
Storage reduction is only possible when using multiple disks per broker. You can remove a disk after moving all partitions on the disk to other volumes within the same broker (intra-broker) or to other brokers within the same cluster (intra-cluster).

10.5.2.4. Configuring JBOD storage

To use JBOD storage, specify jbod as the storage type and add configuration for the JBOD volumes. JBOD volumes can be persistent or ephemeral, with the configuration options and constraints applicable to each type.

Example configuration for JBOD storage

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: my-node-pool
  labels:
    strimzi.io/cluster: my-cluster
spec:
  replicas: 3
  roles:
    - broker
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        deleteClaim: false
      - id: 1
        type: persistent-claim
        size: 100Gi
        deleteClaim: false
      - id: 2
        type: persistent-claim
        size: 100Gi
        deleteClaim: false
  # ...

PVCs are created for the JBOD volumes using the naming convention data-<volume_id>-<kafka_cluster_name>-<pool_name>-<pod_id>, and the JBOD volumes used for log directories are mounted at /var/lib/kafka/data-<volume_id>/kafka-log<pod_id>.

10.5.2.5. Adding or removing volumes from JBOD storage

Volume IDs cannot be changed once JBOD volumes are created, though you can add or remove volumes. When adding a new volume to the volumes array under an id that was used and removed in the past, make sure that the previously used PersistentVolumeClaims have been deleted. Use Cruise Control to reassign partitions when adding or removing volumes. For information on intra-broker disk balancing, see Section 21.1.3, "Tuning options for rebalances".
10.5.2.6. Configuring KRaft metadata log storage

In KRaft mode, each node (including brokers and controllers) stores a copy of the Kafka cluster's metadata log on one of its data volumes. By default, the log is stored on the volume with the lowest ID, but you can specify a different volume using the kraftMetadata property.

For controller-only nodes, storage is exclusively for the metadata log. Since the log is always stored on a single volume, using JBOD storage with multiple volumes does not improve performance or increase available disk space.

In contrast, broker nodes or nodes that combine broker and controller roles can share the same volume for both the metadata log and partition replica data, optimizing disk utilization. They can also use JBOD storage, where one volume is shared for the metadata log and partition replica data, while additional volumes are used solely for partition replica data.

Changing the volume that stores the metadata log triggers a rolling update of the cluster nodes, involving the deletion of the old log and the creation of a new one in the specified location. If kraftMetadata isn't specified, adding a new volume with a lower ID also prompts an update and relocation of the metadata log.

Example JBOD storage configuration using volume with ID 1 to store the KRaft metadata

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: pool-a
  # ...
spec:
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        deleteClaim: false
      - id: 1
        type: persistent-claim
        size: 100Gi
        kraftMetadata: shared
        deleteClaim: false
  # ...

10.5.2.7. Managing storage using node pools

Storage management in Streams for Apache Kafka is usually straightforward, and requires little change when set up, but there might be situations where you need to modify your storage configurations. Node pools simplify this process, because you can set up separate node pools that specify your new storage requirements.

In this procedure we create and manage storage for a node pool called pool-a containing three nodes. The steps require a scaling operation to add a new node pool. Currently, scaling is only possible for broker-only node pools containing nodes that run as dedicated brokers.

We show how to change the storage class (volumes.class) that defines the type of persistent storage it uses. You can use the same steps to change the storage size (volumes.size). This approach is particularly useful if you want to reduce disk sizes. When increasing disk sizes, you have the option to dynamically resize persistent volumes.

Note
We strongly recommend using block storage. Streams for Apache Kafka is only tested for use with block storage.

Prerequisites

The Cluster Operator must be deployed.
Cruise Control is deployed with Kafka.
For storage that uses persistent volume claims for dynamic volume allocation, storage classes are defined and available in the OpenShift cluster that correspond to the storage solutions you need.

Procedure

Create the node pool with its own storage settings. For example, node pool pool-a uses JBOD storage with persistent volumes:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: pool-a
  labels:
    strimzi.io/cluster: my-cluster
spec:
  roles:
    - broker
  replicas: 3
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 500Gi
        class: gp2-ebs
  # ...

Nodes in pool-a are configured to use Amazon EBS (Elastic Block Store) GP2 volumes.

Apply the node pool configuration for pool-a.
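For example, following the same pattern as the other oc apply commands in this chapter, where the placeholder stands for the file containing the pool-a configuration:

oc apply -f <node_pool_configuration_file> -n <my_cluster_operator_namespace>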
Check the status of the deployment and wait for the pods in pool-a to be created and ready (1/1).

oc get pods -n <my_cluster_operator_namespace>

Output shows three Kafka nodes in the node pool

NAME                 READY  STATUS   RESTARTS
my-cluster-pool-a-0  1/1    Running  0
my-cluster-pool-a-1  1/1    Running  0
my-cluster-pool-a-2  1/1    Running  0

To migrate to a new storage class, create a new node pool with the required storage configuration:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: pool-b
  labels:
    strimzi.io/cluster: my-cluster
spec:
  roles:
    - broker
  replicas: 3
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 1Ti
        class: gp3-ebs
  # ...

Nodes in pool-b are configured to use Amazon EBS (Elastic Block Store) GP3 volumes.

Apply the node pool configuration for pool-b.

Check the status of the deployment and wait for the pods in pool-b to be created and ready.

Reassign the partitions from pool-a to pool-b. When migrating to a new storage configuration, use the Cruise Control remove-brokers mode to move partition replicas off the brokers that are going to be removed.

Using Cruise Control to reassign partition replicas

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaRebalance
metadata:
  # ...
spec:
  mode: remove-brokers
  brokers: [0, 1, 2]

We are reassigning partitions from pool-a. The reassignment can take some time depending on the number of topics and partitions in the cluster.

After the reassignment process is complete, delete the old node pool:

oc delete kafkanodepool pool-a

10.5.2.8. Managing storage affinity using node pools

In situations where storage resources, such as local persistent volumes, are constrained to specific worker nodes or availability zones, configuring storage affinity helps to schedule pods to use the right nodes. Node pools allow you to configure affinity independently. In this procedure, we create and manage storage affinity for two availability zones: zone-1 and zone-2.

You can configure node pools for separate availability zones, but use the same storage class. We define an all-zones persistent storage class representing the storage resources available in each zone. We also use the .spec.template.pod properties to configure the node affinity and schedule Kafka pods on zone-1 and zone-2 worker nodes.

The storage class and affinity are specified in node pools representing the nodes in each availability zone:

pool-zone-1
pool-zone-2

Prerequisites

The Cluster Operator must be deployed.
If you are not familiar with the concepts of affinity, see the Kubernetes node and pod affinity documentation.

Procedure

Define the storage class for use with each availability zone:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: all-zones
provisioner: kubernetes.io/my-storage
parameters:
  type: ssd
volumeBindingMode: WaitForFirstConsumer

Create node pools representing the two availability zones, specifying the all-zones storage class and the affinity for each zone:

Node pool configuration for zone-1

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: pool-zone-1
  labels:
    strimzi.io/cluster: my-cluster
spec:
  replicas: 3
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 500Gi
        class: all-zones
  template:
    pod:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: topology.kubernetes.io/zone
                    operator: In
                    values:
                      - zone-1
  # ...
Node pool configuration for zone-2

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: pool-zone-2
  labels:
    strimzi.io/cluster: my-cluster
spec:
  replicas: 4
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 500Gi
        class: all-zones
  template:
    pod:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: topology.kubernetes.io/zone
                    operator: In
                    values:
                      - zone-2
  # ...

Apply the node pool configuration.

Check the status of the deployment and wait for the pods in the node pools to be created and ready (1/1).

oc get pods -n <my_cluster_operator_namespace>

Output shows 3 Kafka nodes in pool-zone-1 and 4 Kafka nodes in pool-zone-2

NAME                            READY  STATUS   RESTARTS
my-cluster-pool-zone-1-kafka-0  1/1    Running  0
my-cluster-pool-zone-1-kafka-1  1/1    Running  0
my-cluster-pool-zone-1-kafka-2  1/1    Running  0
my-cluster-pool-zone-2-kafka-3  1/1    Running  0
my-cluster-pool-zone-2-kafka-4  1/1    Running  0
my-cluster-pool-zone-2-kafka-5  1/1    Running  0
my-cluster-pool-zone-2-kafka-6  1/1    Running  0

10.5.3. Configuring Kafka storage with ZooKeeper

If you are using ZooKeeper, configure its storage in the Kafka resource. Depending on whether the deployment uses node pools, configure storage for the Kafka cluster in Kafka or KafkaNodePool resources. This section focuses only on ZooKeeper storage and Kafka storage configuration in the Kafka resource.

For detailed information on Kafka storage, refer to the section describing storage configuration using node pools. The same configuration options for storage are available in the Kafka resource.

Note
Replicated storage is not required for ZooKeeper, as it has built-in data replication.

10.5.3.1. Configuring ephemeral storage

To use ephemeral storage, specify ephemeral as the storage type.

Example configuration for ephemeral storage

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    storage:
      type: ephemeral
  zookeeper:
    storage:
      type: ephemeral
  # ...

The ephemeral volume used by Kafka brokers for log directories is mounted at /var/lib/kafka/data/kafka-log<pod_id>.

Important
Ephemeral storage is unsuitable for single-node ZooKeeper clusters or Kafka topics with a replication factor of 1.

10.5.3.2. Configuring persistent storage

The same persistent storage configuration options available for node pools can also be specified for Kafka in the Kafka resource. For more information, see the section on configuring Kafka storage using node pools. The size property can also be adjusted to resize persistent volumes.

The storage type must always be persistent-claim for ZooKeeper, as it does not support JBOD storage.

Example configuration for persistent storage

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    storage:
      type: persistent-claim
      size: 500Gi
      deleteClaim: true
    # ...
  zookeeper:
    storage:
      type: persistent-claim
      size: 1000Gi

PVCs created for Kafka pods when storage is configured in the Kafka resource use the naming convention data-<cluster_name>-kafka-<pod_id>, and the persistent volumes for Kafka logs are mounted at /var/lib/kafka/data/kafka-log<pod_id>. PVCs created for ZooKeeper follow the naming convention data-<cluster_name>-zookeeper-<pod_id>.

Note
As in KRaft mode, you can also specify custom storage classes and volume selectors.

10.5.3.3. Configuring JBOD storage

ZooKeeper does not support JBOD storage, but Kafka nodes in a ZooKeeper-based cluster can still be configured to use JBOD storage.
The same JBOD configuration options available for node pools can also be specified for Kafka in the Kafka resource. For more information, see the section on configuring Kafka storage using node pools. The volumes array can also be adjusted to add or remove volumes.

Example configuration for JBOD storage

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    storage:
      type: jbod
      volumes:
        - id: 0
          type: persistent-claim
          size: 100Gi
          deleteClaim: false
        - id: 1
          type: persistent-claim
          size: 100Gi
          deleteClaim: false
        - id: 2
          type: persistent-claim
          size: 100Gi
          deleteClaim: false
    # ...
  zookeeper:
    storage:
      type: persistent-claim
      size: 1000Gi

10.5.3.4. Migrating from storage class overrides (deprecated)

The use of node pools to change the storage classes used by volumes replaces the deprecated overrides properties previously used for Kafka and ZooKeeper in the Kafka resource.

Example storage configuration with class overrides

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  labels:
    app: my-cluster
  name: my-cluster
  namespace: myproject
spec:
  # ...
  kafka:
    replicas: 3
    storage:
      type: jbod
      volumes:
        - id: 0
          type: persistent-claim
          size: 100Gi
          deleteClaim: false
          class: my-storage-class
          overrides:
            - broker: 0
              class: my-storage-class-zone-1a
            - broker: 1
              class: my-storage-class-zone-1b
            - broker: 2
              class: my-storage-class-zone-1c
    # ...
  # ...
  zookeeper:
    replicas: 3
    storage:
      deleteClaim: true
      size: 100Gi
      type: persistent-claim
      class: my-storage-class
      overrides:
        - broker: 0
          class: my-storage-class-zone-1a
        - broker: 1
          class: my-storage-class-zone-1b
        - broker: 2
          class: my-storage-class-zone-1c
  # ...

If you are using storage class overrides for Kafka, we encourage you to transition to using node pools instead. To migrate the existing configuration, follow these steps:

Make sure you are already using node pool resources. If not, you should migrate the cluster to use node pools first.
Create new node pools with storage configuration using the desired storage class without using the overrides.
Move all partition replicas from the old brokers that use the storage class overrides. You can do this using Cruise Control or using the partition reassignment tool.
Delete the old node pool with the old brokers that used the storage class overrides.

10.5.4. Tiered storage (early access)

Tiered storage introduces a flexible approach to managing Kafka data whereby log segments are moved to a separate storage system. For example, you can combine the use of block storage on brokers for frequently accessed data and offload older or less frequently accessed data from the block storage to more cost-effective, scalable remote storage solutions, such as Amazon S3, without compromising data accessibility and durability.

Warning
Tiered storage is an early access Kafka feature, which is also available in Streams for Apache Kafka. Due to its current limitations, it is not recommended for production environments.

Tiered storage requires an implementation of Kafka's RemoteStorageManager interface to handle communication between Kafka and the remote storage system, which is enabled through configuration of the Kafka resource. Streams for Apache Kafka uses Kafka's TopicBasedRemoteLogMetadataManager for Remote Log Metadata Management (RLMM) when custom tiered storage is enabled. The RLMM manages the metadata related to remote storage.

To use custom tiered storage, do the following:

Include a tiered storage plugin for Kafka in the Streams for Apache Kafka image by building a custom container image.
The plugin must provide the necessary functionality for a Kafka cluster managed by Streams for Apache Kafka to interact with the tiered storage solution.
Configure Kafka for tiered storage using tieredStorage properties in the Kafka resource. Specify the class name and path for the custom RemoteStorageManager implementation, as well as any additional configuration.
If required, specify RLMM-specific tiered storage configuration.

Example custom tiered storage configuration for Kafka

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    tieredStorage:
      type: custom 1
      remoteStorageManager: 2
        className: com.example.kafka.tiered.storage.s3.S3RemoteStorageManager
        classPath: /opt/kafka/plugins/tiered-storage-s3/*
        config:
          storage.bucket.name: my-bucket 3
    # ...
    config:
      rlmm.config.remote.log.metadata.topic.replication.factor: 1 4
  # ...

1 The type must be set to custom.
2 The configuration for the custom RemoteStorageManager implementation, including class name and path.
3 Configuration to pass to the custom RemoteStorageManager implementation, which Streams for Apache Kafka automatically prefixes with rsm.config..
4 Tiered storage configuration to pass to the RLMM, which requires an rlmm.config. prefix.

For more information on tiered storage configuration, see the Apache Kafka documentation.

10.6. Configuring the Entity Operator

Use the entityOperator property in Kafka.spec to configure the Entity Operator. The Entity Operator is responsible for managing Kafka-related entities in a running Kafka cluster. It comprises the following operators:

Topic Operator to manage Kafka topics
User Operator to manage Kafka users

By configuring the Kafka resource, the Cluster Operator can deploy the Entity Operator, including one or both operators. Once deployed, the operators are automatically configured to handle the topics and users of the Kafka cluster. Each operator can only monitor a single namespace. For more information, see Section 1.2.1, "Watching Streams for Apache Kafka resources in OpenShift namespaces".

The entityOperator property supports several sub-properties:

topicOperator
userOperator
template

The template property contains the configuration of the Entity Operator pod, such as labels, annotations, affinity, and tolerations. For more information on configuring templates, see Section 10.20, "Customizing OpenShift resources".

The topicOperator property contains the configuration of the Topic Operator. When this option is missing, the Entity Operator is deployed without the Topic Operator.

The userOperator property contains the configuration of the User Operator. When this option is missing, the Entity Operator is deployed without the User Operator.

For more information on the properties used to configure the Entity Operator, see the EntityOperatorSpec schema reference.

Example of basic configuration enabling both operators

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
  zookeeper:
    # ...
  entityOperator:
    topicOperator: {}
    userOperator: {}

If an empty object ({}) is used for the topicOperator and userOperator, all properties use their default values. When both topicOperator and userOperator properties are missing, the Entity Operator is not deployed.

10.6.1. Configuring the Topic Operator

Use topicOperator properties in Kafka.spec.entityOperator to configure the Topic Operator.
The following properties are supported:

watchedNamespace
The OpenShift namespace in which the Topic Operator watches for KafkaTopic resources. Default is the namespace where the Kafka cluster is deployed.

reconciliationIntervalMs
The interval between periodic reconciliations, in milliseconds. Default 120000.

image
The image property can be used to configure the container image that is used. To learn more, refer to the information provided on configuring the image property.

resources
The resources property configures the amount of resources allocated to the Topic Operator. You can specify requests and limits for memory and cpu resources. The requests should be enough to ensure a stable performance of the operator.

logging
The logging property configures the logging of the Topic Operator. To learn more, refer to the information provided on Topic Operator logging.

Example Topic Operator configuration

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
  zookeeper:
    # ...
  entityOperator:
    # ...
    topicOperator:
      watchedNamespace: my-topic-namespace
      reconciliationIntervalMs: 60000
      resources:
        requests:
          cpu: "1"
          memory: 500Mi
        limits:
          cpu: "1"
          memory: 500Mi
  # ...

10.6.2. Configuring the User Operator

Use userOperator properties in Kafka.spec.entityOperator to configure the User Operator. The following properties are supported:

watchedNamespace
The OpenShift namespace in which the User Operator watches for KafkaUser resources. Default is the namespace where the Kafka cluster is deployed.

reconciliationIntervalMs
The interval between periodic reconciliations, in milliseconds. Default 120000.

image
The image property can be used to configure the container image that will be used. To learn more, refer to the information provided on configuring the image property.

resources
The resources property configures the amount of resources allocated to the User Operator. You can specify requests and limits for memory and cpu resources. The requests should be enough to ensure a stable performance of the operator.

logging
The logging property configures the logging of the User Operator. To learn more, refer to the information provided on User Operator logging.

secretPrefix
The secretPrefix property adds a prefix to the name of all Secrets created from the KafkaUser resource. For example, secretPrefix: kafka- would prefix all Secret names with kafka-. So a KafkaUser named my-user would create a Secret named kafka-my-user.

Example User Operator configuration

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
  zookeeper:
    # ...
  entityOperator:
    # ...
    userOperator:
      watchedNamespace: my-user-namespace
      reconciliationIntervalMs: 60000
      resources:
        requests:
          cpu: "1"
          memory: 500Mi
        limits:
          cpu: "1"
          memory: 500Mi
  # ...

10.7. Configuring the Cluster Operator

Use environment variables to configure the Cluster Operator. Specify the environment variables for the container image of the Cluster Operator in its Deployment configuration file. You can use the following environment variables to configure the Cluster Operator. If you are running Cluster Operator replicas in standby mode, there are additional environment variables for enabling leader election.

Kafka, Kafka Connect, and Kafka MirrorMaker support multiple versions. Use their STRIMZI_<COMPONENT_NAME>_IMAGES environment variables to configure the default container images used for each version. The configuration provides a mapping between a version and an image.
The required syntax is whitespace or comma-separated <version>=<image> pairs, which determine the image to use for a given version. For example, 3.9.0=registry.redhat.io/amq-streams/kafka-39-rhel9:2.9.0. These default images are overridden if image property values are specified in the configuration of a component. For more information on image configuration of components, see the Streams for Apache Kafka Custom Resource API Reference.

Note
The Deployment configuration file provided with the Streams for Apache Kafka release artifacts is install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml.

STRIMZI_NAMESPACE
A comma-separated list of namespaces that the operator operates in. When not set, set to empty string, or set to *, the Cluster Operator operates in all namespaces. The Cluster Operator deployment might use the downward API to set this automatically to the namespace the Cluster Operator is deployed in.

Example configuration for Cluster Operator namespaces

env:
  - name: STRIMZI_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace

STRIMZI_FULL_RECONCILIATION_INTERVAL_MS
Optional, default is 120000 ms. The interval between periodic reconciliations, in milliseconds.

STRIMZI_OPERATION_TIMEOUT_MS
Optional, default 300000 ms. The timeout for internal operations, in milliseconds. Increase this value when using Streams for Apache Kafka on clusters where regular OpenShift operations take longer than usual (due to factors such as prolonged download times for container images, for example).

STRIMZI_ZOOKEEPER_ADMIN_SESSION_TIMEOUT_MS
Optional, default 10000 ms. The session timeout for the Cluster Operator's ZooKeeper admin client, in milliseconds. Increase the value if ZooKeeper requests from the Cluster Operator are regularly failing due to timeout issues. There is a maximum allowed session time set on the ZooKeeper server side via the maxSessionTimeout config. By default, the maximum session timeout value is 20 times the default tickTime (whose default is 2000) at 40000 ms. If you require a higher timeout, change the maxSessionTimeout ZooKeeper server configuration value.

STRIMZI_OPERATIONS_THREAD_POOL_SIZE
Optional, default 10. The worker thread pool size, which is used for various asynchronous and blocking operations that are run by the Cluster Operator.

STRIMZI_OPERATOR_NAME
Optional, defaults to the pod's hostname. The operator name identifies the Streams for Apache Kafka instance when emitting OpenShift events.

STRIMZI_OPERATOR_NAMESPACE
The name of the namespace where the Cluster Operator is running. Do not configure this variable manually. Use the downward API.

env:
  - name: STRIMZI_OPERATOR_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace

STRIMZI_OPERATOR_NAMESPACE_LABELS
Optional. The labels of the namespace where the Streams for Apache Kafka Cluster Operator is running. Use namespace labels to configure the namespace selector in network policies. Network policies allow the Streams for Apache Kafka Cluster Operator access only to the operands from the namespace with these labels. When not set, the namespace selector in network policies is configured to allow access to the Cluster Operator from any namespace in the OpenShift cluster.

env:
  - name: STRIMZI_OPERATOR_NAMESPACE_LABELS
    value: label1=value1,label2=value2

STRIMZI_POD_DISRUPTION_BUDGET_GENERATION
Optional, default true. Pod disruption budget for resources. A pod disruption budget with the maxUnavailable value set to zero prevents OpenShift from evicting pods automatically.
Set this environment variable to false to disable pod disruption budget generation. You might do this, for example, if you want to manage the pod disruption budgets yourself, or if you have a development environment where availability is not important.

STRIMZI_LABELS_EXCLUSION_PATTERN
Optional, default regex pattern is (^app.kubernetes.io/(?!part-of).*|^kustomize.toolkit.fluxcd.io.*). The regex exclusion pattern used to filter label propagation from the main custom resource to its subresources. The labels exclusion filter is not applied to labels in template sections such as spec.kafka.template.pod.metadata.labels.

env:
  - name: STRIMZI_LABELS_EXCLUSION_PATTERN
    value: "^key1.*"

STRIMZI_CUSTOM_<COMPONENT_NAME>_LABELS
Optional. One or more custom labels to apply to all the pods created by the custom resource of the component. The Cluster Operator labels the pods when the custom resource is created or is reconciled. Labels can be applied to the following components:

KAFKA
KAFKA_CONNECT
KAFKA_CONNECT_BUILD
ZOOKEEPER
ENTITY_OPERATOR
KAFKA_MIRROR_MAKER2
KAFKA_MIRROR_MAKER
CRUISE_CONTROL
KAFKA_BRIDGE
KAFKA_EXPORTER

STRIMZI_CUSTOM_RESOURCE_SELECTOR
Optional. The label selector to filter the custom resources handled by the Cluster Operator. The operator will operate only on those custom resources that have the specified labels set. Resources without these labels will not be seen by the operator. The label selector applies to Kafka, KafkaConnect, KafkaBridge, KafkaMirrorMaker, and KafkaMirrorMaker2 resources. KafkaRebalance and KafkaConnector resources are operated only when their corresponding Kafka and Kafka Connect clusters have the matching labels.

env:
  - name: STRIMZI_CUSTOM_RESOURCE_SELECTOR
    value: label1=value1,label2=value2

STRIMZI_KAFKA_IMAGES
Required. The mapping from the Kafka version to the corresponding image containing a Kafka broker for that version. For example 3.8.0=registry.redhat.io/amq-streams/kafka-38-rhel9:2.9.0, 3.9.0=registry.redhat.io/amq-streams/kafka-39-rhel9:2.9.0.

STRIMZI_KAFKA_CONNECT_IMAGES
Required. The mapping from the Kafka version to the corresponding image of Kafka Connect for that version. For example 3.8.0=registry.redhat.io/amq-streams/kafka-38-rhel9:2.9.0, 3.9.0=registry.redhat.io/amq-streams/kafka-39-rhel9:2.9.0.

STRIMZI_KAFKA_MIRROR_MAKER2_IMAGES
Required. The mapping from the Kafka version to the corresponding image of MirrorMaker 2 for that version. For example 3.8.0=registry.redhat.io/amq-streams/kafka-38-rhel9:2.9.0, 3.9.0=registry.redhat.io/amq-streams/kafka-39-rhel9:2.9.0.

STRIMZI_KAFKA_MIRROR_MAKER_IMAGES (deprecated)
Required. The mapping from the Kafka version to the corresponding image of MirrorMaker for that version. For example 3.8.0=registry.redhat.io/amq-streams/kafka-38-rhel9:2.9.0, 3.9.0=registry.redhat.io/amq-streams/kafka-39-rhel9:2.9.0.

STRIMZI_DEFAULT_TOPIC_OPERATOR_IMAGE
Optional. The default is registry.redhat.io/amq-streams/strimzi-rhel9-operator:2.9.0. The image name to use as the default when deploying the Topic Operator if no image is specified as the Kafka.spec.entityOperator.topicOperator.image in the Kafka resource.

STRIMZI_DEFAULT_USER_OPERATOR_IMAGE
Optional. The default is registry.redhat.io/amq-streams/strimzi-rhel9-operator:2.9.0. The image name to use as the default when deploying the User Operator if no image is specified as the Kafka.spec.entityOperator.userOperator.image in the Kafka resource.

STRIMZI_DEFAULT_KAFKA_EXPORTER_IMAGE
Optional. The default is registry.redhat.io/amq-streams/kafka-39-rhel9:2.9.0.
The image name to use as the default when deploying the Kafka Exporter if no image is specified as the Kafka.spec.kafkaExporter.image in the Kafka resource.

STRIMZI_DEFAULT_CRUISE_CONTROL_IMAGE
Optional. The default is registry.redhat.io/amq-streams/kafka-39-rhel9:2.9.0. The image name to use as the default when deploying Cruise Control if no image is specified as the Kafka.spec.cruiseControl.image in the Kafka resource.

STRIMZI_DEFAULT_KAFKA_BRIDGE_IMAGE
Optional. The default is registry.redhat.io/amq-streams/bridge-rhel9:2.9.0. The image name to use as the default when deploying the Kafka Bridge if no image is specified as the Kafka.spec.kafkaBridge.image in the Kafka resource.

STRIMZI_DEFAULT_KAFKA_INIT_IMAGE
Optional. The default is registry.redhat.io/amq-streams/strimzi-rhel9-operator:2.9.0. The image name to use as the default for the Kafka initializer container if no image is specified in the brokerRackInitImage of the Kafka resource or the clientRackInitImage of the Kafka Connect resource. The init container is started before the Kafka cluster for initial configuration work, such as rack support.

STRIMZI_IMAGE_PULL_POLICY
Optional. The ImagePullPolicy that is applied to containers in all pods managed by the Cluster Operator. The valid values are Always, IfNotPresent, and Never. If not specified, the OpenShift defaults are used. Changing the policy will result in a rolling update of all your Kafka, Kafka Connect, and Kafka MirrorMaker clusters.

STRIMZI_IMAGE_PULL_SECRETS
Optional. A comma-separated list of Secret names. The secrets referenced here contain the credentials to the container registries where the container images are pulled from. The secrets are specified in the imagePullSecrets property for all pods created by the Cluster Operator. Changing this list results in a rolling update of all your Kafka, Kafka Connect, and Kafka MirrorMaker clusters.

STRIMZI_KUBERNETES_VERSION
Optional. Overrides the OpenShift version information detected from the API server.

Example configuration for OpenShift version override

env:
  - name: STRIMZI_KUBERNETES_VERSION
    value: |
      major=1
      minor=16
      gitVersion=v1.16.2
      gitCommit=c97fe5036ef3df2967d086711e6c0c405941e14b
      gitTreeState=clean
      buildDate=2019-10-15T19:09:08Z
      goVersion=go1.12.10
      compiler=gc
      platform=linux/amd64

KUBERNETES_SERVICE_DNS_DOMAIN
Optional. Overrides the default OpenShift DNS domain name suffix. By default, services assigned in the OpenShift cluster have a DNS domain name that uses the default suffix cluster.local. For example, for broker kafka-0:

<cluster-name>-kafka-0.<cluster-name>-kafka-brokers.<namespace>.svc.cluster.local

The DNS domain name is added to the Kafka broker certificates used for hostname verification. If you are using a different DNS domain name suffix in your cluster, change the KUBERNETES_SERVICE_DNS_DOMAIN environment variable from the default to the one you are using in order to establish a connection with the Kafka brokers.

STRIMZI_CONNECT_BUILD_TIMEOUT_MS
Optional, default 300000 ms. The timeout for building new Kafka Connect images with additional connectors, in milliseconds. Consider increasing this value when using Streams for Apache Kafka to build container images containing many connectors or using a slow container registry.

STRIMZI_NETWORK_POLICY_GENERATION
Optional, default true. Network policy for resources. Network policies allow connections between Kafka components. Set this environment variable to false to disable network policy generation.
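For example, a minimal sketch of disabling generation, assuming the variable is set in the Cluster Operator Deployment alongside the other variables described in this list:

env:
  # ...
  - name: STRIMZI_NETWORK_POLICY_GENERATION
    value: "false"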
You might do this, for example, if you want to use custom network policies. Custom network policies allow more control over maintaining the connections between components.

STRIMZI_DNS_CACHE_TTL
Optional, default 30. Number of seconds to cache successful name lookups in the local DNS resolver. Any negative value means cache forever. Zero means do not cache, which can be useful for avoiding connection errors due to long caching policies being applied.

STRIMZI_POD_SET_RECONCILIATION_ONLY
Optional, default false. When set to true, the Cluster Operator reconciles only the StrimziPodSet resources and any changes to the other custom resources (Kafka, KafkaConnect, and so on) are ignored. This mode is useful for ensuring that your pods are recreated if needed, but no other changes happen to the clusters.

STRIMZI_FEATURE_GATES
Optional. Enables or disables the features and functionality controlled by feature gates.

STRIMZI_POD_SECURITY_PROVIDER_CLASS
Optional. Configuration for the pluggable PodSecurityProvider class, which can be used to provide the security context configuration for Pods and containers.

10.7.1. Restricting access to the Cluster Operator using network policy

Use the STRIMZI_OPERATOR_NAMESPACE_LABELS environment variable to establish network policy for the Cluster Operator using namespace labels.

The Cluster Operator can run in the same namespace as the resources it manages, or in a separate namespace. By default, the STRIMZI_OPERATOR_NAMESPACE environment variable is configured to use the downward API to find the namespace the Cluster Operator is running in. If the Cluster Operator is running in the same namespace as the resources, only local access is required and allowed by Streams for Apache Kafka.

If the Cluster Operator is running in a separate namespace to the resources it manages, any namespace in the OpenShift cluster is allowed access to the Cluster Operator unless network policy is configured. By adding namespace labels, access to the Cluster Operator is restricted to the namespaces specified.

Network policy configured for the Cluster Operator deployment

#...
env:
  # ...
  - name: STRIMZI_OPERATOR_NAMESPACE_LABELS
    value: label1=value1,label2=value2
#...

10.7.2. Setting periodic reconciliation of custom resources

Use the STRIMZI_FULL_RECONCILIATION_INTERVAL_MS variable to set the time interval for periodic reconciliations by the Cluster Operator. Replace its value with the required interval in milliseconds.

Reconciliation period configured for the Cluster Operator deployment

#...
env:
  # ...
  - name: STRIMZI_FULL_RECONCILIATION_INTERVAL_MS
    value: "120000"
#...

The Cluster Operator reacts to all notifications about applicable cluster resources received from the OpenShift cluster. If the operator is not running, or if a notification is not received for any reason, resources will get out of sync with the state of the running OpenShift cluster. To handle failovers properly, the Cluster Operator executes a periodic reconciliation process that compares the state of the resources with the current cluster deployments, keeping a consistent state across all of them.

Additional resources

Downward API (as described in the OpenShift Nodes guide)

10.7.3. Pausing reconciliation of custom resources using annotations

Sometimes it is useful to pause the reconciliation of custom resources managed by Streams for Apache Kafka operators, so that you can perform fixes or make updates.
If reconciliations are paused, any changes made to custom resources are ignored by the operators until the pause ends.

If you want to pause reconciliation of a custom resource, set the strimzi.io/pause-reconciliation annotation to true in its configuration. This instructs the appropriate operator to pause reconciliation of the custom resource. For example, you can apply the annotation to the KafkaConnect resource so that reconciliation by the Cluster Operator is paused.

You can also create a custom resource with the pause annotation enabled. The custom resource is created, but it is ignored.

Prerequisites

The Streams for Apache Kafka Operator that manages the custom resource is running.

Procedure

Annotate the custom resource in OpenShift, setting pause-reconciliation to true:

oc annotate <kind_of_custom_resource> <name_of_custom_resource> strimzi.io/pause-reconciliation="true"

For example, for the KafkaConnect custom resource:

oc annotate KafkaConnect my-connect strimzi.io/pause-reconciliation="true"

Check that the status conditions of the custom resource show a change to ReconciliationPaused:

oc describe <kind_of_custom_resource> <name_of_custom_resource>

The type condition changes to ReconciliationPaused at the lastTransitionTime.

Example custom resource with a paused reconciliation condition type

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  annotations:
    strimzi.io/pause-reconciliation: "true"
    strimzi.io/use-connector-resources: "true"
  creationTimestamp: 2021-03-12T10:47:11Z
  #...
spec:
  # ...
status:
  conditions:
    - lastTransitionTime: 2021-03-12T10:47:41.689249Z
      status: "True"
      type: ReconciliationPaused

Resuming from pause

To resume reconciliation, you can set the annotation to false, or remove the annotation.

Additional resources

Finding the status of a custom resource

10.7.4. Running multiple Cluster Operator replicas with leader election

The default Cluster Operator configuration enables leader election to run multiple parallel replicas of the Cluster Operator. One replica is elected as the active leader and operates the deployed resources. The other replicas run in standby mode. When the leader stops or fails, one of the standby replicas is elected as the new leader and starts operating the deployed resources.

By default, Streams for Apache Kafka runs with a single Cluster Operator replica that is always the leader replica. When a single Cluster Operator replica stops or fails, OpenShift starts a new replica.

Running the Cluster Operator with multiple replicas is not essential, but it's useful to have replicas on standby in case of large-scale disruptions caused by a major failure. For example, suppose multiple worker nodes or an entire availability zone fails. This failure might cause the Cluster Operator pod and many Kafka pods to go down at the same time. If subsequent pod scheduling causes congestion through lack of resources, this can delay operations when running a single Cluster Operator.

10.7.4.1. Enabling leader election for Cluster Operator replicas

Configure leader election environment variables when running additional Cluster Operator replicas. The following environment variables are supported:

STRIMZI_LEADER_ELECTION_ENABLED
Optional, disabled (false) by default. Enables or disables leader election, which allows additional Cluster Operator replicas to run on standby.

Note
Leader election is disabled by default. It is only enabled when applying this environment variable on installation.
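For example, a minimal snippet that enables leader election in the Cluster Operator Deployment; the same setting appears in the fuller configuration example later in this section:

env:
  # ...
  - name: STRIMZI_LEADER_ELECTION_ENABLED
    value: "true"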
STRIMZI_LEADER_ELECTION_LEASE_NAME
Required when leader election is enabled. The name of the OpenShift Lease resource that is used for the leader election.

STRIMZI_LEADER_ELECTION_LEASE_NAMESPACE
Required when leader election is enabled. The namespace where the OpenShift Lease resource used for leader election is created. You can use the downward API to configure it to the namespace where the Cluster Operator is deployed.

env:
  - name: STRIMZI_LEADER_ELECTION_LEASE_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace

STRIMZI_LEADER_ELECTION_IDENTITY
Required when leader election is enabled. Configures the identity of a given Cluster Operator instance used during the leader election. The identity must be unique for each operator instance. You can use the downward API to configure it to the name of the pod where the Cluster Operator is deployed.

env:
  - name: STRIMZI_LEADER_ELECTION_IDENTITY
    valueFrom:
      fieldRef:
        fieldPath: metadata.name

STRIMZI_LEADER_ELECTION_LEASE_DURATION_MS
Optional, default 15000 ms. Specifies the duration the acquired lease is valid.

STRIMZI_LEADER_ELECTION_RENEW_DEADLINE_MS
Optional, default 10000 ms. Specifies the period the leader should try to maintain leadership.

STRIMZI_LEADER_ELECTION_RETRY_PERIOD_MS
Optional, default 2000 ms. Specifies the frequency of updates to the lease lock by the leader.

10.7.4.2. Configuring Cluster Operator replicas

To run additional Cluster Operator replicas in standby mode, increase the number of replicas and enable leader election. To configure leader election, use the leader election environment variables.

To make the required changes, configure the following Cluster Operator installation files located in install/cluster-operator/:

060-Deployment-strimzi-cluster-operator.yaml
022-ClusterRole-strimzi-cluster-operator-role.yaml
022-RoleBinding-strimzi-cluster-operator.yaml

Leader election has its own ClusterRole and RoleBinding RBAC resources that target the namespace where the Cluster Operator is running, rather than the namespace it is watching. The default deployment configuration creates a Lease resource called strimzi-cluster-operator in the same namespace as the Cluster Operator. The Cluster Operator uses leases to manage leader election. The RBAC resources provide the permissions to use the Lease resource. If you use a different Lease name or namespace, update the ClusterRole and RoleBinding files accordingly.

Prerequisites

You need an account with permission to create and manage CustomResourceDefinition and RBAC (ClusterRole and RoleBinding) resources.

Procedure

Edit the Deployment resource that is used to deploy the Cluster Operator, which is defined in the 060-Deployment-strimzi-cluster-operator.yaml file. Change the replicas property from the default (1) to a value that matches the required number of replicas.

Increasing the number of Cluster Operator replicas

apiVersion: apps/v1
kind: Deployment
metadata:
  name: strimzi-cluster-operator
  labels:
    app: strimzi
spec:
  replicas: 3

Check that the leader election env properties are set. If they are not set, configure them. To enable leader election, STRIMZI_LEADER_ELECTION_ENABLED must be set to true (default). In this example, the name of the lease is changed to my-strimzi-cluster-operator.

Configuring leader election environment variables for the Cluster Operator

# ...
spec:
  containers:
    - name: strimzi-cluster-operator
      # ...
      env:
        - name: STRIMZI_LEADER_ELECTION_ENABLED
          value: "true"
        - name: STRIMZI_LEADER_ELECTION_LEASE_NAME
          value: "my-strimzi-cluster-operator"
        - name: STRIMZI_LEADER_ELECTION_LEASE_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: STRIMZI_LEADER_ELECTION_IDENTITY
          valueFrom:
            fieldRef:
              fieldPath: metadata.name

For a description of the available environment variables, see Section 10.7.4.1, "Enabling leader election for Cluster Operator replicas".

If you specified a different name or namespace for the Lease resource used in leader election, update the RBAC resources.

(optional) Edit the ClusterRole resource in the 022-ClusterRole-strimzi-cluster-operator-role.yaml file. Update resourceNames with the name of the Lease resource.

Updating the ClusterRole references to the lease

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: strimzi-cluster-operator-leader-election
  labels:
    app: strimzi
rules:
  - apiGroups:
      - coordination.k8s.io
    resourceNames:
      - my-strimzi-cluster-operator
# ...

(optional) Edit the RoleBinding resource in the 022-RoleBinding-strimzi-cluster-operator.yaml file. Update subjects.name and subjects.namespace with the name of the Lease resource and the namespace where it was created.

Updating the RoleBinding references to the lease

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: strimzi-cluster-operator-leader-election
  labels:
    app: strimzi
subjects:
  - kind: ServiceAccount
    name: my-strimzi-cluster-operator
    namespace: myproject
# ...

Deploy the Cluster Operator:

oc create -f install/cluster-operator -n myproject

Check the status of the deployment:

oc get deployments -n myproject

Output shows the deployment name and readiness

NAME                      READY  UP-TO-DATE  AVAILABLE
strimzi-cluster-operator  3/3    3           3

READY shows the number of replicas that are ready/expected. The deployment is successful when the AVAILABLE output shows the correct number of replicas.

10.7.5. Configuring Cluster Operator HTTP proxy settings

If you are running a Kafka cluster behind an HTTP proxy, you can still pass data in and out of the cluster. For example, you can run Kafka Connect with connectors that push and pull data from outside the proxy. Or you can use a proxy to connect with an authorization server.

Configure the Cluster Operator deployment to specify the proxy environment variables. The Cluster Operator accepts standard proxy configuration (HTTP_PROXY, HTTPS_PROXY and NO_PROXY) as environment variables. The proxy settings are applied to all Streams for Apache Kafka containers.

The format for a proxy address is http://<ip_address>:<port_number>. To set up a proxy with a name and password, the format is http://<username>:<password>@<ip_address>:<port_number>.

Prerequisites

You need an account with permission to create and manage CustomResourceDefinition and RBAC (ClusterRole and RoleBinding) resources.

Procedure

To add proxy environment variables to the Cluster Operator, update its Deployment configuration (install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml).

Example proxy configuration for the Cluster Operator

apiVersion: apps/v1
kind: Deployment
spec:
  # ...
  template:
    spec:
      serviceAccountName: strimzi-cluster-operator
      containers:
        # ...
        env:
          # ...
          - name: "HTTP_PROXY"
            value: "http://proxy.com" 1
          - name: "HTTPS_PROXY"
            value: "https://proxy.com" 2
          - name: "NO_PROXY"
            value: "internal.com, other.domain.com" 3
        # ...

1 Address of the proxy server.
2 Secure address of the proxy server.
3 Addresses for servers that are accessed directly as exceptions to the proxy server. The URLs are comma-separated.

Alternatively, edit the Deployment directly:

oc edit deployment strimzi-cluster-operator

If you updated the YAML file instead of editing the Deployment directly, apply the changes:

oc apply -f install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml

Additional resources

Host aliases
Designating Streams for Apache Kafka administrators

10.7.6. Disabling FIPS mode using Cluster Operator configuration

Streams for Apache Kafka automatically switches to FIPS mode when running on a FIPS-enabled OpenShift cluster. Disable FIPS mode by setting the FIPS_MODE environment variable to disabled in the deployment configuration for the Cluster Operator. With FIPS mode disabled, Streams for Apache Kafka automatically disables FIPS in the OpenJDK for all components, and Streams for Apache Kafka is then not FIPS compliant. The Streams for Apache Kafka operators, as well as all operands, run in the same way as if they were running on an OpenShift cluster without FIPS enabled.

Procedure

To disable the FIPS mode in the Cluster Operator, update its Deployment configuration (install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml) and add the FIPS_MODE environment variable.

Example FIPS configuration for the Cluster Operator

apiVersion: apps/v1
kind: Deployment
spec:
  # ...
  template:
    spec:
      serviceAccountName: strimzi-cluster-operator
      containers:
        # ...
        env:
          # ...
          - name: "FIPS_MODE"
            value: "disabled" 1
        # ...

1 Disables the FIPS mode.

Alternatively, edit the Deployment directly:

oc edit deployment strimzi-cluster-operator

If you updated the YAML file instead of editing the Deployment directly, apply the changes:

oc apply -f install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml

10.8. Configuring Kafka Connect

Update the spec properties of the KafkaConnect custom resource to configure your Kafka Connect deployment. Use Kafka Connect to set up external data connections to your Kafka cluster.

You can also use the KafkaConnect resource to specify the following:

Connector plugin configuration to build a container image that includes the plugins to make connections
Configuration for the Kafka Connect worker pods that run connectors
An annotation to enable use of the KafkaConnector resource to manage connectors

The Cluster Operator manages Kafka Connect clusters deployed using the KafkaConnect resource and connectors created using the KafkaConnector resource.

For a deeper understanding of the Kafka Connect cluster configuration options, refer to the Streams for Apache Kafka Custom Resource API Reference.

Handling high volumes of messages

You can tune the configuration to handle high volumes of messages. For more information, see Handling high volumes of messages.
Example KafkaConnect custom resource configuration

# Basic configuration (required)
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect 1
metadata:
  name: my-connect-cluster
  annotations:
    strimzi.io/use-connector-resources: "true" 2
# Deployment specifications
spec:
  # Replicas (required)
  replicas: 3 3
  # Bootstrap servers (required)
  bootstrapServers: my-cluster-kafka-bootstrap:9092 4
  # Kafka Connect configuration (recommended)
  config: 5
    group.id: my-connect-cluster
    offset.storage.topic: my-connect-cluster-offsets
    config.storage.topic: my-connect-cluster-configs
    status.storage.topic: my-connect-cluster-status
    key.converter: org.apache.kafka.connect.json.JsonConverter
    value.converter: org.apache.kafka.connect.json.JsonConverter
    key.converter.schemas.enable: true
    value.converter.schemas.enable: true
    config.storage.replication.factor: 3
    offset.storage.replication.factor: 3
    status.storage.replication.factor: 3
  # Resources requests and limits (recommended)
  resources: 6
    requests:
      cpu: "1"
      memory: 2Gi
    limits:
      cpu: "2"
      memory: 2Gi
  # Authentication (optional)
  authentication: 7
    type: tls
    certificateAndKey:
      certificate: source.crt
      key: source.key
      secretName: my-user-source
  # TLS configuration (optional)
  tls: 8
    trustedCertificates:
      - secretName: my-cluster-cluster-cert
        pattern: "*.crt"
  # Build configuration (optional)
  build: 9
    output: 10
      type: docker
      image: my-registry.io/my-org/my-connect-cluster:latest
      pushSecret: my-registry-credentials
    plugins: 11
      - name: connector-1
        artifacts:
          - type: tgz
            url: <url_to_download_connector_1_artifact>
            sha512sum: <SHA-512_checksum_of_connector_1_artifact>
      - name: connector-2
        artifacts:
          - type: jar
            url: <url_to_download_connector_2_artifact>
            sha512sum: <SHA-512_checksum_of_connector_2_artifact>
  # Logging configuration (optional)
  logging: 12
    type: inline
    loggers:
      log4j.rootLogger: INFO
  # Readiness probe (optional)
  readinessProbe: 13
    initialDelaySeconds: 15
    timeoutSeconds: 5
  # Liveness probe (optional)
  livenessProbe:
    initialDelaySeconds: 15
    timeoutSeconds: 5
  # Metrics configuration (optional)
  metricsConfig: 14
    type: jmxPrometheusExporter
    valueFrom:
      configMapKeyRef:
        name: my-config-map
        key: my-key
  # JVM options (optional)
  jvmOptions: 15
    "-Xmx": "1g"
    "-Xms": "1g"
  # Custom image (optional)
  image: my-org/my-image:latest 16
  # Rack awareness (optional)
  rack:
    topologyKey: topology.kubernetes.io/zone 17
  # Pod and container template (optional)
  template: 18
    pod:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: application
                    operator: In
                    values:
                      - postgresql
                      - mongodb
              topologyKey: "kubernetes.io/hostname"
    connectContainer: 19
      env:
        - name: OTEL_SERVICE_NAME
          value: my-otel-service
        - name: OTEL_EXPORTER_OTLP_ENDPOINT
          value: "http://otlp-host:4317"
        - name: AWS_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef:
              name: aws-creds
              key: awsAccessKey
        - name: AWS_SECRET_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: aws-creds
              key: awsSecretAccessKey
  # Tracing configuration (optional)
  tracing:
    type: opentelemetry 20

1 Use KafkaConnect.
2 Enables the use of KafkaConnector resources to start, stop, and manage connector instances.
3 The number of replica nodes for the workers that run tasks.
4 Bootstrap address for connection to the Kafka cluster. The address takes the format <cluster_name>-kafka-bootstrap:<port_number>. The Kafka cluster doesn't need to be managed by Streams for Apache Kafka or deployed to an OpenShift cluster.
5 Kafka Connect configuration of workers (not connectors) that run connectors and their tasks. Standard Apache Kafka configuration may be provided, restricted to those properties not managed directly by Streams for Apache Kafka. In this example, JSON converters are specified. A replication factor of 3 is set for the internal topics used by Kafka Connect (the minimum requirement for a production environment). Changing the replication factor after the topics have been created has no effect.
6 Requests for reservation of supported resources, currently cpu and memory, and limits to specify the maximum resources that can be consumed.
7 Authentication for the Kafka Connect cluster, specified as mTLS, token-based OAuth, SASL-based SCRAM-SHA-256/SCRAM-SHA-512, or PLAIN. By default, Kafka Connect connects to Kafka brokers using a plain text connection.
8 TLS configuration for encrypted connections to the Kafka cluster, with trusted certificates stored in X.509 format within the specified secrets.
9 Build configuration properties for building a container image with connector plugins automatically.
10 (Required) Configuration of the container registry where new images are pushed.
11 (Required) List of connector plugins and their artifacts to add to the new container image. Each plugin must be configured with at least one artifact.
12 Specified Kafka Connect loggers and log levels added directly (inline) or indirectly (external) through a ConfigMap. A custom Log4j configuration must be placed under the log4j.properties or log4j2.properties key in the ConfigMap. For the Kafka Connect log4j.rootLogger logger, you can set the log level to INFO, ERROR, WARN, TRACE, DEBUG, FATAL, or OFF.
13 Healthchecks to know when to restart a container (liveness) and when a container can accept traffic (readiness).
14 Prometheus metrics, which are enabled by referencing a ConfigMap containing configuration for the Prometheus JMX exporter in this example. You can enable metrics without further configuration using a reference to a ConfigMap containing an empty file under metricsConfig.valueFrom.configMapKeyRef.key.
15 JVM configuration options to optimize performance for the Virtual Machine (VM) running Kafka Connect.
16 ADVANCED OPTION: Container image configuration, which is recommended only in special situations.
17 SPECIALIZED OPTION: Rack awareness configuration for the deployment. This is a specialized option intended for a deployment within the same location, not across regions. Use this option if you want connectors to consume from the closest replica rather than the leader replica. In certain cases, consuming from the closest replica can improve network utilization or reduce costs. The topologyKey must match a node label containing the rack ID. The example used in this configuration specifies a zone using the standard topology.kubernetes.io/zone label. To consume from the closest replica, enable the RackAwareReplicaSelector in the Kafka broker configuration.
18 Template customization. Here a pod is scheduled with anti-affinity, so the pod is not scheduled on nodes with the same hostname.
19 Environment variables are set for distributed tracing and to pass credentials to connectors.
20 Distributed tracing is enabled by using OpenTelemetry.

10.8.1. Configuring Kafka Connect for multiple instances

By default, Streams for Apache Kafka configures the group ID and names of the internal topics used by Kafka Connect.
When running multiple instances of Kafka Connect, you must change these default settings using the following config properties:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  config:
    group.id: my-connect-cluster 1
    offset.storage.topic: my-connect-cluster-offsets 2
    config.storage.topic: my-connect-cluster-configs 3
    status.storage.topic: my-connect-cluster-status 4
    # ...
  # ...

1 The Kafka Connect cluster group ID within Kafka.
2 Kafka topic that stores connector offsets.
3 Kafka topic that stores connector and task configurations.
4 Kafka topic that stores connector and task status updates.

Note Values for the three topics must be the same for all instances with the same group.id.

Unless you modify these default settings, each instance connecting to the same Kafka cluster is deployed with the same values. In practice, this means all instances form a cluster and use the same internal topics. Multiple instances attempting to use the same internal topics will cause unexpected errors, so you must change the values of these properties for each instance.

10.8.2. Configuring Kafka Connect user authorization

When using authorization in Kafka, a Kafka Connect user requires read/write access to the cluster group and internal topics of Kafka Connect. This procedure outlines how access is granted using simple authorization and ACLs.

Properties for the Kafka Connect cluster group ID and internal topics are configured by Streams for Apache Kafka by default. Alternatively, you can define them explicitly in the spec of the KafkaConnect resource. This is useful when configuring Kafka Connect for multiple instances, as the values for the group ID and topics must differ when running multiple Kafka Connect instances.

Simple authorization uses ACL rules managed by the Kafka AclAuthorizer and StandardAuthorizer plugins to ensure appropriate access levels. For more information on configuring a KafkaUser resource to use simple authorization, see the AclRule schema reference.

Prerequisites
An OpenShift cluster
A running Cluster Operator

Procedure
Edit the authorization property in the KafkaUser resource to provide access rights to the user.
Access rights are configured for the Kafka Connect topics and cluster group using literal name values. The following table shows the default names configured for the topics and cluster group ID.

Table 10.2. Names for the access rights configuration
Property (default name):
offset.storage.topic: connect-cluster-offsets
status.storage.topic: connect-cluster-status
config.storage.topic: connect-cluster-configs
group: connect-cluster

In this example configuration, the default names are used to specify access rights. If you are using different names for a Kafka Connect instance, use those names in the ACLs configuration.

Example configuration for simple authorization

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  # ...
  authorization:
    type: simple
    acls:
      # access to offset.storage.topic
      - resource:
          type: topic
          name: connect-cluster-offsets
          patternType: literal
        operations:
          - Create
          - Describe
          - Read
          - Write
        host: "*"
      # access to status.storage.topic
      - resource:
          type: topic
          name: connect-cluster-status
          patternType: literal
        operations:
          - Create
          - Describe
          - Read
          - Write
        host: "*"
      # access to config.storage.topic
      - resource:
          type: topic
          name: connect-cluster-configs
          patternType: literal
        operations:
          - Create
          - Describe
          - Read
          - Write
        host: "*"
      # cluster group
      - resource:
          type: group
          name: connect-cluster
          patternType: literal
        operations:
          - Read
        host: "*"

Create or update the resource.

oc apply -f KAFKA-USER-CONFIG-FILE

10.9. Configuring Kafka Connect connectors

The KafkaConnector resource provides an OpenShift-native approach to management of connectors by the Cluster Operator. To create, delete, or reconfigure connectors with KafkaConnector resources, you must set the use-connector-resources annotation to true in your KafkaConnect custom resource.

Annotation to enable KafkaConnectors

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
  annotations:
    strimzi.io/use-connector-resources: "true"
# ...

When the use-connector-resources annotation is enabled in your KafkaConnect configuration, you must define and manage connectors using KafkaConnector resources.

Note Alternatively, you can manage connectors using the Kafka Connect REST API instead of KafkaConnector resources. To use the API, you must remove the strimzi.io/use-connector-resources annotation from the KafkaConnect resource.

KafkaConnector resources provide the configuration needed to create connectors within a Kafka Connect cluster, which interacts with a Kafka cluster as specified in the KafkaConnect configuration. The Kafka cluster does not need to be managed by Streams for Apache Kafka or deployed to an OpenShift cluster.

Figure: Kafka components contained in the same OpenShift cluster

The configuration also specifies how the connector instances interact with external data systems, including any required authentication methods. Additionally, you must define the data to watch. For example, in a source connector that reads data from a database, the configuration might include the database name. You can also define where this data should be placed in Kafka by specifying the target topic name.

Use the tasksMax property to specify the maximum number of tasks. For instance, a source connector with tasksMax: 2 might split the import of source data into two tasks.

Example source connector configuration

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-source-connector 1
  labels:
    strimzi.io/cluster: my-connect-cluster 2
spec:
  class: org.apache.kafka.connect.file.FileStreamSourceConnector 3
  tasksMax: 2 4
  autoRestart: 5
    enabled: true
  config: 6
    file: "/opt/kafka/LICENSE" 7
    topic: my-topic 8
  # ...

1 Name of the KafkaConnector resource, which is used as the name of the connector. Use any name that is valid for an OpenShift resource.
2 Name of the Kafka Connect cluster to create the connector instance in. Connectors must be deployed to the same namespace as the Kafka Connect cluster they link to.
3 Full name of the connector class. This should be present in the image being used by the Kafka Connect cluster.
4 Maximum number of Kafka Connect tasks that the connector can create.
5 Enables automatic restarts of failed connectors and tasks.
By default, the number of restarts is indefinite, but you can set a maximum on the number of automatic restarts using the maxRestarts property. 6 Connector configuration as key-value pairs. 7 Location of the external data file. In this example, we're configuring the FileStreamSourceConnector to read from the /opt/kafka/LICENSE file. 8 Kafka topic to publish the source data to. To include external connector configurations, such as user access credentials stored in a secret, use the template property of the KafkaConnect resource. You can also load values using configuration providers . 10.9.1. Manually stopping or pausing Kafka Connect connectors If you are using KafkaConnector resources to configure connectors, use the state configuration to either stop or pause a connector. In contrast to the paused state, where the connector and tasks remain instantiated, stopping a connector retains only the configuration, with no active processes. Stopping a connector from running may be more suitable for longer durations than just pausing. While a paused connector is quicker to resume, a stopped connector has the advantages of freeing up memory and resources. Note The state configuration replaces the (deprecated) pause configuration in the KafkaConnectorSpec schema, which allows pauses on connectors. If you were previously using the pause configuration to pause connectors, we encourage you to transition to using the state configuration only to avoid conflicts. Prerequisites The Cluster Operator is running. Procedure Find the name of the KafkaConnector custom resource that controls the connector you want to pause or stop: oc get KafkaConnector Edit the KafkaConnector resource to stop or pause the connector. Example configuration for stopping a Kafka Connect connector apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector labels: strimzi.io/cluster: my-connect-cluster spec: class: org.apache.kafka.connect.file.FileStreamSourceConnector tasksMax: 2 config: file: "/opt/kafka/LICENSE" topic: my-topic state: stopped # ... Change the state configuration to stopped or paused . The default state for the connector when this property is not set is running . Apply the changes to the KafkaConnector configuration. You can resume the connector by changing state to running or removing the configuration. Note Alternatively, you can expose the Kafka Connect API and use the stop and pause endpoints to stop a connector from running. For example, PUT /connectors/<connector_name>/stop . You can then use the resume endpoint to restart it. 10.9.2. Manually restarting Kafka Connect connectors If you are using KafkaConnector resources to manage connectors, use the strimzi.io/restart annotation to manually trigger a restart of a connector. Prerequisites The Cluster Operator is running. Procedure Find the name of the KafkaConnector custom resource that controls the Kafka connector you want to restart: oc get KafkaConnector Restart the connector by annotating the KafkaConnector resource in OpenShift: oc annotate KafkaConnector <kafka_connector_name> strimzi.io/restart="true" The restart annotation is set to true . Wait for the reconciliation to occur (every two minutes by default). The Kafka connector is restarted, as long as the annotation was detected by the reconciliation process. When Kafka Connect accepts the restart request, the annotation is removed from the KafkaConnector custom resource. 10.9.3. 
Manually restarting Kafka Connect connector tasks If you are using KafkaConnector resources to manage connectors, use the strimzi.io/restart-task annotation to manually trigger a restart of a connector task. Prerequisites The Cluster Operator is running. Procedure Find the name of the KafkaConnector custom resource that controls the Kafka connector task you want to restart: oc get KafkaConnector Find the ID of the task to be restarted from the KafkaConnector custom resource: oc describe KafkaConnector <kafka_connector_name> Task IDs are non-negative integers, starting from 0. Use the ID to restart the connector task by annotating the KafkaConnector resource in OpenShift: oc annotate KafkaConnector <kafka_connector_name> strimzi.io/restart-task="0" In this example, task 0 is restarted. Wait for the reconciliation to occur (every two minutes by default). The Kafka connector task is restarted, as long as the annotation was detected by the reconciliation process. When Kafka Connect accepts the restart request, the annotation is removed from the KafkaConnector custom resource. 10.9.4. Listing connector offsets To track connector offsets using KafkaConnector resources, add the listOffsets configuration. The offsets, which keep track of the flow of data, are written to a config map specified in the configuration. If the config map does not exist, Streams for Apache Kafka creates it. After the configuration is in place, annotate the KafkaConnector resource to write the list to the config map. Sink connectors use Kafka's standard consumer offset mechanism, while source connectors store offsets in a custom format within a Kafka topic. For sink connectors, the list shows Kafka topic partitions and the last committed offset for each partition. For source connectors, the list shows the source system's partition and the last offset processed. Prerequisites The Cluster Operator is running. Procedure Edit the KafkaConnector resource for the connector to include the listOffsets configuration. Example configuration to list offsets apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector labels: strimzi.io/cluster: my-connect-cluster spec: listOffsets: toConfigMap: 1 name: my-connector-offsets 2 # ... 1 The reference to the config map where the list of offsets will be written to. 2 The name of the config map, which is named my-connector-offsets in this example. Run the command to write the list to the config map by annotating the KafkaConnector resource: oc annotate kafkaconnector my-source-connector strimzi.io/connector-offsets=list -n <namespace> The annotation remains until either the list operation succeeds or it is manually removed from the resource. After the KafkaConnector resource is updated, use the following command to check if the config map with the offsets was created: oc get configmap my-connector-offsets -n <namespace> Inspect the contents of the config map to verify the offsets are being listed: oc describe configmap my-connector-offsets -n <namespace> Streams for Apache Kafka puts the offset information into the offsets.json property. This does not overwrite any other properties when updating an existing config map. Example source connector offset list apiVersion: v1 kind: ConfigMap metadata: # ... 
ownerReferences: 1 - apiVersion: kafka.strimzi.io/v1beta2 blockOwnerDeletion: false controller: false kind: KafkaConnector name: my-source-connector uid: 637e3be7-bd96-43ab-abde-c55b4c4550e0 resourceVersion: "66951" uid: 641d60a9-36eb-4f29-9895-8f2c1eb9638e data: offsets.json: |- { "offsets" : [ { "partition" : { "filename" : "/data/myfile.txt" 2 }, "offset" : { "position" : 15295 3 } } ] } 1 The owner reference pointing to the KafkaConnector resource for the source connector. To provide a custom owner reference, create the config map in advance and set the owner reference. 2 The source partition, represented by the filename /data/myfile.txt in this example for a file-based connector. 3 The last processed offset position in the source partition. Example sink connector offset list apiVersion: v1 kind: ConfigMap metadata: # ... ownerReferences: 1 - apiVersion: kafka.strimzi.io/v1beta2 blockOwnerDeletion: false controller: false kind: KafkaConnector name: my-sink-connector uid: 84a29d7f-77e6-43ac-bfbb-719f9b9a4b3b resourceVersion: "79241" uid: 721e30bc-23df-41a2-9b48-fb2b7d9b042c data: offsets.json: |- { "offsets": [ { "partition": { "kafka_topic": "my-topic", 2 "kafka_partition": 2 3 }, "offset": { "kafka_offset": 4 4 } } ] } 1 The owner reference pointing to the KafkaConnector resource for the sink connector. 2 The Kafka topic that the sink connector is consuming from. 3 The partition of the Kafka topic. 4 The last committed Kafka offset for this topic and partition. 10.9.5. Altering connector offsets To alter connector offsets using KafkaConnector resources, configure the resource to stop the connector and add alterOffsets configuration to specify the offset changes in a config map. You can reuse the same config map used to list offsets . After the connector is stopped and the configuration is in place, annotate the KafkaConnector resource to apply the offset alteration, then restart the connector. Altering connector offsets can be useful, for example, to skip a poison record or replay a record. In this procedure, we alter the offset position for a source connector named my-source-connector . Prerequisites The Cluster Operator is running. Procedure Edit the KafkaConnector resource to stop the connector and include the alterOffsets configuration. Example configuration to stop a connector and alter offsets apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector labels: strimzi.io/cluster: my-connect-cluster spec: state: stopped 1 alterOffsets: fromConfigMap: 2 name: my-connector-offsets 3 # ... 1 Changes the state of the connector to stopped . The default state for the connector when this property is not set is running . 2 The reference to the config map that provides the update. 3 The name of the config map, which is named my-connector-offsets in this example. Edit the config map to make the alteration. In this example, we're resetting the offset position for a source connector to 15000. Example source connector offset list configuration apiVersion: v1 kind: ConfigMap metadata: # ... data: offsets.json: |- 1 { "offsets" : [ { "partition" : { "filename" : "/data/myfile.txt" }, "offset" : { "position" : 15000 2 } } ] } 1 Edits must be made within the offsets.json property. 2 The updated offset position in the source partition. 
Run the command to update the offset position by annotating the KafkaConnector resource:

oc annotate kafkaconnector my-source-connector strimzi.io/connector-offsets=alter -n <namespace>

The annotation remains until either the update operation succeeds or it is manually removed from the resource.

Check the changes by using the procedure to list connector offsets.

Restart the connector by changing the state to running.

Example configuration to start a connector

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-source-connector
  labels:
    strimzi.io/cluster: my-connect-cluster
spec:
  state: running
  # ...

10.9.6. Resetting connector offsets

To reset connector offsets using KafkaConnector resources, configure the resource to stop the connector. After the connector is stopped, annotate the KafkaConnector resource to clear the offsets, then restart the connector.

In this procedure, we reset the offset position for a source connector named my-source-connector.

Prerequisites
The Cluster Operator is running.

Procedure
Edit the KafkaConnector resource to stop the connector.

Example configuration to stop a connector

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-source-connector
  labels:
    strimzi.io/cluster: my-connect-cluster
spec:
  # ...
  state: stopped 1
  # ...

1 Changes the state of the connector to stopped. The default state for the connector when this property is not set is running.

Run the command to reset the offset position by annotating the KafkaConnector resource:

oc annotate kafkaconnector my-source-connector strimzi.io/connector-offsets=reset -n <namespace>

The annotation remains until either the reset operation succeeds or it is manually removed from the resource.

Check the changes by using the procedure to list connector offsets. After resetting, the offsets.json property is empty.

Example source connector offset list

apiVersion: v1
kind: ConfigMap
metadata:
  # ...
data:
  offsets.json: |-
    {
      "offsets" : []
    }

Restart the connector by changing the state to running.

Example configuration to start a connector

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-source-connector
  labels:
    strimzi.io/cluster: my-connect-cluster
spec:
  state: running
  # ...

10.10. Configuring Kafka MirrorMaker 2

Update the spec properties of the KafkaMirrorMaker2 custom resource to configure your MirrorMaker 2 deployment. MirrorMaker 2 uses source cluster configuration for data consumption and target cluster configuration for data output.

MirrorMaker 2 is based on the Kafka Connect framework, with connectors managing the transfer of data between clusters. You configure MirrorMaker 2 to define the Kafka Connect deployment, including the connection details of the source and target clusters, and then run a set of MirrorMaker 2 connectors to make the connection.

MirrorMaker 2 supports topic configuration synchronization between the source and target clusters. You specify source topics in the MirrorMaker 2 configuration, and MirrorMaker 2 monitors them, detecting and propagating changes to the corresponding remote topics. Changes might include automatically creating missing topics and partitions.

Note In most cases you write to local topics and read from remote topics. Though write operations are not prevented on remote topics, they should be avoided.
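To illustrate the naming, with the default replication policy a source topic named my-topic replicated from a cluster with the alias my-cluster-source appears on the target cluster as the remote topic my-cluster-source.my-topic. This is a hypothetical example; the source cluster alias is prepended to the topic name, with a dot as the default separator.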
The configuration must specify:
Each Kafka cluster
Connection information for each cluster, including authentication
The replication flow and direction
Cluster to cluster
Topic to topic

For a deeper understanding of the Kafka MirrorMaker 2 cluster configuration options, refer to the Streams for Apache Kafka Custom Resource API Reference.

Note MirrorMaker 2 resource configuration differs from the previous version of MirrorMaker, which is now deprecated. There is currently no legacy support, so any resources must be manually converted into the new format.

Default configuration
MirrorMaker 2 provides default configuration values for properties such as replication factors. A minimal configuration, with defaults left unchanged, looks like this example:

Minimal configuration for MirrorMaker 2

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker2
metadata:
  name: my-mirror-maker2
spec:
  version: 3.9.0
  connectCluster: "my-cluster-target"
  clusters:
    - alias: "my-cluster-source"
      bootstrapServers: my-cluster-source-kafka-bootstrap:9092
    - alias: "my-cluster-target"
      bootstrapServers: my-cluster-target-kafka-bootstrap:9092
  mirrors:
    - sourceCluster: "my-cluster-source"
      targetCluster: "my-cluster-target"
      sourceConnector: {}

You can configure access control for source and target clusters using mTLS or SASL authentication. The fully annotated example below shows a configuration that uses TLS encryption and mTLS authentication for the source and target cluster.

You can specify the topics and consumer groups you wish to replicate from a source cluster in the KafkaMirrorMaker2 resource. You use the topicsPattern and groupsPattern properties to do this. You can provide a list of names or use a regular expression. By default, all topics and consumer groups are replicated if you do not set the topicsPattern and groupsPattern properties. You can also replicate all topics and consumer groups by using ".*" as a regular expression. However, specify only the topics and consumer groups you need, to avoid unnecessary load on the cluster. A short sketch of this selection is shown before the full example below.

Handling high volumes of messages
You can tune the configuration to handle high volumes of messages. For more information, see Handling high volumes of messages.
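As an illustration of topic and consumer group selection, the following fragment of a mirrors configuration limits replication to topics and groups matched by regular expressions. This is a sketch only, and the topic and group names are illustrative:

mirrors:
  - sourceCluster: "my-cluster-source"
    targetCluster: "my-cluster-target"
    topicsPattern: "orders-.*"
    groupsPattern: "billing-group|audit-group"
    sourceConnector: {}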
Example KafkaMirrorMaker2 custom resource configuration # Basic configuration (required) apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 # Deployment specifications spec: # Replicas (required) replicas: 3 1 # Connect cluster name (required) connectCluster: "my-cluster-target" 2 # Cluster configurations (required) clusters: 3 - alias: "my-cluster-source" 4 # Authentication (optional) authentication: 5 certificateAndKey: certificate: source.crt key: source.key secretName: my-user-source type: tls bootstrapServers: my-cluster-source-kafka-bootstrap:9092 6 # TLS configuration (optional) tls: 7 trustedCertificates: - pattern: "*.crt" secretName: my-cluster-source-cluster-ca-cert - alias: "my-cluster-target" 8 # Authentication (optional) authentication: 9 certificateAndKey: certificate: target.crt key: target.key secretName: my-user-target type: tls bootstrapServers: my-cluster-target-kafka-bootstrap:9092 10 # Kafka Connect configuration (optional) config: 11 config.storage.replication.factor: 1 offset.storage.replication.factor: 1 status.storage.replication.factor: 1 # TLS configuration (optional) tls: 12 trustedCertificates: - pattern: "*.crt" secretName: my-cluster-target-cluster-ca-cert # Mirroring configurations (required) mirrors: 13 - sourceCluster: "my-cluster-source" 14 targetCluster: "my-cluster-target" 15 # Topic and group patterns (required) topicsPattern: "topic1|topic2|topic3" 16 groupsPattern: "group1|group2|group3" 17 # Source connector configuration (required) sourceConnector: 18 tasksMax: 10 19 autoRestart: 20 enabled: true config: replication.factor: 1 21 offset-syncs.topic.replication.factor: 1 22 sync.topic.acls.enabled: "false" 23 refresh.topics.interval.seconds: 60 24 replication.policy.class: "org.apache.kafka.connect.mirror.IdentityReplicationPolicy" 25 # Heartbeat connector configuration (optional) heartbeatConnector: 26 autoRestart: enabled: true config: heartbeats.topic.replication.factor: 1 27 replication.policy.class: "org.apache.kafka.connect.mirror.IdentityReplicationPolicy" # Checkpoint connector configuration (optional) checkpointConnector: 28 autoRestart: enabled: true config: checkpoints.topic.replication.factor: 1 29 refresh.groups.interval.seconds: 600 30 sync.group.offsets.enabled: true 31 sync.group.offsets.interval.seconds: 60 32 emit.checkpoints.interval.seconds: 60 33 replication.policy.class: "org.apache.kafka.connect.mirror.IdentityReplicationPolicy" # Kafka version (recommended) version: 3.9.0 34 # Resources requests and limits (recommended) resources: 35 requests: cpu: "1" memory: 2Gi limits: cpu: "2" memory: 2Gi # Logging configuration (optional) logging: 36 type: inline loggers: connect.root.logger.level: INFO # Readiness probe (optional) readinessProbe: 37 initialDelaySeconds: 15 timeoutSeconds: 5 # Liveness probe (optional) livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 # JVM options (optional) jvmOptions: 38 "-Xmx": "1g" "-Xms": "1g" # Custom image (optional) image: my-org/my-image:latest 39 # Rack awareness (optional) rack: topologyKey: topology.kubernetes.io/zone 40 # Pod template (optional) template: 41 pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: application operator: In values: - postgresql - mongodb topologyKey: "kubernetes.io/hostname" connectContainer: 42 env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: "http://otlp-host:4317" # Tracing configuration 
(optional) tracing: type: opentelemetry 43 1 The number of replica nodes for the workers that run tasks. 2 Kafka cluster alias for Kafka Connect, which must specify the target Kafka cluster. The Kafka cluster is used by Kafka Connect for its internal topics. 3 Specification for the Kafka clusters being synchronized. 4 Cluster alias for the source Kafka cluster. 5 Authentication for the source cluster, specified as mTLS, token-based OAuth, SASL-based SCRAM-SHA-256/SCRAM-SHA-512, or PLAIN. 6 Bootstrap address for connection to the source Kafka cluster. The address takes the format <cluster_name>-kafka-bootstrap:<port_number> . The Kafka cluster doesn't need to be managed by Streams for Apache Kafka or deployed to an OpenShift cluster. 7 TLS configuration for encrypted connections to the Kafka cluster, with trusted certificates stored in X.509 format within the specified secrets. 8 Cluster alias for the target Kafka cluster. 9 Authentication for the target Kafka cluster is configured in the same way as for the source Kafka cluster. 10 Bootstrap address for connection to the target Kafka cluster. The address takes the format <cluster_name>-kafka-bootstrap:<port_number> . The Kafka cluster doesn't need to be managed by Streams for Apache Kafka or deployed to an OpenShift cluster. 11 Kafka Connect configuration. Standard Apache Kafka configuration may be provided, restricted to those properties not managed directly by Streams for Apache Kafka. 12 TLS encryption for the target Kafka cluster is configured in the same way as for the source Kafka cluster. 13 MirrorMaker 2 connectors. 14 Cluster alias for the source cluster used by the MirrorMaker 2 connectors. 15 Cluster alias for the target cluster used by the MirrorMaker 2 connectors. 16 Topic replication from the source cluster defined as a comma-separated list or regular expression pattern. The source connector replicates the specified topics. The checkpoint connector tracks offsets for the specified topics. Here we request three topics by name. 17 Consumer group replication from the source cluster defined as a comma-separated list or regular expression pattern. The checkpoint connector replicates the specified consumer groups. Here we request three consumer groups by name. 18 Configuration for the MirrorSourceConnector that creates remote topics. The config overrides the default configuration options. 19 The maximum number of tasks that the connector may create. Tasks handle the data replication and run in parallel. If the infrastructure supports the processing overhead, increasing this value can improve throughput. Kafka Connect distributes the tasks between members of the cluster. If there are more tasks than workers, workers are assigned multiple tasks. For sink connectors, aim to have one task for each topic partition consumed. For source connectors, the number of tasks that can run in parallel may also depend on the external system. The connector creates fewer than the maximum number of tasks if it cannot achieve the parallelism. 20 Enables automatic restarts of failed connectors and tasks. By default, the number of restarts is indefinite, but you can set a maximum on the number of automatic restarts using the maxRestarts property. 21 Replication factor for mirrored topics created at the target cluster. 22 Replication factor for the MirrorSourceConnector offset-syncs internal topic that maps the offsets of the source and target clusters. 23 When ACL rules synchronization is enabled, ACLs are applied to synchronized topics. 
The default is true . This feature is not compatible with the User Operator. If you are using the User Operator, set this property to false . 24 Optional setting to change the frequency of checks for new topics. The default is for a check every 10 minutes. 25 Adds a policy that overrides the automatic renaming of remote topics. Instead of prepending the name with the name of the source cluster, the topic retains its original name. This optional setting is useful for active/passive backups and data migration. The property must be specified for all connectors. For bidirectional (active/active) replication, use the DefaultReplicationPolicy class to automatically rename remote topics and specify the replication.policy.separator property for all connectors to add a custom separator. 26 Configuration for the MirrorHeartbeatConnector that performs connectivity checks. The config overrides the default configuration options. 27 Replication factor for the heartbeat topic created at the target cluster. 28 Configuration for the MirrorCheckpointConnector that tracks offsets. The config overrides the default configuration options. 29 Replication factor for the checkpoints topic created at the target cluster. 30 Optional setting to change the frequency of checks for new consumer groups. The default is for a check every 10 minutes. 31 Optional setting to synchronize consumer group offsets, which is useful for recovery in an active/passive configuration. Synchronization is not enabled by default. 32 If the synchronization of consumer group offsets is enabled, you can adjust the frequency of the synchronization. 33 Adjusts the frequency of checks for offset tracking. If you change the frequency of offset synchronization, you might also need to adjust the frequency of these checks. 34 The Kafka Connect and MirrorMaker 2 version, which will always be the same. 35 Requests for reservation of supported resources, currently cpu and memory , and limits to specify the maximum resources that can be consumed. 36 Specified Kafka Connect loggers and log levels added directly ( inline ) or indirectly ( external ) through a ConfigMap. A custom Log4j configuration must be placed under the log4j.properties or log4j2.properties key in the ConfigMap. For the Kafka Connect log4j.rootLogger logger, you can set the log level to INFO, ERROR, WARN, TRACE, DEBUG, FATAL or OFF. 37 Healthchecks to know when to restart a container (liveness) and when a container can accept traffic (readiness). 38 JVM configuration options to optimize performance for the Virtual Machine (VM) running Kafka MirrorMaker. 39 ADVANCED OPTION: Container image configuration, which is recommended only in special situations. 40 SPECIALIZED OPTION: Rack awareness configuration for the deployment. This is a specialized option intended for a deployment within the same location, not across regions. Use this option if you want connectors to consume from the closest replica rather than the leader replica. In certain cases, consuming from the closest replica can improve network utilization or reduce costs . The topologyKey must match a node label containing the rack ID. The example used in this configuration specifies a zone using the standard topology.kubernetes.io/zone label. To consume from the closest replica, enable the RackAwareReplicaSelector in the Kafka broker configuration. 41 Template customization. Here a pod is scheduled with anti-affinity, so the pod is not scheduled on nodes with the same hostname. 
42 Environment variables are set for distributed tracing. 43 Distributed tracing is enabled by using OpenTelemetry. 10.10.1. Configuring active/active or active/passive modes You can use MirrorMaker 2 in active/passive or active/active cluster configurations. active/active cluster configuration An active/active configuration has two active clusters replicating data bidirectionally. Applications can use either cluster. Each cluster can provide the same data. In this way, you can make the same data available in different geographical locations. As consumer groups are active in both clusters, consumer offsets for replicated topics are not synchronized back to the source cluster. active/passive cluster configuration An active/passive configuration has an active cluster replicating data to a passive cluster. The passive cluster remains on standby. You might use the passive cluster for data recovery in the event of system failure. The expectation is that producers and consumers connect to active clusters only. A MirrorMaker 2 cluster is required at each target destination. 10.10.1.1. Bidirectional replication (active/active) The MirrorMaker 2 architecture supports bidirectional replication in an active/active cluster configuration. Each cluster replicates the data of the other cluster using the concept of source and remote topics. As the same topics are stored in each cluster, remote topics are automatically renamed by MirrorMaker 2 to represent the source cluster. The name of the originating cluster is prepended to the name of the topic. Figure 10.1. Topic renaming By flagging the originating cluster, topics are not replicated back to that cluster. The concept of replication through remote topics is useful when configuring an architecture that requires data aggregation. Consumers can subscribe to source and remote topics within the same cluster, without the need for a separate aggregation cluster. 10.10.1.2. Unidirectional replication (active/passive) The MirrorMaker 2 architecture supports unidirectional replication in an active/passive cluster configuration. You can use an active/passive cluster configuration to make backups or migrate data to another cluster. In this situation, you might not want automatic renaming of remote topics. You can override automatic renaming by adding IdentityReplicationPolicy to the source connector configuration. With this configuration applied, topics retain their original names. 10.10.2. Configuring MirrorMaker 2 for multiple instances By default, Streams for Apache Kafka configures the group ID and names of the internal topics used by the Kafka Connect framework that MirrorMaker 2 runs on. When running multiple instances of MirrorMaker 2, and they share the same connectCluster value, you must change these default settings using the following config properties: apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: connectCluster: "my-cluster-target" clusters: - alias: "my-cluster-target" config: group.id: my-connect-cluster 1 offset.storage.topic: my-connect-cluster-offsets 2 config.storage.topic: my-connect-cluster-configs 3 status.storage.topic: my-connect-cluster-status 4 # ... # ... 1 The Kafka Connect cluster group ID within Kafka. 2 Kafka topic that stores connector offsets. 3 Kafka topic that stores connector and task status configurations. 4 Kafka topic that stores connector and task status updates. Note Values for the three topics must be the same for all instances with the same group.id . 
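For example, a second MirrorMaker 2 instance that shares the same connectCluster might be deployed with its own group ID and internal topic names, as in this sketch (the resource name and values are hypothetical):

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker2
metadata:
  name: my-mirror-maker2-2
spec:
  connectCluster: "my-cluster-target"
  clusters:
    - alias: "my-cluster-target"
      bootstrapServers: my-cluster-target-kafka-bootstrap:9092
      config:
        group.id: my-connect-cluster-2
        offset.storage.topic: my-connect-cluster-2-offsets
        config.storage.topic: my-connect-cluster-2-configs
        status.storage.topic: my-connect-cluster-2-status
  # ...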
The connectCluster setting specifies the alias of the target Kafka cluster used by Kafka Connect for its internal topics. As a result, modifications to the connectCluster , group ID, and internal topic naming configuration are specific to the target Kafka cluster. You don't need to make changes if two MirrorMaker 2 instances are using the same source Kafka cluster or in an active-active mode where each MirrorMaker 2 instance has a different connectCluster setting and target cluster. However, if multiple MirrorMaker 2 instances share the same connectCluster , each instance connecting to the same target Kafka cluster is deployed with the same values. In practice, this means all instances form a cluster and use the same internal topics. Multiple instances attempting to use the same internal topics will cause unexpected errors, so you must change the values of these properties for each instance. 10.10.3. Configuring MirrorMaker 2 connectors Use MirrorMaker 2 connector configuration for the internal connectors that orchestrate the synchronization of data between Kafka clusters. MirrorMaker 2 consists of the following connectors: MirrorSourceConnector The source connector replicates topics from a source cluster to a target cluster. It also replicates ACLs and is necessary for the MirrorCheckpointConnector to run. MirrorCheckpointConnector The checkpoint connector periodically tracks offsets. If enabled, it also synchronizes consumer group offsets between the source and target cluster. MirrorHeartbeatConnector The heartbeat connector periodically checks connectivity between the source and target cluster. The following table describes connector properties and the connectors you configure to use them. Table 10.3. MirrorMaker 2 connector configuration properties Property sourceConnector checkpointConnector heartbeatConnector admin.timeout.ms Timeout for admin tasks, such as detecting new topics. Default is 60000 (1 minute). [✓] [✓] [✓] replication.policy.class Policy to define the remote topic naming convention. Default is org.apache.kafka.connect.mirror.DefaultReplicationPolicy . [✓] [✓] [✓] replication.policy.separator The separator used for topic naming in the target cluster. By default, the separator is set to a dot (.). Separator configuration is only applicable to the DefaultReplicationPolicy replication policy class, which defines remote topic names. The IdentityReplicationPolicy class does not use the property as topics retain their original names. [✓] [✓] [✓] consumer.poll.timeout.ms Timeout when polling the source cluster. Default is 1000 (1 second). [✓] [✓] offset-syncs.topic.location The location of the offset-syncs topic, which can be the source (default) or target cluster. [✓] [✓] topic.filter.class Topic filter to select the topics to replicate. Default is org.apache.kafka.connect.mirror.DefaultTopicFilter . [✓] [✓] config.property.filter.class Topic filter to select the topic configuration properties to replicate. Default is org.apache.kafka.connect.mirror.DefaultConfigPropertyFilter . [✓] config.properties.exclude Topic configuration properties that should not be replicated. Supports comma-separated property names and regular expressions. [✓] offset.lag.max Maximum allowable (out-of-sync) offset lag before a remote partition is synchronized. Default is 100 . [✓] offset-syncs.topic.replication.factor Replication factor for the internal offset-syncs topic. Default is 3 . [✓] refresh.topics.enabled Enables check for new topics and partitions. Default is true . 
[✓] refresh.topics.interval.seconds Frequency of topic refresh. Default is 600 (10 minutes). By default, a check for new topics in the source cluster is made every 10 minutes. You can change the frequency by adding refresh.topics.interval.seconds to the source connector configuration. [✓] replication.factor The replication factor for new topics. Default is 2 . [✓] sync.topic.acls.enabled Enables synchronization of ACLs from the source cluster. Default is true . For more information, see Section 10.10.6, "Synchronizing ACL rules for remote topics" . [✓] sync.topic.acls.interval.seconds Frequency of ACL synchronization. Default is 600 (10 minutes). [✓] sync.topic.configs.enabled Enables synchronization of topic configuration from the source cluster. Default is true . [✓] sync.topic.configs.interval.seconds Frequency of topic configuration synchronization. Default 600 (10 minutes). [✓] checkpoints.topic.replication.factor Replication factor for the internal checkpoints topic. Default is 3 . [✓] emit.checkpoints.enabled Enables synchronization of consumer offsets to the target cluster. Default is true . [✓] emit.checkpoints.interval.seconds Frequency of consumer offset synchronization. Default is 60 (1 minute). [✓] group.filter.class Group filter to select the consumer groups to replicate. Default is org.apache.kafka.connect.mirror.DefaultGroupFilter . [✓] refresh.groups.enabled Enables check for new consumer groups. Default is true . [✓] refresh.groups.interval.seconds Frequency of consumer group refresh. Default is 600 (10 minutes). [✓] sync.group.offsets.enabled Enables synchronization of consumer group offsets to the target cluster __consumer_offsets topic. Default is false . [✓] sync.group.offsets.interval.seconds Frequency of consumer group offset synchronization. Default is 60 (1 minute). [✓] emit.heartbeats.enabled Enables connectivity checks on the target cluster. Default is true . [✓] emit.heartbeats.interval.seconds Frequency of connectivity checks. Default is 1 (1 second). [✓] heartbeats.topic.replication.factor Replication factor for the internal heartbeats topic. Default is 3 . [✓] 10.10.3.1. Changing the location of the consumer group offsets topic MirrorMaker 2 tracks offsets for consumer groups using internal topics. offset-syncs topic The offset-syncs topic maps the source and target offsets for replicated topic partitions from record metadata. checkpoints topic The checkpoints topic maps the last committed offset in the source and target cluster for replicated topic partitions in each consumer group. As they are used internally by MirrorMaker 2, you do not interact directly with these topics. MirrorCheckpointConnector emits checkpoints for offset tracking. Offsets for the checkpoints topic are tracked at predetermined intervals through configuration. Both topics enable replication to be fully restored from the correct offset position on failover. The location of the offset-syncs topic is the source cluster by default. You can use the offset-syncs.topic.location connector configuration to change this to the target cluster. You need read/write access to the cluster that contains the topic. Using the target cluster as the location of the offset-syncs topic allows you to use MirrorMaker 2 even if you have only read access to the source cluster. 10.10.3.2. Synchronizing consumer group offsets The __consumer_offsets topic stores information on committed offsets for each consumer group. 
Offset synchronization periodically transfers the consumer offsets for the consumer groups of a source cluster into the consumer offsets topic of a target cluster. Offset synchronization is particularly useful in an active/passive configuration. If the active cluster goes down, consumer applications can switch to the passive (standby) cluster and pick up from the last transferred offset position. To use topic offset synchronization, enable the synchronization by adding sync.group.offsets.enabled to the checkpoint connector configuration, and setting the property to true . Synchronization is disabled by default. When using the IdentityReplicationPolicy in the source connector, it also has to be configured in the checkpoint connector configuration. This ensures that the mirrored consumer offsets will be applied for the correct topics. Consumer offsets are only synchronized for consumer groups that are not active in the target cluster. If the consumer groups are in the target cluster, the synchronization cannot be performed and an UNKNOWN_MEMBER_ID error is returned. If enabled, the synchronization of offsets from the source cluster is made periodically. You can change the frequency by adding sync.group.offsets.interval.seconds and emit.checkpoints.interval.seconds to the checkpoint connector configuration. The properties specify the frequency in seconds that the consumer group offsets are synchronized, and the frequency of checkpoints emitted for offset tracking. The default for both properties is 60 seconds. You can also change the frequency of checks for new consumer groups using the refresh.groups.interval.seconds property, which is performed every 10 minutes by default. Because the synchronization is time-based, any switchover by consumers to a passive cluster will likely result in some duplication of messages. Note If you have an application written in Java, you can use the RemoteClusterUtils.java utility to synchronize offsets through the application. The utility fetches remote offsets for a consumer group from the checkpoints topic. 10.10.3.3. Deciding when to use the heartbeat connector The heartbeat connector emits heartbeats to check connectivity between source and target Kafka clusters. An internal heartbeat topic is replicated from the source cluster, which means that the heartbeat connector must be connected to the source cluster. The heartbeat topic is located on the target cluster, which allows it to do the following: Identify all source clusters it is mirroring data from Verify the liveness and latency of the mirroring process This helps to make sure that the process is not stuck or has stopped for any reason. While the heartbeat connector can be a valuable tool for monitoring the mirroring processes between Kafka clusters, it's not always necessary to use it. For example, if your deployment has low network latency or a small number of topics, you might prefer to monitor the mirroring process using log messages or other monitoring tools. If you decide not to use the heartbeat connector, simply omit it from your MirrorMaker 2 configuration. 10.10.3.4. Aligning the configuration of MirrorMaker 2 connectors To ensure that MirrorMaker 2 connectors work properly, make sure to align certain configuration settings across connectors. 
Specifically, ensure that the following properties have the same value across all applicable connectors:
replication.policy.class
replication.policy.separator
offset-syncs.topic.location
topic.filter.class

For example, the value for replication.policy.class must be the same for the source, checkpoint, and heartbeat connectors. Mismatched or missing settings cause issues with data replication or offset syncing, so it's essential to keep all relevant connectors configured with the same settings.

10.10.3.5. Listing the offsets of MirrorMaker 2 connectors

To list the offset positions of the internal MirrorMaker 2 connectors, use the same configuration that's used to manage Kafka Connect connectors. For more information on setting up the configuration and listing offsets, see Section 10.9.4, "Listing connector offsets".

In this example, the sourceConnector configuration is updated to return the connector offset position. The offset information is written to a specified config map.

Example configuration for MirrorMaker 2 connector

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker2
metadata:
  name: my-mirror-maker2
spec:
  version: 3.9.0
  # ...
  clusters:
    - alias: "my-cluster-source"
      bootstrapServers: my-cluster-source-kafka-bootstrap:9092
    - alias: "my-cluster-target"
      bootstrapServers: my-cluster-target-kafka-bootstrap:9092
  mirrors:
    - sourceCluster: "my-cluster-source"
      targetCluster: "my-cluster-target"
      sourceConnector:
        listOffsets:
          toConfigMap:
            name: my-connector-offsets
  # ...

You must apply the following annotations to the KafkaMirrorMaker2 resource to be able to manage connector offsets:
strimzi.io/connector-offsets
strimzi.io/mirrormaker-connector

The strimzi.io/mirrormaker-connector annotation must be set to the name of the connector. These annotations remain until the operation succeeds or they are manually removed from the resource.

MirrorMaker 2 connectors are named using the aliases of the source and target clusters, followed by the connector type: <source_alias>-><target_alias>.<connector_type>. In the following example, the annotations are applied for a connector named my-cluster-source->my-cluster-target.MirrorSourceConnector.

Example application of annotations for connector

oc annotate kafkamirrormaker2 my-mirror-maker2 strimzi.io/connector-offsets=list strimzi.io/mirrormaker-connector="my-cluster-source->my-cluster-target.MirrorSourceConnector" -n kafka

The offsets are listed in the specified config map. Streams for Apache Kafka puts the offset information into a .json property named after the connector. This does not overwrite any other properties when updating an existing config map.

Example source connector offset list

apiVersion: v1
kind: ConfigMap
metadata:
  # ...
  ownerReferences: 1
    - apiVersion: kafka.strimzi.io/v1beta2
      blockOwnerDeletion: false
      controller: false
      kind: KafkaMirrorMaker2
      name: my-mirror-maker2
      uid: 637e3be7-bd96-43ab-abde-c55b4c4550e0
data:
  my-cluster-source--my-cluster-target.MirrorSourceConnector.json: |- 2
    {
      "offsets": [
        {
          "partition": {
            "cluster": "east-kafka",
            "partition": 0,
            "topic": "mirrormaker2-cluster-configs"
          },
          "offset": {
            "offset": 0
          }
        }
      ]
    }

1 The owner reference pointing to the KafkaMirrorMaker2 resource. To provide a custom owner reference, create the config map in advance and set the owner reference.
2 The .json property uses the connector name. Since -> characters are not allowed in config map keys, -> is changed to -- in the connector name.
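The same pattern applies to the other internal connectors. For instance, assuming the checkpointConnector has been given its own listOffsets configuration, you might list its offsets as follows (a hypothetical example using the same cluster aliases):

oc annotate kafkamirrormaker2 my-mirror-maker2 strimzi.io/connector-offsets=list strimzi.io/mirrormaker-connector="my-cluster-source->my-cluster-target.MirrorCheckpointConnector" -n kafka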
Note It is possible to use configuration to alter or reset connector offsets, though this is rarely necessary. 10.10.4. Configuring MirrorMaker 2 connector producers and consumers MirrorMaker 2 connectors use internal producers and consumers. If needed, you can configure these producers and consumers to override the default settings. For example, you can increase the batch.size for the source producer that sends topics to the target Kafka cluster to better accommodate large volumes of messages. Important Producer and consumer configuration options depend on the MirrorMaker 2 implementation, and may be subject to change. The following tables describe the producers and consumers for each of the connectors and where you can add configuration. Table 10.4. Source connector producers and consumers Type Description Configuration Producer Sends topic messages to the target Kafka cluster. Consider tuning the configuration of this producer when it is handling large volumes of data. mirrors.sourceConnector.config: producer.override.* Producer Writes to the offset-syncs topic, which maps the source and target offsets for replicated topic partitions. mirrors.sourceConnector.config: producer.* Consumer Retrieves topic messages from the source Kafka cluster. mirrors.sourceConnector.config: consumer.* Table 10.5. Checkpoint connector producers and consumers Type Description Configuration Producer Emits consumer offset checkpoints. mirrors.checkpointConnector.config: producer.override.* Consumer Loads the offset-syncs topic. mirrors.checkpointConnector.config: consumer.* Note You can set offset-syncs.topic.location to target to use the target Kafka cluster as the location of the offset-syncs topic. Table 10.6. Heartbeat connector producer Type Description Configuration Producer Emits heartbeats. mirrors.heartbeatConnector.config: producer.override.* The following example shows how you configure the producers and consumers. Example configuration for connector producers and consumers apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: version: 3.9.0 # ... mirrors: - sourceCluster: "my-cluster-source" targetCluster: "my-cluster-target" sourceConnector: tasksMax: 5 config: producer.override.batch.size: 327680 producer.override.linger.ms: 100 producer.request.timeout.ms: 30000 consumer.fetch.max.bytes: 52428800 # ... checkpointConnector: config: producer.override.request.timeout.ms: 30000 consumer.max.poll.interval.ms: 300000 # ... heartbeatConnector: config: producer.override.request.timeout.ms: 30000 # ... 10.10.5. Specifying a maximum number of data replication tasks Connectors create the tasks that are responsible for moving data in and out of Kafka. Each connector comprises one or more tasks that are distributed across a group of worker pods that run the tasks. Increasing the number of tasks can help with performance issues when replicating a large number of partitions or synchronizing the offsets of a large number of consumer groups. Tasks run in parallel. Workers are assigned one or more tasks. A single task is handled by one worker pod, so you don't need more worker pods than tasks. If there are more tasks than workers, workers handle multiple tasks. You can specify the maximum number of connector tasks in your MirrorMaker configuration using the tasksMax property. Without specifying a maximum number of tasks, the default setting is a single task. The heartbeat connector always uses a single task. 
The number of tasks that are started for the source and checkpoint connectors is the lower value between the maximum number of possible tasks and the value for tasksMax . For the source connector, the maximum number of tasks possible is one for each partition being replicated from the source cluster. For the checkpoint connector, the maximum number of tasks possible is one for each consumer group being replicated from the source cluster. When setting a maximum number of tasks, consider the number of partitions and the hardware resources that support the process. If the infrastructure supports the processing overhead, increasing the number of tasks can improve throughput and latency. For example, adding more tasks reduces the time taken to poll the source cluster when there is a high number of partitions or consumer groups. Increasing the number of tasks for the source connector is useful when you have a large number of partitions. Increasing the number of tasks for the source connector apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: # ... mirrors: - sourceCluster: "my-cluster-source" targetCluster: "my-cluster-target" sourceConnector: tasksMax: 10 # ... Increasing the number of tasks for the checkpoint connector is useful when you have a large number of consumer groups. Increasing the number of tasks for the checkpoint connector apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: # ... mirrors: - sourceCluster: "my-cluster-source" targetCluster: "my-cluster-target" checkpointConnector: tasksMax: 10 # ... By default, MirrorMaker 2 checks for new consumer groups every 10 minutes. You can adjust the refresh.groups.interval.seconds configuration to change the frequency. Take care when lowering the interval, as more frequent checks can have a negative impact on performance. 10.10.5.1. Checking connector task operations If you are using Prometheus and Grafana to monitor your deployment, you can check MirrorMaker 2 performance. The example MirrorMaker 2 Grafana dashboard provided with Streams for Apache Kafka shows the following metrics related to tasks and latency. The number of tasks Replication latency Offset synchronization latency Additional resources Introducing metrics 10.10.6. Synchronizing ACL rules for remote topics When using MirrorMaker 2 with Streams for Apache Kafka, it is possible to synchronize ACL rules for remote topics. However, this feature is only available if you are not using the User Operator. If you are using type: simple authorization without the User Operator, the ACL rules that manage access to brokers also apply to remote topics. This means that users who have read access to a source topic can also read its remote equivalent. Note OAuth 2.0 authorization does not support access to remote topics in this way. 10.10.7. Securing a Kafka MirrorMaker 2 deployment This procedure outlines the configuration required to secure a MirrorMaker 2 deployment. You need separate configuration for the source Kafka cluster and the target Kafka cluster. You also need separate user configuration to provide the credentials required for MirrorMaker to connect to the source and target Kafka clusters. For the Kafka clusters, you specify internal listeners for secure connections within an OpenShift cluster and external listeners for connections outside the OpenShift cluster. You can configure authentication and authorization mechanisms.
The security options implemented for the source and target Kafka clusters must be compatible with the security options implemented for MirrorMaker 2. After you have created the cluster and user authentication credentials, you specify them in your MirrorMaker configuration for secure connections. Note In this procedure, the certificates generated by the Cluster Operator are used, but you can replace them by installing your own certificates . You can also configure your listener to use a Kafka listener certificate managed by an external CA (certificate authority) . Before you start Before starting this procedure, take a look at the example configuration files provided by Streams for Apache Kafka. They include examples for securing a deployment of MirrorMaker 2 using mTLS or SCRAM-SHA-512 authentication. The examples specify internal listeners for connecting within an OpenShift cluster. The examples also provide the configuration for full authorization, including the ACLs that allow user operations on the source and target Kafka clusters. When configuring user access to source and target Kafka clusters, ACLs must grant access rights to internal MirrorMaker 2 connectors and read/write access to the cluster group and internal topics used by the underlying Kafka Connect framework in the target cluster. If you've renamed the cluster group or internal topics, such as when configuring MirrorMaker 2 for multiple instances , use those names in the ACLs configuration. Simple authorization uses ACL rules managed by the Kafka AclAuthorizer and StandardAuthorizer plugins to ensure appropriate access levels. For more information on configuring a KafkaUser resource to use simple authorization, see the AclRule schema reference . Prerequisites Streams for Apache Kafka is running Separate namespaces for source and target clusters The procedure assumes that the source and target Kafka clusters are installed to separate namespaces. If you want to use the Topic Operator, you'll need to do this. The Topic Operator only watches a single cluster in a specified namespace. By separating the clusters into namespaces, you will need to copy the cluster secrets so they can be accessed outside the namespace. You need to reference the secrets in the MirrorMaker configuration. Procedure Configure two Kafka resources, one to secure the source Kafka cluster and one to secure the target Kafka cluster. You can add listener configuration for authentication and enable authorization. In this example, an internal listener is configured for a Kafka cluster with TLS encryption and mTLS authentication. Kafka simple authorization is enabled. 
Example source Kafka cluster configuration with TLS encryption and mTLS authentication apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-source-cluster spec: kafka: version: 3.9.0 replicas: 1 listeners: - name: tls port: 9093 type: internal tls: true authentication: type: tls authorization: type: simple config: offsets.topic.replication.factor: 1 transaction.state.log.replication.factor: 1 transaction.state.log.min.isr: 1 default.replication.factor: 1 min.insync.replicas: 1 inter.broker.protocol.version: "3.9" storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false zookeeper: replicas: 1 storage: type: persistent-claim size: 100Gi deleteClaim: false entityOperator: topicOperator: {} userOperator: {} Example target Kafka cluster configuration with TLS encryption and mTLS authentication apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-target-cluster spec: kafka: version: 3.9.0 replicas: 1 listeners: - name: tls port: 9093 type: internal tls: true authentication: type: tls authorization: type: simple config: offsets.topic.replication.factor: 1 transaction.state.log.replication.factor: 1 transaction.state.log.min.isr: 1 default.replication.factor: 1 min.insync.replicas: 1 inter.broker.protocol.version: "3.9" storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false zookeeper: replicas: 1 storage: type: persistent-claim size: 100Gi deleteClaim: false entityOperator: topicOperator: {} userOperator: {} Create or update the Kafka resources in separate namespaces. oc apply -f <kafka_configuration_file> -n <namespace> The Cluster Operator creates the listeners and sets up the cluster and client certificate authority (CA) certificates to enable authentication within the Kafka cluster. The certificates are created in the secret <cluster_name> -cluster-ca-cert . Configure two KafkaUser resources, one for a user of the source Kafka cluster and one for a user of the target Kafka cluster. Configure the same authentication and authorization types as the corresponding source and target Kafka cluster. For example, if you used tls authentication and the simple authorization type in the Kafka configuration for the source Kafka cluster, use the same in the KafkaUser configuration. Configure the ACLs needed by MirrorMaker 2 to allow operations on the source and target Kafka clusters. 
Example source user configuration for mTLS authentication apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-source-user labels: strimzi.io/cluster: my-source-cluster spec: authentication: type: tls authorization: type: simple acls: # MirrorSourceConnector - resource: # Not needed if offset-syncs.topic.location=target type: topic name: mm2-offset-syncs.my-target-cluster.internal operations: - Create - DescribeConfigs - Read - Write - resource: # Needed for every topic which is mirrored type: topic name: "*" operations: - DescribeConfigs - Read # MirrorCheckpointConnector - resource: type: cluster operations: - Describe - resource: # Needed for every group for which offsets are synced type: group name: "*" operations: - Describe - resource: # Not needed if offset-syncs.topic.location=target type: topic name: mm2-offset-syncs.my-target-cluster.internal operations: - Read Example target user configuration for mTLS authentication apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-target-user labels: strimzi.io/cluster: my-target-cluster spec: authentication: type: tls authorization: type: simple acls: # cluster group - resource: type: group name: mirrormaker2-cluster operations: - Read # access to config.storage.topic - resource: type: topic name: mirrormaker2-cluster-configs operations: - Create - Describe - DescribeConfigs - Read - Write # access to status.storage.topic - resource: type: topic name: mirrormaker2-cluster-status operations: - Create - Describe - DescribeConfigs - Read - Write # access to offset.storage.topic - resource: type: topic name: mirrormaker2-cluster-offsets operations: - Create - Describe - DescribeConfigs - Read - Write # MirrorSourceConnector - resource: # Needed for every topic which is mirrored type: topic name: "*" operations: - Create - Alter - AlterConfigs - Write # MirrorCheckpointConnector - resource: type: cluster operations: - Describe - resource: type: topic name: my-source-cluster.checkpoints.internal operations: - Create - Describe - Read - Write - resource: # Needed for every group for which the offset is synced type: group name: "*" operations: - Read - Describe # MirrorHeartbeatConnector - resource: type: topic name: heartbeats operations: - Create - Describe - Write Note You can use a certificate issued outside the User Operator by setting type to tls-external . For more information, see the KafkaUserSpec schema reference . Create or update a KafkaUser resource in each of the namespaces you created for the source and target Kafka clusters. oc apply -f <kafka_user_configuration_file> -n <namespace> The User Operator creates the users representing the client (MirrorMaker), and the security credentials used for client authentication, based on the chosen authentication type. The User Operator creates a new secret with the same name as the KafkaUser resource. The secret contains a private and public key for mTLS authentication. The public key is contained in a user certificate, which is signed by the clients CA. Configure a KafkaMirrorMaker2 resource with the authentication details to connect to the source and target Kafka clusters. 
Example MirrorMaker 2 configuration with TLS encryption and mTLS authentication apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker-2 spec: version: 3.9.0 replicas: 1 connectCluster: "my-target-cluster" clusters: - alias: "my-source-cluster" bootstrapServers: my-source-cluster-kafka-bootstrap:9093 tls: 1 trustedCertificates: - secretName: my-source-cluster-cluster-ca-cert pattern: "*.crt" authentication: 2 type: tls certificateAndKey: secretName: my-source-user certificate: user.crt key: user.key - alias: "my-target-cluster" bootstrapServers: my-target-cluster-kafka-bootstrap:9093 tls: 3 trustedCertificates: - secretName: my-target-cluster-cluster-ca-cert pattern: "*.crt" authentication: 4 type: tls certificateAndKey: secretName: my-target-user certificate: user.crt key: user.key config: # -1 means it will use the default replication factor configured in the broker config.storage.replication.factor: -1 offset.storage.replication.factor: -1 status.storage.replication.factor: -1 mirrors: - sourceCluster: "my-source-cluster" targetCluster: "my-target-cluster" sourceConnector: config: replication.factor: 1 offset-syncs.topic.replication.factor: 1 sync.topic.acls.enabled: "false" heartbeatConnector: config: heartbeats.topic.replication.factor: 1 checkpointConnector: config: checkpoints.topic.replication.factor: 1 sync.group.offsets.enabled: "true" topicsPattern: "topic1|topic2|topic3" groupsPattern: "group1|group2|group3" 1 The TLS certificates for the source Kafka cluster. If they are in a separate namespace, copy the cluster secrets from the namespace of the Kafka cluster. 2 The user authentication for accessing the source Kafka cluster using the TLS mechanism. 3 The TLS certificates for the target Kafka cluster. 4 The user authentication for accessing the target Kafka cluster. Create or update the KafkaMirrorMaker2 resource in the same namespace as the target Kafka cluster. oc apply -f <mirrormaker2_configuration_file> -n <namespace_of_target_cluster> 10.10.8. Manually stopping or pausing MirrorMaker 2 connectors If you are using KafkaMirrorMaker2 resources to configure internal MirrorMaker connectors, use the state configuration to either stop or pause a connector. In contrast to the paused state, where the connector and tasks remain instantiated, stopping a connector retains only the configuration, with no active processes. Stopping a connector from running may be more suitable for longer durations than just pausing. While a paused connector is quicker to resume, a stopped connector has the advantages of freeing up memory and resources. Note The state configuration replaces the (deprecated) pause configuration in the KafkaMirrorMaker2ConnectorSpec schema, which allows pauses on connectors. If you were previously using the pause configuration to pause connectors, we encourage you to transition to using the state configuration only to avoid conflicts. Prerequisites The Cluster Operator is running. Procedure Find the name of the KafkaMirrorMaker2 custom resource that controls the MirrorMaker 2 connector you want to pause or stop: oc get KafkaMirrorMaker2 Edit the KafkaMirrorMaker2 resource to stop or pause the connector. Example configuration for stopping a MirrorMaker 2 connector apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: version: 3.9.0 replicas: 3 connectCluster: "my-cluster-target" clusters: # ... 
mirrors: - sourceCluster: "my-cluster-source" targetCluster: "my-cluster-target" sourceConnector: tasksMax: 10 autoRestart: enabled: true state: stopped # ... Change the state configuration to stopped or paused . The default state for the connector when this property is not set is running . Apply the changes to the KafkaMirrorMaker2 configuration. You can resume the connector by changing state to running or removing the configuration. Note Alternatively, you can expose the Kafka Connect API and use the stop and pause endpoints to stop a connector from running. For example, PUT /connectors/<connector_name>/stop . You can then use the resume endpoint to restart it. 10.10.9. Manually restarting MirrorMaker 2 connectors Use the strimzi.io/restart-connector annotation to manually trigger a restart of a MirrorMaker 2 connector. Prerequisites The Cluster Operator is running. Procedure Find the name of the KafkaMirrorMaker2 custom resource that controls the Kafka MirrorMaker 2 connector you want to restart: oc get KafkaMirrorMaker2 Find the name of the Kafka MirrorMaker 2 connector to be restarted from the KafkaMirrorMaker2 custom resource: oc describe KafkaMirrorMaker2 <mirrormaker_cluster_name> Use the name of the connector to restart the connector by annotating the KafkaMirrorMaker2 resource in OpenShift: oc annotate KafkaMirrorMaker2 <mirrormaker_cluster_name> "strimzi.io/restart-connector=<mirrormaker_connector_name>" In this example, connector my-connector in the my-mirror-maker-2 cluster is restarted: oc annotate KafkaMirrorMaker2 my-mirror-maker-2 "strimzi.io/restart-connector=my-connector" Wait for the reconciliation to occur (every two minutes by default). The MirrorMaker 2 connector is restarted, as long as the annotation was detected by the reconciliation process. When MirrorMaker 2 accepts the request, the annotation is removed from the KafkaMirrorMaker2 custom resource. 10.10.10. Manually restarting MirrorMaker 2 connector tasks Use the strimzi.io/restart-connector-task annotation to manually trigger a restart of a MirrorMaker 2 connector task. Prerequisites The Cluster Operator is running. Procedure Find the name of the KafkaMirrorMaker2 custom resource that controls the MirrorMaker 2 connector task you want to restart: oc get KafkaMirrorMaker2 Find the name of the connector and the ID of the task to be restarted from the KafkaMirrorMaker2 custom resource: oc describe KafkaMirrorMaker2 <mirrormaker_cluster_name> Task IDs are non-negative integers, starting from 0. Use the name and ID to restart the connector task by annotating the KafkaMirrorMaker2 resource in OpenShift: oc annotate KafkaMirrorMaker2 <mirrormaker_cluster_name> "strimzi.io/restart-connector-task=<mirrormaker_connector_name>:<task_id>" In this example, task 0 for connector my-connector in the my-mirror-maker-2 cluster is restarted: oc annotate KafkaMirrorMaker2 my-mirror-maker-2 "strimzi.io/restart-connector-task=my-connector:0" Wait for the reconciliation to occur (every two minutes by default). The MirrorMaker 2 connector task is restarted, as long as the annotation was detected by the reconciliation process. When MirrorMaker 2 accepts the request, the annotation is removed from the KafkaMirrorMaker2 custom resource. 10.11. Configuring Kafka MirrorMaker (deprecated) Update the spec properties of the KafkaMirrorMaker custom resource to configure your Kafka MirrorMaker deployment. You can configure access control for producers and consumers using TLS or SASL authentication.
This procedure shows a configuration that uses TLS encryption and mTLS authentication on the consumer and producer side. For a deeper understanding of the Kafka MirrorMaker cluster configuration options, refer to the Streams for Apache Kafka Custom Resource API Reference . Important Kafka MirrorMaker 1 (referred to as just MirrorMaker in the documentation) has been deprecated in Apache Kafka 3.0.0 and will be removed in Apache Kafka 4.0.0. As a result, the KafkaMirrorMaker custom resource which is used to deploy Kafka MirrorMaker 1 has been deprecated in Streams for Apache Kafka as well. The KafkaMirrorMaker resource will be removed from Streams for Apache Kafka when we adopt Apache Kafka 4.0.0. As a replacement, use the KafkaMirrorMaker2 custom resource with the IdentityReplicationPolicy . Example KafkaMirrorMaker custom resource configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker metadata: name: my-mirror-maker spec: replicas: 3 1 consumer: bootstrapServers: my-source-cluster-kafka-bootstrap:9092 2 groupId: "my-group" 3 numStreams: 2 4 offsetCommitInterval: 120000 5 tls: 6 trustedCertificates: - secretName: my-source-cluster-ca-cert pattern: "*.crt" authentication: 7 type: tls certificateAndKey: secretName: my-source-secret certificate: public.crt key: private.key config: 8 max.poll.records: 100 receive.buffer.bytes: 32768 producer: bootstrapServers: my-target-cluster-kafka-bootstrap:9092 abortOnSendFailure: false 9 tls: trustedCertificates: - secretName: my-target-cluster-ca-cert pattern: "*.crt" authentication: type: tls certificateAndKey: secretName: my-target-secret certificate: public.crt key: private.key config: compression.type: gzip batch.size: 8192 include: "my-topic|other-topic" 10 resources: 11 requests: cpu: "1" memory: 2Gi limits: cpu: "2" memory: 2Gi logging: 12 type: inline loggers: mirrormaker.root.logger: INFO readinessProbe: 13 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 metricsConfig: 14 type: jmxPrometheusExporter valueFrom: configMapKeyRef: name: my-config-map key: my-key jvmOptions: 15 "-Xmx": "1g" "-Xms": "1g" image: my-org/my-image:latest 16 template: 17 pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: application operator: In values: - postgresql - mongodb topologyKey: "kubernetes.io/hostname" mirrorMakerContainer: 18 env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: "http://otlp-host:4317" tracing: 19 type: opentelemetry 1 The number of replica nodes. 2 Bootstrap servers for consumer and producer. 3 Group ID for the consumer. 4 The number of consumer streams. 5 The offset auto-commit interval in milliseconds. 6 TLS configuration for encrypted connections to the Kafka cluster, with trusted certificates stored in X.509 format within the specified secrets. 7 Authentication for consumer or producer, specified as mTLS, token-based OAuth, SASL-based SCRAM-SHA-256/SCRAM-SHA-512, or PLAIN. 8 Kafka configuration options for consumer and producer. 9 If the abortOnSendFailure property is set to true , Kafka MirrorMaker will exit and the container will restart following a send failure for a message. 10 A list of included topics mirrored from source to target Kafka cluster. 11 Requests for reservation of supported resources, currently cpu and memory , and limits to specify the maximum resources that can be consumed. 
12 Specified loggers and log levels added directly ( inline ) or indirectly ( external ) through a ConfigMap. A custom Log4j configuration must be placed under the log4j.properties or log4j2.properties key in the ConfigMap. MirrorMaker has a single logger called mirrormaker.root.logger . You can set the log level to INFO, ERROR, WARN, TRACE, DEBUG, FATAL or OFF. 13 Healthchecks to know when to restart a container (liveness) and when a container can accept traffic (readiness). 14 Prometheus metrics, which are enabled by referencing a ConfigMap containing configuration for the Prometheus JMX exporter in this example. You can enable metrics without further configuration using a reference to a ConfigMap containing an empty file under metricsConfig.valueFrom.configMapKeyRef.key . 15 JVM configuration options to optimize performance for the Virtual Machine (VM) running Kafka MirrorMaker. 16 ADVANCED OPTION: Container image configuration, which is recommended only in special situations. 17 Template customization. Here a pod is scheduled with anti-affinity, so the pod is not scheduled on nodes with the same hostname. 18 Environment variables are set for distributed tracing. 19 Distributed tracing is enabled by using OpenTelemetry. Warning With the abortOnSendFailure property set to false , the producer attempts to send the next message in a topic. The original message might be lost, as there is no attempt to resend a failed message. 10.12. Configuring the Kafka Bridge Update the spec properties of the KafkaBridge custom resource to configure your Kafka Bridge deployment. In order to prevent issues arising when client consumer requests are processed by different Kafka Bridge instances, address-based routing must be employed to ensure that requests are routed to the right Kafka Bridge instance; one way to achieve this is sketched after this introduction. Additionally, each independent Kafka Bridge instance must have a replica. A Kafka Bridge instance has its own state which is not shared with other instances. For a deeper understanding of the Kafka Bridge and its cluster configuration options, refer to the Using the Kafka Bridge guide and the Streams for Apache Kafka Custom Resource API Reference .
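One way to meet the address-based routing requirement on OpenShift is to expose the Kafka Bridge service through a Route that uses source-based load balancing, so that requests from the same client address always reach the same Kafka Bridge pod. A minimal sketch, assuming a bridge named my-bridge; the my-bridge-bridge-service service name and rest-api port name are assumptions based on the default names created for a KafkaBridge deployment:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-bridge-route
  annotations:
    # Send requests from the same source IP to the same backend pod
    haproxy.router.openshift.io/balance: source
spec:
  to:
    kind: Service
    name: my-bridge-bridge-service
  port:
    targetPort: rest-api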
Example KafkaBridge custom resource configuration # Basic configuration (required) apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # Replicas (required) replicas: 3 1 # Kafka bootstrap servers (required) bootstrapServers: <cluster_name> -cluster-kafka-bootstrap:9092 2 # HTTP configuration (required) http: 3 port: 8080 # CORS configuration (optional) cors: 4 allowedOrigins: "https://strimzi.io" allowedMethods: "GET,POST,PUT,DELETE,OPTIONS,PATCH" # Resources requests and limits (recommended) resources: 5 requests: cpu: "1" memory: 2Gi limits: cpu: "2" memory: 2Gi # TLS configuration (optional) tls: 6 trustedCertificates: - secretName: my-cluster-cluster-cert pattern: "*.crt" - secretName: my-cluster-cluster-cert certificate: ca2.crt # Authentication (optional) authentication: 7 type: tls certificateAndKey: secretName: my-secret certificate: public.crt key: private.key # Consumer configuration (optional) consumer: 8 config: auto.offset.reset: earliest # Producer configuration (optional) producer: 9 config: delivery.timeout.ms: 300000 # Logging configuration (optional) logging: 10 type: inline loggers: logger.bridge.level: INFO # Enabling DEBUG just for send operation logger.send.name: "http.openapi.operation.send" logger.send.level: DEBUG # JVM options (optional) jvmOptions: 11 "-Xmx": "1g" "-Xms": "1g" # Readiness probe (optional) readinessProbe: 12 initialDelaySeconds: 15 timeoutSeconds: 5 # Liveness probe (optional) livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 # Custom image (optional) image: my-org/my-image:latest 13 # Pod template (optional) template: 14 pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: application operator: In values: - postgresql - mongodb topologyKey: "kubernetes.io/hostname" bridgeContainer: 15 env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: "http://otlp-host:4317" # Tracing configuration (optional) tracing: type: opentelemetry 16 1 The number of replica nodes. 2 Bootstrap address for connection to the target Kafka cluster. The address takes the format <cluster_name>-kafka-bootstrap:<port_number> . The Kafka cluster doesn't need to be managed by Streams for Apache Kafka or deployed to an OpenShift cluster. 3 HTTP access to Kafka brokers. 4 CORS access specifying selected resources and access methods. Additional HTTP headers in requests describe the origins that are permitted access to the Kafka cluster. 5 Requests for reservation of supported resources, currently cpu and memory , and limits to specify the maximum resources that can be consumed. 6 TLS configuration for encrypted connections to the Kafka cluster, with trusted certificates stored in X.509 format within the specified secrets. 7 Authentication for the Kafka Bridge cluster, specified as mTLS, token-based OAuth, SASL-based SCRAM-SHA-256/SCRAM-SHA-512, or PLAIN. By default, the Kafka Bridge connects to Kafka brokers without authentication. 8 Consumer configuration options. 9 Producer configuration options. 10 Specified Kafka Bridge loggers and log levels added directly ( inline ) or indirectly ( external ) through a ConfigMap. A custom Log4j configuration must be placed under the log4j.properties or log4j2.properties key in the ConfigMap. For the Kafka Bridge loggers, you can set the log level to INFO, ERROR, WARN, TRACE, DEBUG, FATAL or OFF. 
11 JVM configuration options to optimize performance for the Virtual Machine (VM) running the Kafka Bridge. 12 Healthchecks to know when to restart a container (liveness) and when a container can accept traffic (readiness). 13 Optional: Container image configuration, which is recommended only in special situations. 14 Template customization. Here a pod is scheduled with anti-affinity, so the pod is not scheduled on nodes with the same hostname. 15 Environment variables are set for distributed tracing. 16 Distributed tracing is enabled by using OpenTelemetry. 10.13. Configuring CPU and memory resource limits and requests By default, the Streams for Apache Kafka Cluster Operator does not specify CPU and memory resource requests and limits for its deployed operands. Ensuring an adequate allocation of resources is crucial for maintaining stability and achieving optimal performance in Kafka. The ideal resource allocation depends on your specific requirements and use cases. It is recommended to configure CPU and memory resources for each container by setting appropriate requests and limits . 10.14. Restrictions on OpenShift labels OpenShift labels make it easier to organize, manage, and discover OpenShift resources within your applications. The Cluster Operator is responsible for applying the following OpenShift labels to the operands it deploys. These labels cannot be overridden through template configuration of Streams for Apache Kafka resources: app.kubernetes.io/name : Identifies the component type within Streams for Apache Kafka, such as kafka , zookeeper , and cruise-control . app.kubernetes.io/instance : Represents the name of the custom resource to which the operand belongs. For instance, if a Kafka custom resource is named my-cluster , this label will bear that name on the associated pods. app.kubernetes.io/part-of : Similar to app.kubernetes.io/instance , but prefixed with strimzi- . app.kubernetes.io/managed-by : Defines the application responsible for managing the operand, such as strimzi-cluster-operator or strimzi-user-operator . Example OpenShift labels on a Kafka pod when deploying a Kafka custom resource named my-cluster apiVersion: v1 kind: Pod metadata: name: my-cluster-kafka-0 labels: app.kubernetes.io/instance: my-cluster app.kubernetes.io/managed-by: strimzi-cluster-operator app.kubernetes.io/name: kafka app.kubernetes.io/part-of: strimzi-my-cluster spec: # ... 10.15. Configuring pod scheduling To avoid performance degradation caused by resource conflicts between applications scheduled on the same OpenShift node, you can schedule Kafka pods separately from critical workloads. This can be achieved by either selecting specific nodes or dedicating a set of nodes exclusively for Kafka. 10.15.1. Specifying affinity, tolerations, and topology spread constraints Use affinity, tolerations, and topology spread constraints to schedule the pods of Kafka resources onto nodes. Affinity, tolerations, and topology spread constraints are configured using the affinity , tolerations , and topologySpreadConstraints properties in the following resources: Kafka.spec.kafka.template.pod Kafka.spec.zookeeper.template.pod Kafka.spec.entityOperator.template.pod KafkaConnect.spec.template.pod KafkaBridge.spec.template.pod KafkaMirrorMaker.spec.template.pod KafkaMirrorMaker2.spec.template.pod The format of the affinity , tolerations , and topologySpreadConstraints properties follows the OpenShift specification.
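For example, a topology spread constraint can distribute Kafka broker pods evenly across availability zones. A minimal sketch, assuming a cluster named my-cluster and nodes labeled with the standard topology.kubernetes.io/zone label:

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    template:
      pod:
        topologySpreadConstraints:
          - maxSkew: 1
            topologyKey: topology.kubernetes.io/zone
            whenUnsatisfiable: ScheduleAnyway
            labelSelector:
              matchLabels:
                strimzi.io/cluster: my-cluster
  # ...

Setting whenUnsatisfiable to ScheduleAnyway treats the constraint as a soft preference, so pods are still scheduled if an even spread is not possible.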
The affinity configuration can include different types of affinity: Pod affinity and anti-affinity Node affinity Additional resources Kubernetes node and pod affinity documentation Kubernetes taints and tolerations Controlling pod placement by using pod topology spread constraints (as described in the OpenShift Nodes guide) 10.15.1.1. Use pod anti-affinity to avoid critical applications sharing nodes Use pod anti-affinity to ensure that critical applications are never scheduled on the same node. When running a Kafka cluster, it is recommended to use pod anti-affinity to ensure that the Kafka brokers do not share nodes with other workloads, such as databases. 10.15.1.2. Use node affinity to schedule workloads onto specific nodes The OpenShift cluster usually consists of many different types of worker nodes. Some are optimized for CPU heavy workloads, some for memory, while others might be optimized for storage (fast local SSDs) or network. Using different nodes helps to optimize both costs and performance. To achieve the best possible performance, it is important to allow scheduling of Streams for Apache Kafka components to use the right nodes. OpenShift uses node affinity to schedule workloads onto specific nodes. Node affinity allows you to create a scheduling constraint for the node on which the pod will be scheduled. The constraint is specified as a label selector. You can specify the label using either a built-in node label, like beta.kubernetes.io/instance-type , or custom labels to select the right node. 10.15.1.3. Use node affinity and tolerations for dedicated nodes Use taints to create dedicated nodes, then schedule Kafka pods on the dedicated nodes by configuring node affinity and tolerations. Cluster administrators can mark selected OpenShift nodes as tainted. Nodes with taints are excluded from regular scheduling and normal pods will not be scheduled to run on them. Only services which can tolerate the taint set on the node can be scheduled on it. The only other services running on such nodes will be system services such as log collectors or software defined networks. Running Kafka and its components on dedicated nodes can have many advantages. There will be no other applications running on the same nodes which could cause disturbance or consume the resources needed for Kafka. That can lead to improved performance and stability. 10.15.2. Configuring pod anti-affinity to schedule each Kafka broker on a different worker node Many Kafka brokers or ZooKeeper nodes can run on the same OpenShift worker node. If the worker node fails, they will all become unavailable at the same time. To improve reliability, you can use podAntiAffinity configuration to schedule each Kafka broker or ZooKeeper node on a different OpenShift worker node. Prerequisites An OpenShift cluster A running Cluster Operator Procedure Edit the affinity property in the resource specifying the cluster deployment. To make sure that no worker nodes are shared by Kafka brokers or ZooKeeper nodes, use the strimzi.io/name label. Set the topologyKey to kubernetes.io/hostname to specify that the selected pods are not scheduled on nodes with the same hostname. This will still allow the same worker node to be shared by a single Kafka broker and a single ZooKeeper node. For example: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # ...
template: pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: strimzi.io/name operator: In values: - CLUSTER-NAME -kafka topologyKey: "kubernetes.io/hostname" # ... zookeeper: # ... template: pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: strimzi.io/name operator: In values: - CLUSTER-NAME -zookeeper topologyKey: "kubernetes.io/hostname" # ... Where CLUSTER-NAME is the name of your Kafka custom resource. If you also want to make sure that a Kafka broker and ZooKeeper node do not share the same worker node, use the strimzi.io/cluster label. For example: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # ... template: pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: strimzi.io/cluster operator: In values: - CLUSTER-NAME topologyKey: "kubernetes.io/hostname" # ... zookeeper: # ... template: pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: strimzi.io/cluster operator: In values: - CLUSTER-NAME topologyKey: "kubernetes.io/hostname" # ... Where CLUSTER-NAME is the name of your Kafka custom resource. Create or update the resource. oc apply -f <kafka_configuration_file> 10.15.3. Configuring pod anti-affinity in Kafka components Pod anti-affinity configuration helps with the stability and performance of Kafka brokers. By using podAntiAffinity , OpenShift will not schedule Kafka brokers on the same nodes as other workloads. Typically, you want to avoid Kafka running on the same worker node as other network or storage intensive applications such as databases, storage or other messaging platforms. Prerequisites An OpenShift cluster A running Cluster Operator Procedure Edit the affinity property in the resource specifying the cluster deployment. Use labels to specify the pods which should not be scheduled on the same nodes. The topologyKey should be set to kubernetes.io/hostname to specify that the selected pods should not be scheduled on nodes with the same hostname. For example: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # ... template: pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: application operator: In values: - postgresql - mongodb topologyKey: "kubernetes.io/hostname" # ... zookeeper: # ... Create or update the resource. This can be done using oc apply : oc apply -f <kafka_configuration_file> 10.15.4. Configuring node affinity in Kafka components Prerequisites An OpenShift cluster A running Cluster Operator Procedure Label the nodes where Streams for Apache Kafka components should be scheduled. This can be done using oc label : oc label node NAME-OF-NODE node-type=fast-network Alternatively, some of the existing labels might be reused. Edit the affinity property in the resource specifying the cluster deployment. For example: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # ... template: pod: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: node-type operator: In values: - fast-network # ... zookeeper: # ... Create or update the resource. This can be done using oc apply : oc apply -f <kafka_configuration_file> 10.15.5.
Setting up dedicated nodes and scheduling pods on them Prerequisites An OpenShift cluster A running Cluster Operator Procedure Select the nodes which should be used as dedicated. Make sure there are no workloads scheduled on these nodes. Set the taints on the selected nodes: This can be done using oc adm taint : oc adm taint node NAME-OF-NODE dedicated=Kafka:NoSchedule Additionally, add a label to the selected nodes as well. This can be done using oc label : oc label node NAME-OF-NODE dedicated=Kafka Edit the affinity and tolerations properties in the resource specifying the cluster deployment. For example: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # ... template: pod: tolerations: - key: "dedicated" operator: "Equal" value: "Kafka" effect: "NoSchedule" affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: dedicated operator: In values: - Kafka # ... zookeeper: # ... Create or update the resource. This can be done using oc apply : oc apply -f <kafka_configuration_file> 10.16. Disabling pod disruption budget generation Streams for Apache Kafka generates pod disruption budget resources for Kafka, Kafka Connect worker, MirrorMaker2 worker, and Kafka Bridge worker nodes. If you want to use custom pod disruption budget resources, you can set the STRIMZI_POD_DISRUPTION_BUDGET_GENERATION environment variable to false in the Cluster Operator configuration. For more information, see Section 10.7, "Configuring the Cluster Operator" . 10.17. Configuring logging levels Configure logging levels in the custom resources of Kafka components and Streams for Apache Kafka operators. You can specify the logging levels directly in the spec.logging property of the custom resource. Or you can define the logging properties in a ConfigMap that's referenced in the custom resource using the configMapKeyRef property. The advantages of using a ConfigMap are that the logging properties are maintained in one place and are accessible to more than one resource. You can also reuse the ConfigMap for more than one resource. If you are using a ConfigMap to specify loggers for Streams for Apache Kafka Operators, you can also append the logging specification to add filters. You specify a logging type in your logging specification: inline when specifying logging levels directly external when referencing a ConfigMap Example inline logging configuration # ... logging: type: inline loggers: kafka.root.logger.level: INFO # ... Example external logging configuration # ... logging: type: external valueFrom: configMapKeyRef: name: my-config-map key: my-config-map-key # ... Values for the name and key of the ConfigMap are mandatory. Default logging is used if the name or key is not set. 10.17.1. Logging options for Kafka components and operators For more information on configuring logging for specific Kafka components or operators, see the following sections. Kafka component logging Kafka logging ZooKeeper logging Kafka Connect and MirrorMaker 2 logging MirrorMaker logging Kafka Bridge logging Cruise Control logging Operator logging Cluster Operator logging Topic Operator logging User Operator logging 10.17.2. Creating a ConfigMap for logging To use a ConfigMap to define logging properties, you create the ConfigMap and then reference it as part of the logging definition in the spec of a resource. The ConfigMap must contain the appropriate logging configuration. 
log4j.properties for Kafka components, ZooKeeper, and the Kafka Bridge log4j2.properties for the Topic Operator and User Operator The configuration must be placed under these properties. In this procedure, a ConfigMap defines a root logger for a Kafka resource. Procedure Create the ConfigMap. You can create the ConfigMap as a YAML file or from a properties file. ConfigMap example with a root logger definition for Kafka: kind: ConfigMap apiVersion: v1 metadata: name: logging-configmap data: log4j.properties: kafka.root.logger.level="INFO" If you are using a properties file, specify the file at the command line: oc create configmap logging-configmap --from-file=log4j.properties The properties file defines the logging configuration: # Define the logger kafka.root.logger.level="INFO" # ... Define external logging in the spec of the resource, setting the logging.valueFrom.configMapKeyRef.name to the name of the ConfigMap and logging.valueFrom.configMapKeyRef.key to the key in this ConfigMap. # ... logging: type: external valueFrom: configMapKeyRef: name: logging-configmap key: log4j.properties # ... Create or update the resource. oc apply -f <kafka_configuration_file> 10.17.3. Configuring Cluster Operator logging Cluster Operator logging is configured through a ConfigMap named strimzi-cluster-operator . A ConfigMap containing logging configuration is created when installing the Cluster Operator. This ConfigMap is described in the file install/cluster-operator/050-ConfigMap-strimzi-cluster-operator.yaml . You configure Cluster Operator logging by changing the data.log4j2.properties values in this ConfigMap . To update the logging configuration, you can edit the 050-ConfigMap-strimzi-cluster-operator.yaml file and then run the following command: oc create -f install/cluster-operator/050-ConfigMap-strimzi-cluster-operator.yaml Alternatively, edit the ConfigMap directly: oc edit configmap strimzi-cluster-operator With this ConfigMap, you can control various aspects of logging, including the root logger level, log output format, and log levels for different components. The monitorInterval setting determines how often the logging configuration is reloaded. You can also control the logging levels for the Kafka AdminClient , ZooKeeper ZKTrustManager , Netty, and the OkHttp client. Netty is a framework used in Streams for Apache Kafka for network communication, and OkHttp is a library used for making HTTP requests. If the ConfigMap is missing when the Cluster Operator is deployed, the default logging values are used. If the ConfigMap is accidentally deleted after the Cluster Operator is deployed, the most recently loaded logging configuration is used. Create a new ConfigMap to load a new logging configuration. Note Do not remove the monitorInterval option from the ConfigMap . 10.17.4. Adding logging filters to Streams for Apache Kafka operators If you are using a ConfigMap to configure the (log4j2) logging levels for Streams for Apache Kafka operators, you can also define logging filters to limit what's returned in the log. Logging filters are useful when you have a large number of logging messages. Suppose you set the log level for the logger as DEBUG ( rootLogger.level="DEBUG" ). Logging filters reduce the number of logs returned for the logger at that level, so you can focus on a specific resource. When the filter is set, only log messages matching the filter are logged. Filters use markers to specify what to include in the log. You specify a kind, namespace and name for the marker.
For example, if a Kafka cluster is failing, you can isolate the logs by specifying the kind as Kafka , and use the namespace and name of the failing cluster. This example shows a marker filter for a Kafka cluster named my-kafka-cluster . Basic logging filter configuration rootLogger.level="INFO" appender.console.filter.filter1.type=MarkerFilter 1 appender.console.filter.filter1.onMatch=ACCEPT 2 appender.console.filter.filter1.onMismatch=DENY 3 appender.console.filter.filter1.marker=Kafka(my-namespace/my-kafka-cluster) 4 1 The MarkerFilter type compares a specified marker for filtering. 2 The onMatch property accepts the log if the marker matches. 3 The onMismatch property rejects the log if the marker does not match. 4 The marker used for filtering is in the format KIND(NAMESPACE/NAME-OF-RESOURCE) . You can create one or more filters. Here, the log is filtered for two Kafka clusters. Multiple logging filter configuration appender.console.filter.filter1.type=MarkerFilter appender.console.filter.filter1.onMatch=ACCEPT appender.console.filter.filter1.onMismatch=DENY appender.console.filter.filter1.marker=Kafka(my-namespace/my-kafka-cluster-1) appender.console.filter.filter2.type=MarkerFilter appender.console.filter.filter2.onMatch=ACCEPT appender.console.filter.filter2.onMismatch=DENY appender.console.filter.filter2.marker=Kafka(my-namespace/my-kafka-cluster-2) Adding filters to the Cluster Operator To add filters to the Cluster Operator, update its logging ConfigMap YAML file ( install/cluster-operator/050-ConfigMap-strimzi-cluster-operator.yaml ). Procedure Update the 050-ConfigMap-strimzi-cluster-operator.yaml file to add the filter properties to the ConfigMap. In this example, the filter properties return logs only for the my-kafka-cluster Kafka cluster: kind: ConfigMap apiVersion: v1 metadata: name: strimzi-cluster-operator data: log4j2.properties: #... appender.console.filter.filter1.type=MarkerFilter appender.console.filter.filter1.onMatch=ACCEPT appender.console.filter.filter1.onMismatch=DENY appender.console.filter.filter1.marker=Kafka(my-namespace/my-kafka-cluster) Alternatively, edit the ConfigMap directly: oc edit configmap strimzi-cluster-operator If you updated the YAML file instead of editing the ConfigMap directly, apply the changes by deploying the ConfigMap: oc create -f install/cluster-operator/050-ConfigMap-strimzi-cluster-operator.yaml Adding filters to the Topic Operator or User Operator To add filters to the Topic Operator or User Operator, create or edit a logging ConfigMap. In this procedure a logging ConfigMap is created with filters for the Topic Operator. The same approach is used for the User Operator. Procedure Create the ConfigMap. You can create the ConfigMap as a YAML file or from a properties file. 
In this example, the filter properties return logs only for the my-topic topic: kind: ConfigMap apiVersion: v1 metadata: name: logging-configmap data: log4j2.properties: rootLogger.level="INFO" appender.console.filter.filter1.type=MarkerFilter appender.console.filter.filter1.onMatch=ACCEPT appender.console.filter.filter1.onMismatch=DENY appender.console.filter.filter1.marker=KafkaTopic(my-namespace/my-topic) If you are using a properties file, specify the file at the command line: oc create configmap logging-configmap --from-file=log4j2.properties The properties file defines the logging configuration: # Define the logger rootLogger.level="INFO" # Set the filters appender.console.filter.filter1.type=MarkerFilter appender.console.filter.filter1.onMatch=ACCEPT appender.console.filter.filter1.onMismatch=DENY appender.console.filter.filter1.marker=KafkaTopic(my-namespace/my-topic) # ... Define external logging in the spec of the resource, setting the logging.valueFrom.configMapKeyRef.name to the name of the ConfigMap and logging.valueFrom.configMapKeyRef.key to the key in this ConfigMap. For the Topic Operator, logging is specified in the topicOperator configuration of the Kafka resource. spec: # ... entityOperator: topicOperator: logging: type: external valueFrom: configMapKeyRef: name: logging-configmap key: log4j2.properties Apply the changes by deploying the Cluster Operator: oc create -f install/cluster-operator -n my-cluster-operator-namespace Additional resources Configuring Kafka Cluster Operator logging Topic Operator logging User Operator logging 10.17.5. Lock acquisition warnings for cluster operations The Cluster Operator ensures that only one operation runs at a time for each cluster by using locks. If another operation attempts to start while a lock is held, it waits until the current operation completes. Operations such as cluster creation, rolling updates, scaling down, and scaling up are managed by the Cluster Operator. If acquiring a lock takes longer than the configured timeout ( STRIMZI_OPERATION_TIMEOUT_MS ), a DEBUG message is logged: Example DEBUG message for lock acquisition DEBUG AbstractOperator:406 - Reconciliation #55(timer) Kafka(myproject/my-cluster): Failed to acquire lock lock::myproject::Kafka::my-cluster within 10000ms. Timed-out operations are retried during the periodic reconciliation in intervals defined by STRIMZI_FULL_RECONCILIATION_INTERVAL_MS (by default 120 seconds). If an INFO message continues to appear with the same reconciliation number, it might indicate a lock release error: Example INFO message for reconciliation INFO AbstractOperator:399 - Reconciliation #1(watch) Kafka(myproject/my-cluster): Reconciliation is in progress Restarting the Cluster Operator can resolve such issues. 10.18. Using ConfigMaps to add configuration Add specific configuration to your Streams for Apache Kafka deployment using ConfigMap resources. ConfigMaps use key-value pairs to store non-confidential data. Configuration data added to ConfigMaps is maintained in one place and can be reused amongst components. ConfigMaps can only store the following types of configuration data: Logging configuration Metrics configuration External configuration for Kafka Connect connectors You can't use ConfigMaps for other areas of configuration. When you configure a component, you can add a reference to a ConfigMap using the configMapKeyRef property. For example, you can use configMapKeyRef to reference a ConfigMap that provides configuration for logging.
You might use a ConfigMap to pass a Log4j configuration file. You add the reference to the logging configuration. Example ConfigMap for logging # ... logging: type: external valueFrom: configMapKeyRef: name: my-config-map key: my-config-map-key # ... To use a ConfigMap for metrics configuration, you add a reference to the metricsConfig configuration of the component in the same way. template properties allow data from a ConfigMap or Secret to be mounted in a pod as environment variables or volumes. You can use external configuration data for the connectors used by Kafka Connect. The data might be related to an external data source, providing the values needed for the connector to communicate with that data source. For example, you can use the configMapKeyRef property to pass configuration data from a ConfigMap as an environment variable. Example ConfigMap providing environment variable values apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # ... template: connectContainer: env: - name: MY_ENVIRONMENT_VARIABLE valueFrom: configMapKeyRef: name: my-config-map key: my-key If you are using ConfigMaps that are managed externally, use configuration providers to load the data in the ConfigMaps. 10.18.1. Naming custom ConfigMaps Streams for Apache Kafka creates its own ConfigMaps and other resources when it is deployed to OpenShift. The ConfigMaps contain data necessary for running components. The ConfigMaps created by Streams for Apache Kafka must not be edited. Make sure that any custom ConfigMaps you create do not have the same name as these default ConfigMaps. If they have the same name, they will be overwritten. For example, if your ConfigMap has the same name as the ConfigMap for the Kafka cluster, it will be overwritten when there is an update to the Kafka cluster. Additional resources List of Kafka cluster resources (including ConfigMaps) Logging configuration metricsConfig ContainerTemplate schema reference Loading configuration values from external sources 10.19. Loading configuration values from external sources Use configuration providers to load configuration data from external sources. The providers operate independently of Streams for Apache Kafka. You can use them to load configuration data for all Kafka components, including producers and consumers. You reference the external source in the configuration of the component and provide access rights. The provider loads data without needing to restart the Kafka component or extracting files, even when referencing a new external source. For example, use providers to supply the credentials for the Kafka Connect connector configuration. The configuration must include any access rights to the external source. 10.19.1. Enabling configuration providers You can enable one or more configuration providers using the config.providers properties in the spec configuration of a component. Example configuration to enable a configuration provider apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect annotations: strimzi.io/use-connector-resources: "true" spec: # ... config: # ... config.providers: env config.providers.env.class: org.apache.kafka.common.config.provider.EnvVarConfigProvider # ... KubernetesSecretConfigProvider Loads configuration data from OpenShift secrets. You specify the name of the secret and the key within the secret where the configuration data is stored. This provider is useful for storing sensitive configuration data like passwords or other user credentials. 
KubernetesConfigMapConfigProvider Loads configuration data from OpenShift config maps. You specify the name of the config map and the key within the config map where the configuration data is stored. This provider is useful for storing non-sensitive configuration data. EnvVarConfigProvider Loads configuration data from environment variables. You specify the name of the environment variable where the configuration data is stored. This provider is useful for configuring applications running in containers, for example, to load certificates or JAAS configuration from environment variables mapped from secrets. FileConfigProvider Loads configuration data from a file. You specify the path to the file where the configuration data is stored. This provider is useful for loading configuration data from files that are mounted into containers. DirectoryConfigProvider Loads configuration data from files within a directory. You specify the path to the directory where the configuration files are stored. This provider is useful for loading multiple configuration files and for organizing configuration data into separate files. To use KubernetesSecretConfigProvider and KubernetesConfigMapConfigProvider , which are part of the OpenShift Configuration Provider plugin, you must set up access rights to the namespace that contains the configuration file. You can use the other providers without setting up access rights. You can supply connector configuration for Kafka Connect or MirrorMaker 2 in this way by doing the following: Mount config maps or secrets into the Kafka Connect pod as environment variables or volumes Enable EnvVarConfigProvider , FileConfigProvider , or DirectoryConfigProvider in the Kafka Connect or MirrorMaker 2 configuration Pass connector configuration using the template property in the spec of the KafkaConnect or KafkaMirrorMaker2 resource Using providers helps prevent the passing of restricted information through the Kafka Connect REST interface. You can use this approach in the following scenarios: Mounting environment variables with the values a connector uses to connect and communicate with a data source Mounting a properties file with values that are used to configure Kafka Connect connectors Mounting files in a directory that contains values for the TLS truststore and keystore used by a connector Note A restart is required when using a new Secret or ConfigMap for a connector, which can disrupt other connectors. Additional resources KafkaConnectTemplate schema reference 10.19.2. Loading configuration values from secrets or config maps Use the KubernetesSecretConfigProvider to provide configuration properties from a secret or the KubernetesConfigMapConfigProvider to provide configuration properties from a config map. In this procedure, a config map provides configuration properties for a connector. The properties are specified as key values of the config map. The config map is mounted into the Kafka Connect pod as a volume. Prerequisites A Kafka cluster is running. The Cluster Operator is running. You have a config map containing the connector configuration. Example config map with connector properties apiVersion: v1 kind: ConfigMap metadata: name: my-connector-configuration data: option1: value1 option2: value2 Procedure Configure the KafkaConnect resource. Enable the KubernetesConfigMapConfigProvider The specification shown here can support loading values from config maps and secrets.
Procedure

Configure the KafkaConnect resource.

Enable the KubernetesConfigMapConfigProvider.

The specification shown here can support loading values from config maps and secrets.

Example Kafka Connect configuration to use config maps and secrets

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
  annotations:
    strimzi.io/use-connector-resources: "true"
spec:
  # ...
  config:
    # ...
    config.providers: secrets,configmaps 1
    config.providers.configmaps.class: io.strimzi.kafka.KubernetesConfigMapConfigProvider 2
    config.providers.secrets.class: io.strimzi.kafka.KubernetesSecretConfigProvider 3
  # ...

1 The alias for the configuration provider is used to define other configuration parameters. The provider parameters use the alias from config.providers, taking the form config.providers.${alias}.class.
2 KubernetesConfigMapConfigProvider provides values from config maps.
3 KubernetesSecretConfigProvider provides values from secrets.

Create or update the resource to enable the provider.

oc apply -f <kafka_connect_configuration_file>

Create a role that permits access to the values in the external config map.

Example role to access values from a config map

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: connector-configuration-role
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["my-connector-configuration"]
  verbs: ["get"]
# ...

The rule gives the role permission to access the my-connector-configuration config map.

Create a role binding to permit access to the namespace that contains the config map.

Example role binding to access the namespace that contains the config map

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: connector-configuration-role-binding
subjects:
- kind: ServiceAccount
  name: my-connect-connect
  namespace: my-project
roleRef:
  kind: Role
  name: connector-configuration-role
  apiGroup: rbac.authorization.k8s.io
# ...

The role binding gives the role permission to access the my-project namespace. The service account must be the same one used by the Kafka Connect deployment. The service account name format is <cluster_name>-connect, where <cluster_name> is the name of the KafkaConnect custom resource.

Reference the config map in the connector configuration.

Example connector configuration referencing the config map

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-connector
  labels:
    strimzi.io/cluster: my-connect
spec:
  # ...
  config:
    option: ${configmaps:my-project/my-connector-configuration:option1}
    # ...
# ...

The placeholder structure is configmaps:<namespace>/<config_map_name>:<property>. KubernetesConfigMapConfigProvider reads and extracts the option1 property value from the external config map.
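Loading values from a secret with KubernetesSecretConfigProvider follows the same pattern, but the role must grant access to the secrets resource instead of configmaps. The following is a sketch under that assumption; my-connector-secrets is a hypothetical secret name, not one created in the procedure above.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: connector-secrets-role
rules:
- apiGroups: [""]
  resources: ["secrets"]                      # secrets instead of configmaps
  resourceNames: ["my-connector-secrets"]     # hypothetical secret name
  verbs: ["get"]

With the secrets alias enabled as shown earlier, the connector would then reference a key using a placeholder of the form ${secrets:my-project/my-connector-secrets:password}.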
10.19.3. Loading configuration values from environment variables

Use the EnvVarConfigProvider to provide configuration properties as environment variables. Environment variables can contain values from config maps or secrets.

In this procedure, environment variables provide configuration properties for a connector to communicate with Amazon AWS. The connector must be able to read the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables. The values of the environment variables are derived from a secret mounted into the Kafka Connect pod.

Note
The names of user-defined environment variables cannot start with KAFKA_ or STRIMZI_.

Prerequisites

A Kafka cluster is running.
The Cluster Operator is running.
You have a secret containing the connector configuration.

Example secret with values for environment variables

apiVersion: v1
kind: Secret
metadata:
  name: aws-creds
type: Opaque
data:
  awsAccessKey: QUtJQVhYWFhYWFhYWFhYWFg=
  awsSecretAccessKey: Ylhsd1lYTnpkMjl5WkE=

Procedure

Configure the KafkaConnect resource.

Enable the EnvVarConfigProvider.

Specify the environment variables using the template property.

Example Kafka Connect configuration to use external environment variables

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
  annotations:
    strimzi.io/use-connector-resources: "true"
spec:
  # ...
  config:
    # ...
    config.providers: env 1
    config.providers.env.class: org.apache.kafka.common.config.provider.EnvVarConfigProvider 2
  # ...
  template:
    connectContainer:
      env:
        - name: AWS_ACCESS_KEY_ID 3
          valueFrom:
            secretKeyRef:
              name: aws-creds 4
              key: awsAccessKey 5
        - name: AWS_SECRET_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: aws-creds
              key: awsSecretAccessKey
  # ...

1 The alias for the configuration provider is used to define other configuration parameters. The provider parameters use the alias from config.providers, taking the form config.providers.${alias}.class.
2 EnvVarConfigProvider provides values from environment variables.
3 The environment variable takes a value from the secret.
4 The name of the secret containing the environment variable.
5 The name of the key stored in the secret.

Note
The secretKeyRef property references keys in a secret. If you are using a config map instead of a secret, use the configMapKeyRef property.

Create or update the resource to enable the provider.

oc apply -f <kafka_connect_configuration_file>

Reference the environment variables in the connector configuration.

Example connector configuration referencing the environment variables

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-connector
  labels:
    strimzi.io/cluster: my-connect
spec:
  # ...
  config:
    option1: ${env:AWS_ACCESS_KEY_ID}
    option2: ${env:AWS_SECRET_ACCESS_KEY}
    # ...
# ...

The placeholder structure is env:<environment_variable_name>. EnvVarConfigProvider reads and extracts the environment variable values from the mounted secret.

10.19.4. Loading configuration values from a file within a directory

Use the FileConfigProvider to provide configuration properties from a file within a directory. Files can be stored in config maps or secrets.

In this procedure, a file provides configuration properties for a connector. A database name and password are specified as properties of a secret. The secret is mounted to the Kafka Connect pod as a volume. Volumes are mounted on the path /mnt/<volume-name>.

Prerequisites

A Kafka cluster is running.
The Cluster Operator is running.
You have a secret containing the connector configuration.

Example secret with database properties

apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
stringData:
  connector.properties: |- 1
    dbUsername: my-username 2
    dbPassword: my-password

1 The connector configuration in properties file format.
2 Database username and password properties used in the configuration.
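You can also create this secret directly from a local properties file instead of writing a manifest. A usage sketch, assuming the properties are saved locally in a file named connector.properties:

oc create secret generic mysecret \
  --from-file=connector.properties=connector.properties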
Procedure

Configure the KafkaConnect resource.

Enable the FileConfigProvider.

Specify the additional volume using the template property.

Example Kafka Connect configuration to use an external property file

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  # ...
  config:
    config.providers: file 1
    config.providers.file.class: org.apache.kafka.common.config.provider.FileConfigProvider 2
  #...
  template:
    pod:
      volumes:
        - name: connector-config-volume 3
          secret:
            secretName: mysecret 4
    connectContainer:
      volumeMounts:
        - name: connector-config-volume 5
          mountPath: /mnt/mysecret 6

1 The alias for the configuration provider is used to define other configuration parameters.
2 FileConfigProvider provides values from properties files. The parameter uses the alias from config.providers, taking the form config.providers.${alias}.class.
3 The name of the volume containing the secret.
4 The name of the secret.
5 The name of the mounted volume, which must match the volume name in the volumes list.
6 The path where the secret is mounted, which must start with /mnt/.

Create or update the resource to enable the provider.

oc apply -f <kafka_connect_configuration_file>

Reference the file properties in the connector configuration as placeholders.

Example connector configuration referencing the file

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-source-connector
  labels:
    strimzi.io/cluster: my-connect-cluster
spec:
  class: io.debezium.connector.mysql.MySqlConnector
  tasksMax: 2
  config:
    database.hostname: 192.168.99.1
    database.port: "3306"
    database.user: "${file:/mnt/mysecret/connector.properties:dbUsername}"
    database.password: "${file:/mnt/mysecret/connector.properties:dbPassword}"
    database.server.id: "184054"
    #...

The placeholder structure is file:<path_and_file_name>:<property>. FileConfigProvider reads and extracts the database username and password property values from the mounted secret.

10.19.5. Loading configuration values from multiple files within a directory

Use the DirectoryConfigProvider to provide configuration properties from multiple files within a directory. Files can be stored in config maps or secrets.

In this procedure, a secret provides the TLS keystore and truststore user credentials for a connector. The credentials are in separate files. The secrets are mounted into the Kafka Connect pod as volumes. Volumes are mounted on the path /mnt/<volume-name>.

Prerequisites

A Kafka cluster is running.
The Cluster Operator is running.
You have a secret containing the user credentials.

Example secret with user credentials

apiVersion: v1
kind: Secret
metadata:
  name: my-user
  labels:
    strimzi.io/kind: KafkaUser
    strimzi.io/cluster: my-cluster
type: Opaque
data:
  ca.crt: <public_key> # Public key of the clients CA used to sign this user certificate
  user.crt: <user_certificate> # Public key of the user
  user.key: <user_private_key> # Private key of the user
  user.p12: <store> # PKCS #12 store for user certificates and keys
  user.password: <password_for_store> # Protects the PKCS #12 store

The my-user secret provides the keystore credentials (user.crt and user.key) for the connector. The <cluster_name>-cluster-ca-cert secret generated when deploying the Kafka cluster provides the cluster CA certificate as truststore credentials (ca.crt).
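A secret in this form is typically generated by the User Operator when you declare a KafkaUser with TLS authentication. The following KafkaUser sketch would produce a my-user secret like the one above; treat it as illustrative background rather than a required step of this procedure.

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster   # Kafka cluster the user belongs to
spec:
  authentication:
    type: tls   # User Operator issues the client certificate and key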
Procedure

Configure the KafkaConnect resource.

Enable the DirectoryConfigProvider.

Specify the additional volumes using the template property.

Example Kafka Connect configuration to use external property files

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  # ...
  config:
    config.providers: directory 1
    config.providers.directory.class: org.apache.kafka.common.config.provider.DirectoryConfigProvider 2
  #...
  template:
    pod:
      volumes:
        - name: my-user-volume 3
          secret:
            secretName: my-user 4
        - name: cluster-ca-volume
          secret:
            secretName: my-cluster-cluster-ca-cert
    connectContainer:
      volumeMounts:
        - name: my-user-volume 5
          mountPath: /mnt/my-user 6
        - name: cluster-ca-volume
          mountPath: /mnt/cluster-ca

1 The alias for the configuration provider is used to define other configuration parameters.
2 DirectoryConfigProvider provides values from files in a directory. The parameter uses the alias from config.providers, taking the form config.providers.${alias}.class.
3 The name of the volume containing the secret.
4 The name of the secret.
5 The name of the mounted volume, which must match the volume name in the volumes list.
6 The path where the secret is mounted, which must start with /mnt/.

Create or update the resource to enable the provider.

oc apply -f <kafka_connect_configuration_file>

Reference the file properties in the connector configuration as placeholders.

Example connector configuration referencing the files

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-source-connector
  labels:
    strimzi.io/cluster: my-connect-cluster
spec:
  class: io.debezium.connector.mysql.MySqlConnector
  tasksMax: 2
  config:
    # ...
    database.history.producer.security.protocol: SSL
    database.history.producer.ssl.truststore.type: PEM
    database.history.producer.ssl.truststore.certificates: "${directory:/mnt/cluster-ca:ca.crt}"
    database.history.producer.ssl.keystore.type: PEM
    database.history.producer.ssl.keystore.certificate.chain: "${directory:/mnt/my-user:user.crt}"
    database.history.producer.ssl.keystore.key: "${directory:/mnt/my-user:user.key}"
    #...

The placeholder structure is directory:<path>:<file_name>. DirectoryConfigProvider reads and extracts the credentials from the mounted secrets.

10.20. Customizing OpenShift resources

A Streams for Apache Kafka deployment creates OpenShift resources, such as Deployment, Pod, and Service resources. These resources are managed by Streams for Apache Kafka operators. Only the operator that is responsible for managing a particular OpenShift resource can change that resource. If you try to manually change an operator-managed OpenShift resource, the operator reverts your changes.

Changing an operator-managed OpenShift resource can be useful if you want to perform certain tasks, such as the following:

Adding custom labels or annotations that control how Pods are treated by Istio or other services
Managing how LoadBalancer-type Services are created by the cluster

To make the changes to an OpenShift resource, you can use the template property within the spec section of various Streams for Apache Kafka custom resources. Here is a list of the custom resources where you can apply the changes:

Kafka.spec.kafka
Kafka.spec.zookeeper
Kafka.spec.entityOperator
Kafka.spec.kafkaExporter
Kafka.spec.cruiseControl
KafkaNodePool.spec
KafkaConnect.spec
KafkaMirrorMaker.spec
KafkaMirrorMaker2.spec
KafkaBridge.spec
KafkaUser.spec

The Streams for Apache Kafka Custom Resource API Reference provides more details about the customizable fields.

In the following example, the template property is used to modify the labels in a Kafka broker's pod.

Example template customization

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
  labels:
    app: my-cluster
spec:
  kafka:
    # ...
    template:
      pod:
        metadata:
          labels:
            mylabel: myvalue
  # ...
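The same mechanism covers service customization. As a sketch of the LoadBalancer scenario mentioned above, assuming a loadbalancer-type external listener and an AWS environment, you might annotate the external bootstrap service through the externalBootstrapService template property; the annotation value here is the AWS-specific request for a network load balancer.

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    template:
      externalBootstrapService:
        metadata:
          annotations:
            # AWS-specific annotation requesting a network load balancer
            service.beta.kubernetes.io/aws-load-balancer-type: nlb
  # ...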
10.20.1. Customizing the image pull policy

Streams for Apache Kafka allows you to customize the image pull policy for containers in all pods deployed by the Cluster Operator. The image pull policy is configured using the environment variable STRIMZI_IMAGE_PULL_POLICY in the Cluster Operator deployment. The STRIMZI_IMAGE_PULL_POLICY environment variable can be set to three different values:

Always
Container images are pulled from the registry every time the pod is started or restarted.

IfNotPresent
Container images are pulled from the registry only when they have not been pulled before.

Never
Container images are never pulled from the registry.

Currently, the image pull policy can only be customized for all Kafka, Kafka Connect, and Kafka MirrorMaker clusters at once. Changing the policy results in a rolling update of all your Kafka, Kafka Connect, and Kafka MirrorMaker clusters.

Additional resources

Disruptions

10.20.2. Applying a termination grace period

Apply a termination grace period to give a Kafka cluster enough time to shut down cleanly.

Specify the time using the terminationGracePeriodSeconds property. Add the property to the template.pod configuration of the Kafka custom resource.

The time you add will depend on the size of your Kafka cluster. The OpenShift default for the termination grace period is 30 seconds. If you observe that your clusters are not shutting down cleanly, you can increase the termination grace period.

A termination grace period is applied every time a pod is restarted. The period begins when OpenShift sends a TERM (termination) signal to the processes running in the pod. The period should reflect the amount of time required to transfer the processes of the terminating pod to another pod before they are stopped. After the period ends, a KILL signal stops any processes still running in the pod.

The following example adds a termination grace period of 120 seconds to the Kafka custom resource. You can also specify the configuration in the custom resources of other Kafka components.

Example termination grace period configuration

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    template:
      pod:
        terminationGracePeriodSeconds: 120
    # ...
  # ...
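To confirm that the period has been applied after the resulting rolling update, you can inspect the pod specification directly. A usage sketch, assuming a broker pod named my-cluster-kafka-0:

oc get pod my-cluster-kafka-0 \
  -o jsonpath='{.spec.terminationGracePeriodSeconds}'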
CLUSTER-NAME topologyKey: \"kubernetes.io/hostname\" #", "apply -f <kafka_configuration_file>", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # template: pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: application operator: In values: - postgresql - mongodb topologyKey: \"kubernetes.io/hostname\" # zookeeper: #", "apply -f <kafka_configuration_file>", "label node NAME-OF-NODE node-type=fast-network", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # template: pod: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: node-type operator: In values: - fast-network # zookeeper: #", "apply -f <kafka_configuration_file>", "adm taint node NAME-OF-NODE dedicated=Kafka:NoSchedule", "label node NAME-OF-NODE dedicated=Kafka", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # template: pod: tolerations: - key: \"dedicated\" operator: \"Equal\" value: \"Kafka\" effect: \"NoSchedule\" affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: dedicated operator: In values: - Kafka # zookeeper: #", "apply -f <kafka_configuration_file>", "logging: type: inline loggers: kafka.root.logger.level: INFO", "logging: type: external valueFrom: configMapKeyRef: name: my-config-map key: my-config-map-key", "kind: ConfigMap apiVersion: v1 metadata: name: logging-configmap data: log4j.properties: kafka.root.logger.level=\"INFO\"", "create configmap logging-configmap --from-file=log4j.properties", "Define the logger kafka.root.logger.level=\"INFO\"", "logging: type: external valueFrom: configMapKeyRef: name: logging-configmap key: log4j.properties", "apply -f <kafka_configuration_file>", "create -f install/cluster-operator/050-ConfigMap-strimzi-cluster-operator.yaml", "edit configmap strimzi-cluster-operator", "rootLogger.level=\"INFO\" appender.console.filter.filter1.type=MarkerFilter 1 appender.console.filter.filter1.onMatch=ACCEPT 2 appender.console.filter.filter1.onMismatch=DENY 3 appender.console.filter.filter1.marker=Kafka(my-namespace/my-kafka-cluster) 4", "appender.console.filter.filter1.type=MarkerFilter appender.console.filter.filter1.onMatch=ACCEPT appender.console.filter.filter1.onMismatch=DENY appender.console.filter.filter1.marker=Kafka(my-namespace/my-kafka-cluster-1) appender.console.filter.filter2.type=MarkerFilter appender.console.filter.filter2.onMatch=ACCEPT appender.console.filter.filter2.onMismatch=DENY appender.console.filter.filter2.marker=Kafka(my-namespace/my-kafka-cluster-2)", "kind: ConfigMap apiVersion: v1 metadata: name: strimzi-cluster-operator data: log4j2.properties: # appender.console.filter.filter1.type=MarkerFilter appender.console.filter.filter1.onMatch=ACCEPT appender.console.filter.filter1.onMismatch=DENY appender.console.filter.filter1.marker=Kafka(my-namespace/my-kafka-cluster)", "edit configmap strimzi-cluster-operator", "create -f install/cluster-operator/050-ConfigMap-strimzi-cluster-operator.yaml", "kind: ConfigMap apiVersion: v1 metadata: name: logging-configmap data: log4j2.properties: rootLogger.level=\"INFO\" appender.console.filter.filter1.type=MarkerFilter appender.console.filter.filter1.onMatch=ACCEPT appender.console.filter.filter1.onMismatch=DENY appender.console.filter.filter1.marker=KafkaTopic(my-namespace/my-topic)", "create configmap logging-configmap --from-file=log4j2.properties", "Define the logger 
rootLogger.level=\"INFO\" Set the filters appender.console.filter.filter1.type=MarkerFilter appender.console.filter.filter1.onMatch=ACCEPT appender.console.filter.filter1.onMismatch=DENY appender.console.filter.filter1.marker=KafkaTopic(my-namespace/my-topic)", "spec: # entityOperator: topicOperator: logging: type: external valueFrom: configMapKeyRef: name: logging-configmap key: log4j2.properties", "create -f install/cluster-operator -n my-cluster-operator-namespace", "DEBUG AbstractOperator:406 - Reconciliation #55(timer) Kafka(myproject/my-cluster): Failed to acquire lock lock::myproject::Kafka::my-cluster within 10000ms.", "INFO AbstractOperator:399 - Reconciliation #1(watch) Kafka(myproject/my-cluster): Reconciliation is in progress", "logging: type: external valueFrom: configMapKeyRef: name: my-config-map key: my-config-map-key", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # template: connectContainer: env: - name: MY_ENVIRONMENT_VARIABLE valueFrom: configMapKeyRef: name: my-config-map key: my-key", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect annotations: strimzi.io/use-connector-resources: \"true\" spec: # config: # config.providers: env config.providers.env.class: org.apache.kafka.common.config.provider.EnvVarConfigProvider #", "apiVersion: v1 kind: ConfigMap metadata: name: my-connector-configuration data: option1: value1 option2: value2", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect annotations: strimzi.io/use-connector-resources: \"true\" spec: # config: # config.providers: secrets,configmaps 1 config.providers.configmaps.class: io.strimzi.kafka.KubernetesConfigMapConfigProvider 2 config.providers.secrets.class: io.strimzi.kafka.KubernetesSecretConfigProvider 3 #", "apply -f <kafka_connect_configuration_file>", "apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: connector-configuration-role rules: - apiGroups: [\"\"] resources: [\"configmaps\"] resourceNames: [\"my-connector-configuration\"] verbs: [\"get\"]", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: connector-configuration-role-binding subjects: - kind: ServiceAccount name: my-connect-connect namespace: my-project roleRef: kind: Role name: connector-configuration-role apiGroup: rbac.authorization.k8s.io", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-connector labels: strimzi.io/cluster: my-connect spec: # config: option: ${configmaps:my-project/my-connector-configuration:option1} #", "apiVersion: v1 kind: Secret metadata: name: aws-creds type: Opaque data: awsAccessKey: QUtJQVhYWFhYWFhYWFhYWFg= awsSecretAccessKey: Ylhsd1lYTnpkMjl5WkE=", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect annotations: strimzi.io/use-connector-resources: \"true\" spec: # config: # config.providers: env 1 config.providers.env.class: org.apache.kafka.common.config.provider.EnvVarConfigProvider 2 # template: connectContainer: env: - name: AWS_ACCESS_KEY_ID 3 valueFrom: secretKeyRef: name: aws-creds 4 key: awsAccessKey 5 - name: AWS_SECRET_ACCESS_KEY valueFrom: secretKeyRef: name: aws-creds key: awsSecretAccessKey #", "apply -f <kafka_connect_configuration_file>", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-connector labels: strimzi.io/cluster: my-connect spec: # config: option: ${env:AWS_ACCESS_KEY_ID} option: ${env:AWS_SECRET_ACCESS_KEY} #", "apiVersion: v1 kind: Secret
metadata: name: mysecret type: Opaque stringData: connector.properties: |- 1 dbUsername: my-username 2 dbPassword: my-password", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # config: config.providers: file 1 config.providers.file.class: org.apache.kafka.common.config.provider.FileConfigProvider 2 # template: pod: volumes: - name: connector-config-volume 3 secret: secretName: mysecret 4 connectContainer: volumeMounts: - name: connector-config-volume 5 mountPath: /mnt/mysecret 6", "apply -f <kafka_connect_configuration_file>", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector labels: strimzi.io/cluster: my-connect-cluster spec: class: io.debezium.connector.mysql.MySqlConnector tasksMax: 2 config: database.hostname: 192.168.99.1 database.port: \"3306\" database.user: \"${file:/mnt/mysecret/connector.properties:dbUsername}\" database.password: \"${file:/mnt/mysecret/connector.properties:dbPassword}\" database.server.id: \"184054\" #", "apiVersion: v1 kind: Secret metadata: name: my-user labels: strimzi.io/kind: KafkaUser strimzi.io/cluster: my-cluster type: Opaque data: ca.crt: <public_key> # Public key of the clients CA used to sign this user certificate user.crt: <user_certificate> # Public key of the user user.key: <user_private_key> # Private key of the user user.p12: <store> # PKCS #12 store for user certificates and keys user.password: <password_for_store> # Protects the PKCS #12 store", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # config: config.providers: directory 1 config.providers.directory.class: org.apache.kafka.common.config.provider.DirectoryConfigProvider 2 # template: pod: volumes: - name: my-user-volume 3 secret: secretName: my-user 4 - name: cluster-ca-volume secret: secretName: my-cluster-cluster-ca-cert connectContainer: volumeMounts: - name: my-user-volume 5 mountPath: /mnt/my-user 6 - name: cluster-ca-volume mountPath: /mnt/cluster-ca", "apply -f <kafka_connect_configuration_file>", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector labels: strimzi.io/cluster: my-connect-cluster spec: class: io.debezium.connector.mysql.MySqlConnector tasksMax: 2 config: # database.history.producer.security.protocol: SSL database.history.producer.ssl.truststore.type: PEM database.history.producer.ssl.truststore.certificates: \"${directory:/mnt/cluster-ca:ca.crt}\" database.history.producer.ssl.keystore.type: PEM database.history.producer.ssl.keystore.certificate.chain: \"${directory:/mnt/my-user:user.crt}\" database.history.producer.ssl.keystore.key: \"${directory:/mnt/my-user:user.key}\" #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster labels: app: my-cluster spec: kafka: # template: pod: metadata: labels: mylabel: myvalue #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # template: pod: terminationGracePeriodSeconds: 120 # #" ]
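The provider snippets above resolve configuration values at connector startup rather than storing them in the KafkaConnector resource. As a hedged aside that is not part of the referenced documentation, one way to confirm that a connector came up with its resolved configuration is to query the Kafka Connect REST API from a pod inside the cluster; the service name my-connect-connect-api is an assumption based on the usual Strimzi <cluster_name>-connect-api naming convention:
$ curl http://my-connect-connect-api:8083/connectors/my-source-connector/status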
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/deploying_and_managing_streams_for_apache_kafka_on_openshift/overview-str
Red Hat Hardware Certification Program Policy Guide
Red Hat Hardware Certification Program Policy Guide Red Hat Hardware Certification 2025 For Use with Red Hat Hardware Certification Red Hat Hardware Certification Team
null
https://docs.redhat.com/en/documentation/red_hat_hardware_certification/2025/html/red_hat_hardware_certification_program_policy_guide/index
23.18. Installing Packages
23.18. Installing Packages At this point there is nothing left for you to do until all the packages have been installed. How quickly this happens depends on the number of packages you have selected and your computer's speed. Depending on the available resources, you might see the following progress bar while the installer resolves dependencies of the packages you selected for installation: Figure 23.52. Starting installation During installation of the selected packages and their dependencies, you see the following progress bar: Figure 23.53. Packages completed
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s1-installpkgs-s390
Chapter 2. Configuring an Azure Stack Hub account
Chapter 2. Configuring an Azure Stack Hub account Before you can install OpenShift Container Platform, you must configure a Microsoft Azure account. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. 2.1. Azure Stack Hub account limits The OpenShift Container Platform cluster uses a number of Microsoft Azure Stack Hub components, and the default Quota types in Azure Stack Hub affect your ability to install OpenShift Container Platform clusters. The following table summarizes the Azure Stack Hub components whose limits can impact your ability to install and run OpenShift Container Platform clusters. Component Number of components required by default Description vCPU 56 A default cluster requires 56 vCPUs, so you must increase the account limit. By default, each cluster creates the following instances: One bootstrap machine, which is removed after installation Three control plane machines Three compute machines Because the bootstrap, control plane, and worker machines use Standard_DS4_v2 virtual machines, which use 8 vCPUs, a default cluster requires 56 vCPUs. The bootstrap node VM is used only during installation. To deploy more worker nodes, enable autoscaling, deploy large workloads, or use a different instance type, you must further increase the vCPU limit for your account to ensure that your cluster can deploy the machines that you require. VNet 1 Each default cluster requires one Virtual Network (VNet), which contains two subnets. Network interfaces 7 Each default cluster requires seven network interfaces. If you create more machines or your deployed workloads create load balancers, your cluster uses more network interfaces. Network security groups 2 Each cluster creates network security groups for each subnet in the VNet. The default cluster creates network security groups for the control plane and for the compute node subnets: controlplane Allows the control plane machines to be reached on port 6443 from anywhere node Allows worker nodes to be reached from the internet on ports 80 and 443 Network load balancers 3 Each cluster creates the following load balancers : default Public IP address that load balances requests to ports 80 and 443 across worker machines internal Private IP address that load balances requests to ports 6443 and 22623 across control plane machines external Public IP address that load balances requests to port 6443 across control plane machines If your applications create more Kubernetes LoadBalancer service objects, your cluster uses more load balancers. Public IP addresses 2 The public load balancer uses a public IP address. The bootstrap machine also uses a public IP address so that you can SSH into the machine to troubleshoot issues during installation. The IP address for the bootstrap node is used only during installation. Private IP addresses 7 The internal load balancer, each of the three control plane machines, and each of the three worker machines each use a private IP address. Additional resources Optimizing storage . 2.2. Configuring a DNS zone in Azure Stack Hub To successfully install OpenShift Container Platform on Azure Stack Hub, you must create DNS records in an Azure Stack Hub DNS zone. The DNS zone must be authoritative for the domain. 
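For illustration only, a minimal Azure CLI sketch of creating such a zone and an api A record; the resource group openshift-dns-rg, the domain example.com, and the IP address are placeholder assumptions, and support for these commands under a given Azure Stack Hub profile should be verified before relying on them:
$ az network dns zone create -g openshift-dns-rg -n example.com
$ az network dns record-set a add-record -g openshift-dns-rg -z example.com -n api -a 203.0.113.10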
To delegate a registrar's DNS zone to Azure Stack Hub, see Microsoft's documentation for Azure Stack Hub datacenter DNS integration . 2.3. Required Azure Stack Hub roles Your Microsoft Azure Stack Hub account must have the following roles for the subscription that you use: Owner To set roles on the Azure portal, see the Manage access to resources in Azure Stack Hub with role-based access control in the Microsoft documentation. 2.4. Creating a service principal Because OpenShift Container Platform and its installation program create Microsoft Azure resources by using the Azure Resource Manager, you must create a service principal to represent it. Prerequisites Install or update the Azure CLI . Your Azure account has the required roles for the subscription that you use. Procedure Register your environment: $ az cloud register -n AzureStackCloud --endpoint-resource-manager <endpoint> 1 1 Specify the Azure Resource Manager endpoint, `https://management.<region>.<fqdn>/`. See the Microsoft documentation for details. Set the active environment: $ az cloud set -n AzureStackCloud Update your environment configuration to use the specific API version for Azure Stack Hub: $ az cloud update --profile 2019-03-01-hybrid Log in to the Azure CLI: $ az login If you are in a multitenant environment, you must also supply the tenant ID. If your Azure account uses subscriptions, ensure that you are using the right subscription: View the list of available accounts and record the tenantId value for the subscription you want to use for your cluster: $ az account list --refresh Example output [ { "cloudName": "AzureStackCloud", "id": "9bab1460-96d5-40b3-a78e-17b15e978a80", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", "user": { "name": "[email protected]", "type": "user" } } ] View your active account details and confirm that the tenantId value matches the subscription you want to use: $ az account show Example output { "environmentName": "AzureStackCloud", "id": "9bab1460-96d5-40b3-a78e-17b15e978a80", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", 1 "user": { "name": "[email protected]", "type": "user" } } 1 Ensure that the value of the tenantId parameter is correct for the subscription that you want to use. If you are not using the right subscription, change the active subscription: $ az account set -s <subscription_id> 1 1 Specify the subscription ID. Verify the subscription ID update: $ az account show Example output { "environmentName": "AzureStackCloud", "id": "33212d16-bdf6-45cb-b038-f6565b61edda", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee", "user": { "name": "[email protected]", "type": "user" } } Record the tenantId and id parameter values from the output. You need these values during the OpenShift Container Platform installation. Create the service principal for your account: $ az ad sp create-for-rbac --role Contributor --name <service_principal> \ 1 --scopes /subscriptions/<subscription_id> 2 --years <years> 3 1 Specify the service principal name. 2 Specify the subscription ID. 3 Specify the number of years. By default, a service principal expires in one year. By using the --years option you can extend the validity of your service principal.
Example output Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { "appId": "ac461d78-bf4b-4387-ad16-7e32e328aec6", "displayName": "<service_principal>", "password": "00000000-0000-0000-0000-000000000000", "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee" } Record the values of the appId and password parameters from the output. You need these values during OpenShift Container Platform installation. Additional resources For more information about CCO modes, see About the Cloud Credential Operator . 2.5. Next steps Install an OpenShift Container Platform cluster: Installing a cluster quickly on Azure Stack Hub . Install an OpenShift Container Platform cluster on Azure Stack Hub with user-provisioned infrastructure by following Installing a cluster on Azure Stack Hub using ARM templates .
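As an optional check that is not part of the documented procedure, you can confirm the recorded service principal credentials by logging in with them directly; substitute the appId, password, and tenantId values recorded above (this sketch assumes direct service principal login is permitted in your environment):
$ az login --service-principal -u <appId> -p <password> --tenant <tenantId>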
[ "az cloud register -n AzureStackCloud --endpoint-resource-manager <endpoint> 1", "az cloud set -n AzureStackCloud", "az cloud update --profile 2019-03-01-hybrid", "az login", "az account list --refresh", "[ { \"cloudName\": \"AzureStackCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } } ]", "az account show", "{ \"environmentName\": \"AzureStackCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", 1 \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }", "az account set -s <subscription_id> 1", "az account show", "{ \"environmentName\": \"AzureStackCloud\", \"id\": \"33212d16-bdf6-45cb-b038-f6565b61edda\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }", "az ad sp create-for-rbac --role Contributor --name <service_principal> \\ 1 --scopes /subscriptions/<subscription_id> 2 --years <years> 3", "Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { \"appId\": \"ac461d78-bf4b-4387-ad16-7e32e328aec6\", \"displayName\": \"<service_principal>\", \"password\": \"00000000-0000-0000-0000-000000000000\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\" }" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_azure_stack_hub/installing-azure-stack-hub-account
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of the documentation. Click Submit Bug .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/deploying_openshift_data_foundation_using_ibm_z/providing-feedback-on-red-hat-documentation_ibmz