Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message.
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/release_notes_for_red_hat_build_of_openjdk_21.0.3/making-open-source-more-inclusive
17.4. Problems After Installation
17.4.1. Unable to IPL from *NWSSTG If you are experiencing difficulties when trying to IPL from *NWSSTG, you may not have created a PReP Boot partition set as active.
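If the partition is missing, a minimal sketch with parted might look like the following. The device /dev/sda, the disk label, and the partition size are illustrative assumptions; adapt them to your storage setup.
# create a small PReP boot partition at the start of the disk (placeholder device)
parted /dev/sda mklabel msdos
parted /dev/sda mkpart primary 0% 8MB
# flag it as a PReP boot partition and mark it active
parted /dev/sda set 1 prep on
parted /dev/sda set 1 boot on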
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/ch17s04
2.6. Using NetworkManager with Network Scripts
This section describes how to run a script and how to use custom commands in network scripts. The term network scripts refers to the script /etc/init.d/network and any other installed scripts it calls. Although NetworkManager provides the default networking service, scripts and NetworkManager can run in parallel and work together. Red Hat recommends testing them first. Running Network Script Run a network script only with the systemctl command: systemctl start|stop|restart|status network The systemctl utility clears any existing environment variables and ensures correct execution. In Red Hat Enterprise Linux 7, NetworkManager is started first, and /etc/init.d/network checks with NetworkManager to avoid tampering with NetworkManager's connections. NetworkManager is intended to be the primary application using sysconfig configuration files, and /etc/init.d/network is intended to be secondary. The /etc/init.d/network script runs: manually - using one of the systemctl commands start|stop|restart network, or on boot and shutdown if the network service is enabled - as a result of the systemctl enable network command. Running the script is a manual process; it does not react to events that happen after boot. Users can also call the ifup and ifdown scripts manually. Note The systemctl reload network.service command does not work due to technical limitations of initscripts. To apply a new configuration for the network service, use the restart command: This brings down and brings up all the Network Interface Cards (NICs) to load the new configuration. For more information, see the Red Hat Knowledgebase solution Reload and force-reload options for network service. Using Custom Commands in Network Scripts Custom commands in the /sbin/ifup-local, ifdown-pre-local, and ifdown-local scripts are only executed if these devices are controlled by the /etc/init.d/network service. The ifup-local file does not exist by default. If required, create it under the /sbin/ directory. The ifup-local script is read only by the initscripts and not by NetworkManager. To run a custom script using NetworkManager, create it under the dispatcher.d/ directory. See the section called "Running Dispatcher scripts". Important Modifying any files included with the initscripts package or related RPMs is not recommended. If a user modifies such files, Red Hat does not provide support. Custom tasks can run when network connections go up and down, both with the old network scripts and with NetworkManager. If NetworkManager is enabled, the ifup and ifdown scripts will ask NetworkManager whether NetworkManager manages the interface in question, which is determined from the "DEVICE=" line in the ifcfg file. Devices managed by NetworkManager: calling ifup When you call ifup and the device is managed by NetworkManager, there are two options: If the device is not already connected, then ifup asks NetworkManager to start the connection. If the device is already connected, then there is nothing to do. calling ifdown When you call ifdown and the device is managed by NetworkManager: ifdown asks NetworkManager to terminate the connection. Devices unmanaged by NetworkManager: If you call either ifup or ifdown, the script starts the connection using the older, non-NetworkManager mechanism that it has used since the time before NetworkManager existed. Running Dispatcher scripts NetworkManager provides a way to run additional custom scripts to start or stop services based on the connection status.
By default, the /etc/NetworkManager/dispatcher.d/ directory exists and NetworkManager runs the scripts in it, in alphabetical order. Each script must be an executable file owned by root and must have write permission only for the file owner. For more information about running NetworkManager dispatcher scripts, see the Red Hat Knowledgebase solution How to write a NetworkManager dispatcher script to apply ethtool commands.
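As an illustrative sketch (the file name, interface, and service below are hypothetical), a dispatcher script that restarts a service whenever eth0 comes up could look like this; NetworkManager passes the interface name as the first argument and the action, such as up or down, as the second:
#!/bin/bash
# /etc/NetworkManager/dispatcher.d/30-example -- hypothetical script name
IFACE=$1   # interface name reported by NetworkManager
ACTION=$2  # action, for example "up" or "down"
if [ "$IFACE" = "eth0" ] && [ "$ACTION" = "up" ]; then
    systemctl restart example.service   # hypothetical service to restart
fi
Make the script executable and writable only by its owner, for example with chown root:root and chmod 755.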
[ "~]# systemctl restart network.service" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-Using_NetworkManager_with_Network_Scripts
Chapter 9. Clustering
This section covers configuring Red Hat Single Sign-On to run in a cluster. There are a number of things you have to do when setting up a cluster, specifically: Pick an operation mode Configure a shared external database Set up a load balancer Supply a private network that supports IP multicast Picking an operation mode and configuring a shared database have been discussed earlier in this guide. In this chapter we'll discuss setting up a load balancer and supplying a private network. We'll also discuss some issues that you need to be aware of when booting up a host in the cluster. Note It is possible to cluster Red Hat Single Sign-On without IP multicast, but this topic is beyond the scope of this guide. For more information, see the JGroups chapter of the JBoss EAP Configuration Guide. 9.1. Recommended Network Architecture The recommended network architecture for deploying Red Hat Single Sign-On is to set up an HTTP/HTTPS load balancer on a public IP address that routes requests to Red Hat Single Sign-On servers sitting on a private network. This isolates all clustering connections and provides a good means of protecting the servers. Note By default, there is nothing to prevent unauthorized nodes from joining the cluster and broadcasting multicast messages. This is why cluster nodes should be in a private network, with a firewall protecting them from outside attacks. 9.2. Clustering Example Red Hat Single Sign-On comes with an out-of-the-box clustering demo that leverages domain mode. Review the Clustered Domain Example chapter for more details. 9.3. Setting Up a Load Balancer or Proxy This section discusses a number of things you need to configure before you can put a reverse proxy or load balancer in front of your clustered Red Hat Single Sign-On deployment. It also covers configuring the built-in load balancer that is used in the Clustered Domain Example. The following diagram illustrates the use of a load balancer. In this example, the load balancer serves as a reverse proxy between three clients and a cluster of three Red Hat Single Sign-On servers. Example Load Balancer Diagram 9.3.1. Identifying Client IP Addresses A few features in Red Hat Single Sign-On rely on the fact that the remote address of the HTTP client connecting to the authentication server is the real IP address of the client machine. Examples include: Event logs - a failed login attempt would be logged with the wrong source IP address SSL required - if SSL required is set to external (the default), it should require SSL for all external requests Authentication flows - a custom authentication flow that uses the IP address to, for example, show OTP only for external requests Dynamic Client Registration This can be problematic when you have a reverse proxy or load balancer in front of your Red Hat Single Sign-On authentication server. The usual setup is that you have a frontend proxy sitting on a public network that load balances and forwards requests to backend Red Hat Single Sign-On server instances located in a private network. There is some extra configuration you have to do in this scenario so that the actual client IP address is forwarded to and processed by the Red Hat Single Sign-On server instances. Specifically: Configure your reverse proxy or load balancer to properly set the X-Forwarded-For and X-Forwarded-Proto HTTP headers. Configure your reverse proxy or load balancer to preserve the original 'Host' HTTP header.
Configure the authentication server to read the client's IP address from the X-Forwarded-For header. Configuring your proxy to generate the X-Forwarded-For and X-Forwarded-Proto HTTP headers and to preserve the original Host HTTP header is beyond the scope of this guide. Take extra precautions to ensure that the X-Forwarded-For header is set by your proxy. If your proxy isn't configured correctly, then rogue clients can set this header themselves and trick Red Hat Single Sign-On into thinking the client is connecting from a different IP address than it actually is. This becomes especially important if you are doing any allowlisting or blocklisting of IP addresses. Beyond the proxy itself, there are a few things you need to configure on the Red Hat Single Sign-On side. If your proxy is forwarding requests via the HTTP protocol, then you need to configure Red Hat Single Sign-On to pull the client's IP address from the X-Forwarded-For header rather than from the network packet. To do this, open up the profile configuration file (standalone.xml, standalone-ha.xml, or domain.xml, depending on your operating mode) and look for the urn:jboss:domain:undertow:10.0 XML block. X-Forwarded-For HTTP Config <subsystem xmlns="urn:jboss:domain:undertow:10.0"> <buffer-cache name="default"/> <server name="default-server"> <ajp-listener name="ajp" socket-binding="ajp"/> <http-listener name="default" socket-binding="http" redirect-socket="https" proxy-address-forwarding="true"/> ... </server> ... </subsystem> Add the proxy-address-forwarding attribute to the http-listener element and set the value to true. If your proxy is using the AJP protocol instead of HTTP to forward requests (for example, Apache HTTPD with mod_cluster), then you have to configure things a little differently. Instead of modifying the http-listener, you need to add a filter to pull this information from the AJP packets. X-Forwarded-For AJP Config <subsystem xmlns="urn:jboss:domain:undertow:10.0"> <buffer-cache name="default"/> <server name="default-server"> <ajp-listener name="ajp" socket-binding="ajp"/> <http-listener name="default" socket-binding="http" redirect-socket="https"/> <host name="default-host" alias="localhost"> ... <filter-ref name="proxy-peer"/> </host> </server> ... <filters> ... <filter name="proxy-peer" class-name="io.undertow.server.handlers.ProxyPeerAddressHandler" module="io.undertow.core" /> </filters> </subsystem> 9.3.2. Enable HTTPS/SSL with a Reverse Proxy Assuming that your reverse proxy doesn't use port 8443 for SSL, you also need to configure what port HTTPS traffic is redirected to. <subsystem xmlns="urn:jboss:domain:undertow:10.0"> ... <http-listener name="default" socket-binding="http" proxy-address-forwarding="true" redirect-socket="proxy-https"/> ... </subsystem> Add the redirect-socket attribute to the http-listener element. The value should be proxy-https, which points to a socket binding you also need to define. Then add a new socket-binding element to the socket-binding-group element: <socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}"> ... <socket-binding name="proxy-https" port="443"/> ... </socket-binding-group> 9.3.3. Verify Configuration You can verify the reverse proxy or load balancer configuration by opening the path /auth/realms/master/.well-known/openid-configuration through the reverse proxy.
For example, if the reverse proxy address is https://acme.com/, then open the URL https://acme.com/auth/realms/master/.well-known/openid-configuration . This will show a JSON document listing a number of endpoints for Red Hat Single Sign-On. Make sure the endpoints start with the address (scheme, domain, and port) of your reverse proxy or load balancer. By doing this you make sure that Red Hat Single Sign-On is using the correct endpoint. You should also verify that Red Hat Single Sign-On sees the correct source IP address for requests. To check this, you can try to log in to the admin console with an invalid username and/or password. This should show a warning in the server log similar to this: Check that the value of ipAddress is the IP address of the machine you tried to log in from and not the IP address of the reverse proxy or load balancer. 9.3.4. Using the Built-In Load Balancer This section covers configuring the built-in load balancer that is discussed in the Clustered Domain Example. The Clustered Domain Example is only designed to run on one machine. To bring up a slave on another host, you'll need to: Edit the domain.xml file to point to your new host slave Copy the server distribution. You don't need the domain.xml, host.xml, or host-master.xml files. Nor do you need the standalone/ directory. Edit the host-slave.xml file to change the bind addresses used, or override them on the command line 9.3.4.1. Register a New Host With Load Balancer Let's look first at registering the new host slave with the load balancer configuration in domain.xml. Open this file and go to the undertow configuration in the load-balancer profile. Add a new host definition called remote-host3 within the reverse-proxy XML block. domain.xml reverse-proxy config <subsystem xmlns="urn:jboss:domain:undertow:10.0"> ... <handlers> <reverse-proxy name="lb-handler"> <host name="host1" outbound-socket-binding="remote-host1" scheme="ajp" path="/" instance-id="myroute1"/> <host name="host2" outbound-socket-binding="remote-host2" scheme="ajp" path="/" instance-id="myroute2"/> <host name="remote-host3" outbound-socket-binding="remote-host3" scheme="ajp" path="/" instance-id="myroute3"/> </reverse-proxy> </handlers> ... </subsystem> The outbound-socket-binding is a logical name pointing to a socket-binding configured later in the domain.xml file. The instance-id attribute must also be unique to the new host, as this value is used by a cookie to enable sticky sessions when load balancing. Next, go down to the load-balancer-sockets socket-binding-group and add the outbound-socket-binding for remote-host3. This new binding needs to point to the host and port of the new host. domain.xml outbound-socket-binding <socket-binding-group name="load-balancer-sockets" default-interface="public"> ... <outbound-socket-binding name="remote-host1"> <remote-destination host="localhost" port="8159"/> </outbound-socket-binding> <outbound-socket-binding name="remote-host2"> <remote-destination host="localhost" port="8259"/> </outbound-socket-binding> <outbound-socket-binding name="remote-host3"> <remote-destination host="192.168.0.5" port="8259"/> </outbound-socket-binding> </socket-binding-group> 9.3.4.2. Master Bind Addresses The next thing you'll have to do is change the public and management bind addresses for the master host. Either edit the domain.xml file as discussed in the Bind Addresses chapter, or specify these bind addresses on the command line as follows:
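For example, using the sample addresses from this guide:
domain.sh --host-config=host-master.xml -Djboss.bind.address=192.168.0.2 -Djboss.bind.address.management=192.168.0.2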
9.3.4.3. Host Slave Bind Addresses Next, you'll have to change the public, management, and domain controller bind addresses (jboss.domain.master-address). Either edit the host-slave.xml file or specify them on the command line as follows: The values of jboss.bind.address and jboss.bind.address.management pertain to the host slave's IP address. The value of jboss.domain.master.address needs to be the IP address of the domain controller, which is the management address of the master host. 9.3.5. Configuring Other Load Balancers See the load balancing section in the JBoss EAP Configuration Guide for information on how to use other software-based load balancers. 9.4. Sticky sessions A typical cluster deployment consists of a load balancer (reverse proxy) and two or more Red Hat Single Sign-On servers on a private network. For performance purposes, it may be useful if the load balancer forwards all requests related to a particular browser session to the same Red Hat Single Sign-On backend node. The reason is that Red Hat Single Sign-On uses an Infinispan distributed cache under the covers to save data related to the current authentication session and user session. The Infinispan distributed caches are configured with one owner by default. That means that a particular session is saved on just one cluster node, and the other nodes need to look up the session remotely if they want to access it. For example, if an authentication session with ID 123 is saved in the Infinispan cache on node1, and node2 then needs to look up this session, it has to send a request to node1 over the network to retrieve the particular session entity. It is beneficial if a particular session entity is always available locally, which can be achieved with the help of sticky sessions. The workflow in a cluster environment with a public frontend load balancer and two backend Red Hat Single Sign-On nodes can look like this: The user sends an initial request to see the Red Hat Single Sign-On login screen. This request is served by the frontend load balancer, which forwards it to some random node (for example, node1). Strictly speaking, the node doesn't need to be random; it can be chosen according to other criteria (client IP address, and so on). It all depends on the implementation and configuration of the underlying load balancer (reverse proxy). Red Hat Single Sign-On creates an authentication session with a random ID (for example, 123) and saves it to the Infinispan cache. The Infinispan distributed cache assigns the primary owner of the session based on the hash of the session ID. See the Infinispan documentation for more details around this. Let's assume that Infinispan assigned node2 to be the owner of this session. Red Hat Single Sign-On creates the cookie AUTH_SESSION_ID with the format <session-id>.<owner-node-id>. In our example case, it will be 123.node2. The response is returned to the user with the Red Hat Single Sign-On login screen and the AUTH_SESSION_ID cookie in the browser. From this point, it is beneficial if the load balancer forwards all subsequent requests to node2, as this is the node that owns the authentication session with ID 123, and hence Infinispan can look up this session locally. After authentication is finished, the authentication session is converted to a user session, which will also be saved on node2 because it has the same ID, 123. Sticky sessions are not mandatory for the cluster setup; however, they are good for performance for the reasons mentioned above. You need to configure your load balancer to apply sticky sessions based on the AUTH_SESSION_ID cookie. How exactly to do this depends on your load balancer.
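As one illustrative sketch, with Apache HTTP Server and mod_proxy_balancer you could declare routes that match each node's jboss.node.name and stick on the AUTH_SESSION_ID cookie; mod_proxy treats the part of the cookie value after the final period as the route, which matches the <session-id>.<owner-node-id> format described above. The host names here are hypothetical:
<Proxy "balancer://rhsso">
    # route must equal the jboss.node.name of the backend (hypothetical hosts)
    BalancerMember "http://node1.example.com:8080" route=node1
    BalancerMember "http://node2.example.com:8080" route=node2
    ProxySet stickysession=AUTH_SESSION_ID
</Proxy>
ProxyPass "/auth" "balancer://rhsso/auth"
ProxyPassReverse "/auth" "balancer://rhsso/auth"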
It is recommended on the Red Hat Single Sign-On side to use the system property jboss.node.name during startup, with the value corresponding to the name of your route. For example, -Djboss.node.name=node1 will use node1 to identify the route. This route will be used by Infinispan caches and will be attached to the AUTH_SESSION_ID cookie when the node is the owner of the particular key. Here is an example of the startup command using this system property: Typically, in a production environment, the route name should match the name of your backend host, but this is not required. You can use a different route name, for example, if you want to hide the host name of your Red Hat Single Sign-On server inside your private network. 9.4.1. Disable adding the route Some load balancers can be configured to add the route information by themselves instead of relying on the backend Red Hat Single Sign-On node. However, as described above, letting Red Hat Single Sign-On add the route is recommended. Performance improves when done this way, because Red Hat Single Sign-On knows which node owns a particular session and can route to that node, which is not necessarily the local node. You can disable adding route information to the AUTH_SESSION_ID cookie, if you prefer, by adding the following to the RHSSO_HOME/standalone/configuration/standalone-ha.xml file in the Red Hat Single Sign-On subsystem configuration: <subsystem xmlns="urn:jboss:domain:keycloak-server:1.1"> ... <spi name="stickySessionEncoder"> <provider name="infinispan" enabled="true"> <properties> <property name="shouldAttachRoute" value="false"/> </properties> </provider> </spi> </subsystem> 9.5. Multicast Network Setup Out-of-the-box clustering support requires IP multicast. Multicast is a network broadcast protocol. This protocol is used at boot time to discover and join the cluster. It is also used to broadcast messages for the replication and invalidation of distributed caches used by Red Hat Single Sign-On. The clustering subsystem for Red Hat Single Sign-On runs on the JGroups stack. Out of the box, the bind addresses for clustering are bound to a private network interface with 127.0.0.1 as the default IP address. You have to edit the standalone-ha.xml or domain.xml sections discussed in the Bind Addresses chapter. private network config <interfaces> ... <interface name="private"> <inet-address value="${jboss.bind.address.private:127.0.0.1}"/> </interface> </interfaces> <socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}"> ... <socket-binding name="jgroups-mping" interface="private" port="0" multicast-address="${jboss.default.multicast.address:230.0.0.4}" multicast-port="45700"/> <socket-binding name="jgroups-tcp" interface="private" port="7600"/> <socket-binding name="jgroups-tcp-fd" interface="private" port="57600"/> <socket-binding name="jgroups-udp" interface="private" port="55200" multicast-address="${jboss.default.multicast.address:230.0.0.4}" multicast-port="45688"/> <socket-binding name="jgroups-udp-fd" interface="private" port="54200"/> <socket-binding name="modcluster" port="0" multicast-address="224.0.1.105" multicast-port="23364"/> ... </socket-binding-group> The things you'll want to configure are jboss.bind.address.private and jboss.default.multicast.address, as well as the ports of the services on the clustering stack.
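For example, rather than editing the XML, these properties can be overridden on the command line at startup; the addresses below are illustrative:
bin/standalone.sh -c standalone-ha.xml -Djboss.bind.address.private=192.168.1.10 -Djboss.default.multicast.address=230.0.0.7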
Note It is possible to cluster Red Hat Single Sign-On without IP multicast, but this topic is beyond the scope of this guide. For more information, see JGroups in the JBoss EAP Configuration Guide. 9.6. Securing Cluster Communication When cluster nodes are isolated on a private network, joining the cluster or viewing its communication requires access to that private network. In addition, you can also enable authentication and encryption for cluster communication. As long as your private network is secure, it is not necessary to enable authentication and encryption. Red Hat Single Sign-On does not send very sensitive information on the cluster in either case. If you want to enable authentication and encryption for clustering communication, see Securing a Cluster in the JBoss EAP Configuration Guide. 9.7. Serialized Cluster Startup Red Hat Single Sign-On cluster nodes are allowed to boot concurrently. When a Red Hat Single Sign-On server instance boots up, it may do some database migration, importing, or first-time initializations. A DB lock is used to prevent start actions from conflicting with one another when cluster nodes boot up concurrently. By default, the maximum timeout for this lock is 900 seconds. If a node is waiting on this lock for more than the timeout, it will fail to boot. Typically you won't need to increase or decrease the default value, but just in case, it's possible to configure it in the standalone.xml, standalone-ha.xml, or domain.xml file in your distribution. The location of this file depends on your operating mode. <spi name="dblock"> <provider name="jpa" enabled="true"> <properties> <property name="lockWaitTimeout" value="900"/> </properties> </provider> </spi> 9.8. Booting the Cluster Booting Red Hat Single Sign-On in a cluster depends on your operating mode: Standalone Mode Domain Mode You may need to use additional parameters or system properties. For example, the parameter -b for the binding host, or the system property jboss.node.name to specify the name of the route, as described in the Sticky sessions section. 9.9. Troubleshooting Note that when you run a cluster, you should see a message similar to this in the log of both cluster nodes: If you see just one node mentioned, it's possible that your cluster hosts have not joined together. Usually it's best practice to have your cluster nodes on a private network without a firewall restricting communication between them. The firewall can instead be enabled just on the public access points to your network. If for some reason you still need to have a firewall enabled on the cluster nodes, you will need to open some ports. The default values are UDP port 55200 and multicast port 45688 with multicast address 230.0.0.4. Note that you may need more ports opened if you want to enable additional features like diagnostics for your JGroups stack. Red Hat Single Sign-On delegates most of the clustering work to Infinispan/JGroups. For more information, see JGroups in the JBoss EAP Configuration Guide. If you are interested in failover support (high availability), evictions, expiration, and cache tuning, see Chapter 10, Server Cache Configuration.
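For the firewall scenario mentioned in the Troubleshooting section, a sketch with firewalld might look like the following, assuming the default JGroups socket bindings shown earlier; adjust the ports if you changed them:
# open the default JGroups UDP unicast and multicast ports
firewall-cmd --permanent --add-port=55200/udp
firewall-cmd --permanent --add-port=45688/udp
firewall-cmd --reload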
[ "<subsystem xmlns=\"urn:jboss:domain:undertow:10.0\"> <buffer-cache name=\"default\"/> <server name=\"default-server\"> <ajp-listener name=\"ajp\" socket-binding=\"ajp\"/> <http-listener name=\"default\" socket-binding=\"http\" redirect-socket=\"https\" proxy-address-forwarding=\"true\"/> </server> </subsystem>", "<subsystem xmlns=\"urn:jboss:domain:undertow:10.0\"> <buffer-cache name=\"default\"/> <server name=\"default-server\"> <ajp-listener name=\"ajp\" socket-binding=\"ajp\"/> <http-listener name=\"default\" socket-binding=\"http\" redirect-socket=\"https\"/> <host name=\"default-host\" alias=\"localhost\"> <filter-ref name=\"proxy-peer\"/> </host> </server> <filters> <filter name=\"proxy-peer\" class-name=\"io.undertow.server.handlers.ProxyPeerAddressHandler\" module=\"io.undertow.core\" /> </filters> </subsystem>", "<subsystem xmlns=\"urn:jboss:domain:undertow:10.0\"> <http-listener name=\"default\" socket-binding=\"http\" proxy-address-forwarding=\"true\" redirect-socket=\"proxy-https\"/> </subsystem>", "<socket-binding-group name=\"standard-sockets\" default-interface=\"public\" port-offset=\"USD{jboss.socket.binding.port-offset:0}\"> <socket-binding name=\"proxy-https\" port=\"443\"/> </socket-binding-group>", "08:14:21,287 WARN XNIO-1 task-45 [org.keycloak.events] type=LOGIN_ERROR, realmId=master, clientId=security-admin-console, userId=8f20d7ba-4974-4811-a695-242c8fbd1bf8, ipAddress=X.X.X.X, error=invalid_user_credentials, auth_method=openid-connect, auth_type=code, redirect_uri=http://localhost:8080/auth/admin/master/console/?redirect_fragment=%2Frealms%2Fmaster%2Fevents-settings, code_id=a3d48b67-a439-4546-b992-e93311d6493e, username=admin", "<subsystem xmlns=\"urn:jboss:domain:undertow:10.0\"> <handlers> <reverse-proxy name=\"lb-handler\"> <host name=\"host1\" outbound-socket-binding=\"remote-host1\" scheme=\"ajp\" path=\"/\" instance-id=\"myroute1\"/> <host name=\"host2\" outbound-socket-binding=\"remote-host2\" scheme=\"ajp\" path=\"/\" instance-id=\"myroute2\"/> <host name=\"remote-host3\" outbound-socket-binding=\"remote-host3\" scheme=\"ajp\" path=\"/\" instance-id=\"myroute3\"/> </reverse-proxy> </handlers> </subsystem>", "<socket-binding-group name=\"load-balancer-sockets\" default-interface=\"public\"> <outbound-socket-binding name=\"remote-host1\"> <remote-destination host=\"localhost\" port=\"8159\"/> </outbound-socket-binding> <outbound-socket-binding name=\"remote-host2\"> <remote-destination host=\"localhost\" port=\"8259\"/> </outbound-socket-binding> <outbound-socket-binding name=\"remote-host3\"> <remote-destination host=\"192.168.0.5\" port=\"8259\"/> </outbound-socket-binding> </socket-binding-group>", "domain.sh --host-config=host-master.xml -Djboss.bind.address=192.168.0.2 -Djboss.bind.address.management=192.168.0.2", "domain.sh --host-config=host-slave.xml -Djboss.bind.address=192.168.0.5 -Djboss.bind.address.management=192.168.0.5 -Djboss.domain.master.address=192.168.0.2", "cd USDRHSSO_NODE1 ./standalone.sh -c standalone-ha.xml -Djboss.socket.binding.port-offset=100 -Djboss.node.name=node1", "<subsystem xmlns=\"urn:jboss:domain:keycloak-server:1.1\"> <spi name=\"stickySessionEncoder\"> <provider name=\"infinispan\" enabled=\"true\"> <properties> <property name=\"shouldAttachRoute\" value=\"false\"/> </properties> </provider> </spi> </subsystem>", "<interfaces> <interface name=\"private\"> <inet-address value=\"USD{jboss.bind.address.private:127.0.0.1}\"/> </interface> </interfaces> <socket-binding-group name=\"standard-sockets\" 
default-interface=\"public\" port-offset=\"USD{jboss.socket.binding.port-offset:0}\"> <socket-binding name=\"jgroups-mping\" interface=\"private\" port=\"0\" multicast-address=\"USD{jboss.default.multicast.address:230.0.0.4}\" multicast-port=\"45700\"/> <socket-binding name=\"jgroups-tcp\" interface=\"private\" port=\"7600\"/> <socket-binding name=\"jgroups-tcp-fd\" interface=\"private\" port=\"57600\"/> <socket-binding name=\"jgroups-udp\" interface=\"private\" port=\"55200\" multicast-address=\"USD{jboss.default.multicast.address:230.0.0.4}\" multicast-port=\"45688\"/> <socket-binding name=\"jgroups-udp-fd\" interface=\"private\" port=\"54200\"/> <socket-binding name=\"modcluster\" port=\"0\" multicast-address=\"224.0.1.105\" multicast-port=\"23364\"/> </socket-binding-group>", "<spi name=\"dblock\"> <provider name=\"jpa\" enabled=\"true\"> <properties> <property name=\"lockWaitTimeout\" value=\"900\"/> </properties> </provider> </spi>", "bin/standalone.sh --server-config=standalone-ha.xml", "bin/domain.sh --host-config=host-master.xml bin/domain.sh --host-config=host-slave.xml", "INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-10,shared=udp) ISPN000094: Received new cluster view: [node1/keycloak|1] (2) [node1/keycloak, node2/keycloak]" ]
https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.4/html/server_installation_and_configuration_guide/clustering
Chapter 1. Compiling your Red Hat build of Quarkus applications to native executables
As an application developer, you can use Red Hat build of Quarkus 3.2 to create microservices written in Java that run on OpenShift Container Platform and serverless environments. Quarkus applications can run as regular Java applications (on top of a Java Virtual Machine) or be compiled into native executables. Applications compiled to native executables have a smaller memory footprint and faster startup times than their Java counterparts. This guide shows you how to compile the Red Hat build of Quarkus 3.2 Getting Started project into a native executable and how to configure and test the native executable. You will need the application that you created earlier in Getting started with Red Hat build of Quarkus. Building a native executable with Red Hat build of Quarkus covers: Building a native executable with a single command by using a container runtime such as Podman or Docker Creating a custom container image using the produced native executable Creating a container image using the OpenShift Container Platform Docker build strategy Deploying the Quarkus native application to OpenShift Container Platform Configuring the native executable Testing the native executable Prerequisites Have OpenJDK 17 installed and the JAVA_HOME environment variable set to specify the location of the Java SDK. Log in to the Red Hat Customer Portal to download Red Hat build of OpenJDK from the Software Downloads page. An Open Container Initiative (OCI) compatible container runtime, such as Podman or Docker. A completed Quarkus Getting Started project. To learn how to build the Quarkus Getting Started project, see Getting started with Quarkus. Alternatively, you can download the Quarkus quickstart archive or clone the Quarkus Quickstarts Git repository. The sample project is in the getting-started directory. 1.1. Producing a native executable A native binary is an executable that is created to run on a specific operating system and CPU architecture. The following list outlines some examples of a native executable: An ELF binary for Linux AMD64 An EXE binary for Windows AMD64 An ELF binary for ARM64 When you build a native executable, one advantage is that your application and dependencies, including the JVM, are packaged into a single file. The native executable for your application contains the following items: The compiled application code. The required Java libraries. A reduced version of the Java Virtual Machine (JVM) for improved application startup times and minimal disk and memory footprint, which is also tailored for the application code and its dependencies. To produce a native executable from your Quarkus application, you can select either an in-container build or a local-host build. The following table explains the different building options that you can use: Table 1.1.
Building options for producing a native executable Building option Requires Uses Results in Benefits In-container build - Supported A container runtime, for example, Podman or Docker The default registry.access.redhat.com/quarkus/mandrel-23-rhel8:23.0 builder image A Linux 64-bit executable using the CPU architecture of the host GraalVM does not need to be set up locally, which makes your CI pipelines run more efficiently Local-host build - Only supported upstream A local installation of GraalVM or Mandrel Its local installation as a default for the quarkus.native.builder-image property An executable that has the same operating system and CPU architecture as the machine on which the build is executed An alternative for developers who are not allowed to, or do not want to, use tools such as Docker or Podman. Overall, it is faster than the in-container build approach. Important Red Hat build of Quarkus 3.2 only supports the building of native Linux executables by using a Java 17-based Red Hat build of Quarkus Native builder image, which is a productized distribution of Mandrel. While other images are available in the community, they are not supported in the product, so you should not use them for production builds that you want Red Hat to provide support for. Applications whose source is written for Java 11, with no Java 12-17 features used, can still be compiled into a native executable by using the Java 17-based Mandrel 23.0 base image. Building native executables by using Oracle GraalVM Community Edition (CE), Mandrel community edition, or any other distribution of GraalVM is not supported for Red Hat build of Quarkus. 1.1.1. Producing a native executable by using an in-container build To create a native executable and run the native image tests, use the native profile that is provided by Red Hat build of Quarkus for an in-container build. Prerequisites Podman or Docker is installed. The container has access to at least 8GB of memory. Procedure Open the Getting Started project pom.xml file, and verify that the project includes the native profile: <profiles> <profile> <id>native</id> <activation> <property> <name>native</name> </property> </activation> <properties> <skipITs>false</skipITs> <quarkus.package.type>native</quarkus.package.type> </properties> </profile> </profiles> Build a native executable in one of the following ways: Using Maven: For Docker: ./mvnw package -Dnative -Dquarkus.native.container-build=true For Podman: ./mvnw package -Dnative -Dquarkus.native.container-build=true -Dquarkus.native.container-runtime=podman Using the Quarkus CLI: For Docker: quarkus build --native -Dquarkus.native.container-build=true For Podman: quarkus build --native -Dquarkus.native.container-build=true -Dquarkus.native.container-runtime=podman Step results These commands create a *-runner binary in the target directory, where the following applies: The *-runner file is the built native binary produced by Quarkus. The target directory is a directory that Maven creates when you build a Maven application. Important Compiling a Quarkus application to a native executable consumes a large amount of memory during analysis and optimization. You can limit the amount of memory used during native compilation by setting the quarkus.native.native-image-xmx configuration property. Setting low memory limits might increase the build time. To run the native executable, enter the following command: ./target/*-runner Additional resources Native executable configuration properties
1.1.2. Producing a native executable by using a local-host build If you are not using Docker or Podman, use the Quarkus local-host build option to create and run a native executable. Using the local-host build approach is faster than using containers and is suitable for machines that use a Linux operating system. Important Using the following procedure in production is not supported by Red Hat build of Quarkus. Use this method only when testing, or as a backup approach when Docker or Podman is not available. Prerequisites A local installation of Mandrel or GraalVM, correctly configured according to the Building a native executable guide. Additionally, for a GraalVM installation, native-image must also be installed. Procedure For GraalVM or Mandrel, build a native executable in one of the following ways: Using Maven: ./mvnw package -Dnative Using the Quarkus CLI: quarkus build --native Step results These commands create a *-runner binary in the target directory, where the following applies: The *-runner file is the built native binary produced by Quarkus. The target directory is a directory that Maven creates when you build a Maven application. Note When you build the native executable, the prod profile is enabled unless modified in the quarkus.profile property. Run the native executable: ./target/*-runner Additional resources For more information, see the Producing a native executable section of the "Building a native executable" guide in the Quarkus community. 1.2. Creating a custom container image You can create a container image from your Quarkus application by using one of the following methods: Creating a container manually Creating a container by using the OpenShift Container Platform Docker build Important Compiling a Quarkus application to a native executable consumes a large amount of memory during analysis and optimization. You can limit the amount of memory used during native compilation by setting the quarkus.native.native-image-xmx configuration property. Setting low memory limits might increase the build time. 1.2.1. Creating a container manually This section shows you how to manually create a container image with your application for Linux AMD64. When you produce a native image by using the Quarkus Native container, the build creates an executable that targets Linux AMD64. If your host operating system is different from Linux AMD64, you cannot run the binary directly and you need to create a container manually. Your Quarkus Getting Started project includes a Dockerfile.native in the src/main/docker directory with the following content: FROM registry.access.redhat.com/ubi8/ubi-minimal:8.8 WORKDIR /work/ RUN chown 1001 /work \ && chmod "g+rwX" /work \ && chown 1001:root /work COPY --chown=1001:root target/*-runner /work/application EXPOSE 8080 USER 1001 ENTRYPOINT ["./application", "-Dquarkus.http.host=0.0.0.0"] Note Universal Base Image (UBI) The following list displays the suitable images for use with Dockerfiles: Red Hat Universal Base Image 8 (UBI8). This base image is designed and engineered to be the base layer for all of your containerized applications, middleware, and utilities. Red Hat Universal Base Image 8 Minimal (UBI8-minimal). A stripped-down UBI8 image that uses microdnf as a package manager. All Red Hat Base images are available on the Container images catalog site.
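To confirm what kind of binary an in-container build produced, and hence why it cannot run directly on a non-Linux host, you can inspect it with the standard file utility; the output shown is indicative rather than exact:
file target/*-runner
# indicative output: ELF 64-bit LSB executable, x86-64, ... (a Linux AMD64 binary)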
Procedure Build a native Linux executable by using one of the following methods: Docker: ./mvnw package -Dnative -Dquarkus.native.container-build=true Podman: ./mvnw package -Dnative -Dquarkus.native.container-build=true -Dquarkus.native.container-runtime=podman Build the container image by using one of the following methods: Docker: docker build -f src/main/docker/Dockerfile.native -t quarkus-quickstart/getting-started . Podman: podman build -f src/main/docker/Dockerfile.native -t quarkus-quickstart/getting-started . Run the container by using one of the following methods: Docker: docker run -i --rm -p 8080:8080 quarkus-quickstart/getting-started Podman: podman run -i --rm -p 8080:8080 quarkus-quickstart/getting-started 1.2.2. Creating a container by using the OpenShift Docker build You can create a container image for your Quarkus application by using the OpenShift Container Platform Docker build strategy. This strategy creates a container image by using a build configuration in the cluster. Prerequisites You have access to an OpenShift Container Platform cluster and the latest version of the oc tool installed. For information about installing oc, see Installing the CLI in the Installing and configuring OpenShift Container Platform clusters guide. A URL for the OpenShift Container Platform API endpoint. Procedure Log in to the OpenShift CLI: oc login -u <username_url> Create a new project in OpenShift: oc new-project <project_name> Create a build config based on the src/main/docker/Dockerfile.native file: cat src/main/docker/Dockerfile.native | oc new-build --name <build_name> --strategy=docker --dockerfile - Build the project: oc start-build <build_name> --from-dir . Deploy the project to OpenShift Container Platform: oc new-app <build_name> Expose the services: oc expose svc/<build_name> 1.3. Native executable configuration properties Configuration properties define how the native executable is generated. You can configure your Quarkus application by using the application.properties file. Configuration properties The following table lists the configuration properties that you can set to define how the native executable is generated: Property Description Type Default quarkus.native.debug.enabled If debug is enabled and debug symbols are generated, the symbols are generated in a separate .debug file. boolean false quarkus.native.resources.excludes A comma-separated list of globs to match resource paths that should not be added to the native image. list of strings quarkus.native.additional-build-args Additional arguments to pass to the build process. list of strings quarkus.native.enable-http-url-handler Enables the HTTP URL handler. This allows you to do URL.openConnection() for HTTP URLs. boolean true quarkus.native.enable-https-url-handler Enables the HTTPS URL handler. This allows you to do URL.openConnection() for HTTPS URLs. boolean false quarkus.native.enable-all-security-services Adds all security services to the native image. boolean false quarkus.native.add-all-charsets Adds all character sets to the native image. This increases the image size. boolean false quarkus.native.graalvm-home Contains the path of the GraalVM distribution. string ${GRAALVM_HOME:} quarkus.native.java-home Contains the path of the JDK. File ${java.home} quarkus.native.native-image-xmx The maximum Java heap used to generate the native image. string quarkus.native.debug-build-process Waits for a debugger to attach to the build process before running the native image build.
This is an advanced option for those familiar with GraalVM internals. boolean false quarkus.native.publish-debug-build-process-port Publishes the debug port when building with Docker if debug-build-process is true . boolean true quarkus.native.cleanup-server Restarts the native image server. boolean false quarkus.native.enable-isolates Enables isolates to improve memory management. boolean true quarkus.native.enable-fallback-images Creates a JVM-based fallback image if the native image fails. boolean false quarkus.native.enable-server Uses the native image server. This can speed up compilation but can result in lost changes due to cache invalidation issues. boolean false quarkus.native.auto-service-loader-registration Automatically registers all META-INF/services entries. boolean false quarkus.native.dump-proxies Dumps the bytecode of all proxies for inspection. boolean false quarkus.native.container-build Builds using a container runtime. Docker is used by default. boolean false quarkus.native.builder-image The Docker image used to build the image. string registry.access.redhat.com/quarkus/mandrel-23-rhel8:23.0 quarkus.native.container-runtime The container runtime used to build the image. For example, Docker. string quarkus.native.container-runtime-options Options to pass to the container runtime. list of strings quarkus.native.enable-vm-inspection Enables VM introspection in the image. boolean false quarkus.native.full-stack-traces Enables full stack traces in the image. boolean true quarkus.native.enable-reports Generates reports on call paths and included packages, classes, or methods. boolean false quarkus.native.report-exception-stack-traces Reports exceptions with a full stack trace. boolean true quarkus.native.report-errors-at-runtime Reports errors at runtime. This might cause your application to fail at runtime if you use unsupported features. boolean false quarkus.native.resources.includes A comma-separated list of globs to match resource paths that should be added to the native image. Use a slash ( / ) character as a path separator on all platforms. Globs must not start with a slash. For example, if you have src/main/resources/ignored.png and src/main/resources/foo/selected.png in your source tree and one of your dependency JARs contains a bar/some.txt file, with quarkus.native.resources.includes set to foo/**,bar/**/*.txt , the files src/main/resources/foo/selected.png and bar/some.txt will be included in the native image, while src/main/resources/ignored.png will not be included. For more information, see the following table, which lists the supported glob features. list of strings quarkus.native.debug.enabled Enables debugging and generates debug symbols in a separate .debug file. When used with quarkus.native.container-build , Red Hat build of Quarkus only supports Red Hat Enterprise Linux or other Linux distributions, as they contain the binutils package that installs the objcopy utility that splits the debug info from the native image. boolean false Supported glob features The following table lists the supported glob features and descriptions: Character Feature description * Matches a possibly-empty sequence of characters that does not contain slash ( / ). ** Matches a possibly-empty sequence of characters that might contain slash ( / ). ? Matches one character, but not slash. [abc] Matches one character specified in the bracket, but not slash. [a-z] Matches one character from the range specified in the bracket, but not slash.
[!abc] Matches one character not specified in the bracket; does not match slash. [!a-z] Matches one character outside the range specified in the bracket; does not match slash. {one,two,three} Matches any of the alternating tokens separated by commas; the tokens can contain wildcards, nested alternations, and ranges. \ The escape character. There are three levels of escaping: the application.properties parser, the MicroProfile Config list converter, and the Glob parser. All three levels use the backslash as the escape character. Additional resources Configuring your Red Hat build of Quarkus applications 1.3.1. Configuring memory consumption for Red Hat build of Quarkus native compilation Compiling a Red Hat build of Quarkus application to a native executable consumes a large amount of memory during analysis and optimization. You can limit the amount of memory used during native compilation by setting the quarkus.native.native-image-xmx configuration property. Setting low memory limits might increase the build time. Procedure Use one of the following methods to set a value for the quarkus.native.native-image-xmx property to limit the memory consumption during the native image build time: Using the application.properties file: quarkus.native.native-image-xmx=<maximum_memory> Setting system properties: mvn package -Dnative -Dquarkus.native.container-build=true -Dquarkus.native.native-image-xmx=<maximum_memory> This command builds the native executable with Docker. To use Podman, add the -Dquarkus.native.container-runtime=podman argument. Note For example, to set the memory limit to 6 GB, enter quarkus.native.native-image-xmx=6g . The value must be a multiple of 1024 and greater than 2MB. Append the letter m or M to indicate megabytes, or g or G to indicate gigabytes. 1.4. Testing the native executable Test the application in native mode to test the functionality of the native executable. Use the @QuarkusIntegrationTest annotation to build the native executable and run tests against the HTTP endpoints. Important The following example shows how to test a native executable with a local installation of GraalVM or Mandrel. Before you begin, consider the following points: This scenario is not supported by Red Hat build of Quarkus, as outlined in Producing a native executable . The native executable you are testing with here must match the operating system and architecture of the host. Therefore, this procedure will not work on macOS or with an in-container build. Procedure Open the pom.xml file and verify that the build section has the following elements: <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-failsafe-plugin</artifactId> <version>${surefire-plugin.version}</version> <executions> <execution> <goals> <goal>integration-test</goal> <goal>verify</goal> </goals> <configuration> <systemPropertyVariables> <native.image.path>${project.build.directory}/${project.build.finalName}-runner</native.image.path> <java.util.logging.manager>org.jboss.logmanager.LogManager</java.util.logging.manager> <maven.home>${maven.home}</maven.home> </systemPropertyVariables> </configuration> </execution> </executions> </plugin> The Maven Failsafe plugin ( maven-failsafe-plugin ) runs the integration test and indicates the location of the native executable that is generated.
Open the src/test/java/org/acme/GreetingResourceIT.java file and verify that it includes the following content: package org.acme; import io.quarkus.test.junit.QuarkusIntegrationTest; @QuarkusIntegrationTest 1 public class GreetingResourceIT extends GreetingResourceTest { 2 // Execute the same tests but in native mode. } 1 Use another test runner that starts the application from the native file before running the tests. The executable is retrieved by using the native.image.path system property configured in the Maven Failsafe plugin. 2 This example extends the GreetingResourceTest , but you can also create a new test. Run the test: ./mvnw verify -Dnative The following example shows the output of this command: ./mvnw verify -Dnative .... GraalVM Native Image: Generating 'getting-started-1.0.0-SNAPSHOT-runner' (executable)... ======================================================================================================================== [1/8] Initializing... (6.6s @ 0.22GB) Java version: 17.0.7+7, vendor version: Mandrel-23.0.0.0-Final Graal compiler: optimization level: 2, target machine: x86-64-v3 C compiler: gcc (redhat, x86_64, 13.2.1) Garbage collector: Serial GC (max heap size: 80% of RAM) 2 user-specific feature(s) - io.quarkus.runner.Feature: Auto-generated class by Red Hat build of Quarkus from the existing extensions - io.quarkus.runtime.graal.DisableLoggingFeature: Disables INFO logging during the analysis phase [2/8] Performing analysis... [******] (40.0s @ 2.05GB) 10,318 (86.40%) of 11,942 types reachable 15,064 (57.36%) of 26,260 fields reachable 52,128 (55.75%) of 93,501 methods reachable 3,298 types, 109 fields, and 2,698 methods registered for reflection 63 types, 68 fields, and 55 methods registered for JNI access 4 native libraries: dl, pthread, rt, z [3/8] Building universe... (5.9s @ 1.31GB) [4/8] Parsing methods... [**] (3.7s @ 2.08GB) [5/8] Inlining methods... [***] (2.0s @ 1.92GB) [6/8] Compiling methods... [******] (34.4s @ 3.25GB) [7/8] Layouting methods... [**] (4.1s @ 1.78GB) [8/8] Creating image... [**] (4.5s @ 2.31GB) 20.93MB (48.43%) for code area: 33,233 compilation units 21.95MB (50.80%) for image heap: 285,664 objects and 8 resources 337.06kB ( 0.76%) for other data 43.20MB in total .... [INFO] [INFO] --- maven-failsafe-plugin:3.0.0-M7:integration-test (default) @ getting-started --- [INFO] Using auto detected provider org.apache.maven.surefire.junitplatform.JUnitPlatformProvider [INFO] [INFO] ------------------------------------------------------- [INFO] T E S T S [INFO] ------------------------------------------------------- [INFO] Running org.acme.GreetingResourceIT __ ____ __ _____ ___ __ ____ ______ --/ __ \/ / / / _ | / _ \/ //_/ / / / __/ -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \ --\___\_\____/_/ |_/_/|_/_/|_|\____/___/ 2023-08-28 14:04:52,681 INFO [io.quarkus] (main) getting-started 1.0.0-SNAPSHOT native (powered by Red Hat build of Quarkus 3.2.9.Final) started in 0.038s. Listening on: http://0.0.0.0:8081 2023-08-28 14:04:52,682 INFO [io.quarkus] (main) Profile prod activated.
2023-08-28 14:04:52,682 INFO [io.quarkus] (main) Installed features: [cdi, resteasy-reactive, smallrye-context-propagation, vertx] [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.696 s - in org.acme.GreetingResourceIT [INFO] [INFO] Results: [INFO] [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0 [INFO] [INFO] [INFO] --- maven-failsafe-plugin:3.0.0-M7:verify (default) @ getting-started --- Note Quarkus waits 60 seconds for the native image to start before automatically failing the native tests. You can change this duration by configuring the quarkus.test.wait-time system property. You can extend the wait time by using the following command, where <duration> is the wait time in seconds: Note Native tests run using the prod profile by default unless modified in the quarkus.test.native-image-profile property. 1.4.1. Excluding tests when running as a native executable When you run tests against your native executable, you can only run black-box testing, for example, interacting with the HTTP endpoints of your application. Note Black box refers to the hidden internal workings of a product or program, such as in black-box testing. Because tests do not run natively, you cannot link against your application's code like you do when running tests on the JVM. Therefore, in your native tests, you cannot inject beans. You can share your test class between your JVM and native executions and exclude certain tests by using the @DisabledOnNativeImage annotation to run those tests only on the JVM. 1.4.2. Testing an existing native executable By using the Failsafe Maven plugin, you can test against the existing executable build. You can run multiple sets of tests in stages on the binary after it is built. Note To test the native executable that you produced with Quarkus, use the available Maven commands. There are no equivalent Quarkus CLI commands to complete this task by using the command line. Procedure Run a test against a native executable that is already built: ./mvnw test-compile failsafe:integration-test This command runs the test against the existing native image by using the Failsafe Maven plugin. Alternatively, you can specify the path to the native executable with the following command, where <path> is the native image path: ./mvnw test-compile failsafe:integration-test -Dnative.image.path=<path> 1.5. Additional resources Deploying your Red Hat build of Quarkus applications to OpenShift Container Platform Developing and compiling your Red Hat build of Quarkus applications with Apache Maven Quarkus community: Building a native executable Apache Maven Project The UBI Image Page The UBI-minimal Image Page The List of UBI-minimal Tags Revised on 2024-10-10 17:18:46 UTC
[ "<profiles> <profile> <id>native</id> <activation> <property> <name>native</name> </property> </activation> <properties> <skipITs>false</skipITs> <quarkus.package.type>native</quarkus.package.type> </properties> </profile> </profiles>", "./mvnw package -Dnative -Dquarkus.native.container-build=true", "./mvnw package -Dnative -Dquarkus.native.container-build=true -Dquarkus.native.container-runtime=podman", "quarkus build --native -Dquarkus.native.container-build=true", "quarkus build --native -Dquarkus.native.container-build=true -Dquarkus.native.container-runtime=podman", "./target/*-runner", "./mvnw package -Dnative", "quarkus build --native", "./target/*-runner", "FROM registry.access.redhat.com/ubi8/ubi-minimal:8.8 WORKDIR /work/ RUN chown 1001 /work && chmod \"g+rwX\" /work && chown 1001:root /work COPY --chown=1001:root target/*-runner /work/application EXPOSE 8080 USER 1001 ENTRYPOINT [\"./application\", \"-Dquarkus.http.host=0.0.0.0\"]", "registry.access.redhat.com/ubi8/ubi:8.8", "registry.access.redhat.com/ubi8/ubi-minimal:8.8", "./mvnw package -Dnative -Dquarkus.native.container-build=true", "./mvnw package -Dnative -Dquarkus.native.container-build=true -Dquarkus.native.container-runtime=podman", "docker build -f src/main/docker/Dockerfile.native -t quarkus-quickstart/getting-started .", "build -f src/main/docker/Dockerfile.native -t quarkus-quickstart/getting-started .", "docker run -i --rm -p 8080:8080 quarkus-quickstart/getting-started", "run -i --rm -p 8080:8080 quarkus-quickstart/getting-started", "login -u <username_url>", "new-project <project_name>", "cat src/main/docker/Dockerfile.native | oc new-build --name <build_name> --strategy=docker --dockerfile -", "start-build <build_name> --from-dir .", "new-app <build_name>", "expose svc/ <build_name>", "quarkus.native.native-image-xmx= <maximum_memory>", "mvn package -Dnative -Dquarkus.native.container-build=true -Dquarkus.native.native-image-xmx=<maximum_memory>", "<plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-failsafe-plugin</artifactId> <version>USD{surefire-plugin.version}</version> <executions> <execution> <goals> <goal>integration-test</goal> <goal>verify</goal> </goals> <configuration> <systemPropertyVariables> <native.image.path>USD{project.build.directory}/USD{project.build.finalName}-runner</native.image.path> <java.util.logging.manager>org.jboss.logmanager.LogManager</java.util.logging.manager> <maven.home>USD{maven.home}</maven.home> </systemPropertyVariables> </configuration> </execution> </executions> </plugin>", "package org.acme; import io.quarkus.test.junit.QuarkusIntegrationTest; @QuarkusIntegrationTest 1 public class GreetingResourceIT extends GreetingResourceTest { 2 // Execute the same tests but in native mode. }", "./mvnw verify -Dnative", "./mvnw verify -Dnative . GraalVM Native Image: Generating 'getting-started-1.0.0-SNAPSHOT-runner' (executable) ======================================================================================================================== [1/8] Initializing... 
(6.6s @ 0.22GB) Java version: 17.0.7+7, vendor version: Mandrel-23.0.0.0-Final Graal compiler: optimization level: 2, target machine: x86-64-v3 C compiler: gcc (redhat, x86_64, 13.2.1) Garbage collector: Serial GC (max heap size: 80% of RAM) 2 user-specific feature(s) - io.quarkus.runner.Feature: Auto-generated class by Red Hat build of Quarkus from the existing extensions - io.quarkus.runtime.graal.DisableLoggingFeature: Disables INFO logging during the analysis phase [2/8] Performing analysis... [******] (40.0s @ 2.05GB) 10,318 (86.40%) of 11,942 types reachable 15,064 (57.36%) of 26,260 fields reachable 52,128 (55.75%) of 93,501 methods reachable 3,298 types, 109 fields, and 2,698 methods registered for reflection 63 types, 68 fields, and 55 methods registered for JNI access 4 native libraries: dl, pthread, rt, z [3/8] Building universe... (5.9s @ 1.31GB) [4/8] Parsing methods... [**] (3.7s @ 2.08GB) [5/8] Inlining methods... [***] (2.0s @ 1.92GB) [6/8] Compiling methods... [******] (34.4s @ 3.25GB) [7/8] Layouting methods... [**] (4.1s @ 1.78GB) [8/8] Creating image... [**] (4.5s @ 2.31GB) 20.93MB (48.43%) for code area: 33,233 compilation units 21.95MB (50.80%) for image heap: 285,664 objects and 8 resources 337.06kB ( 0.76%) for other data 43.20MB in total . [INFO] [INFO] --- maven-failsafe-plugin:3.0.0-M7:integration-test (default) @ getting-started --- [INFO] Using auto detected provider org.apache.maven.surefire.junitplatform.JUnitPlatformProvider [INFO] [INFO] ------------------------------------------------------- [INFO] T E S T S [INFO] ------------------------------------------------------- [INFO] Running org.acme.GreetingResourceIT __ ____ __ _____ ___ __ ____ ______ --/ __ \\/ / / / _ | / _ \\/ //_/ / / / __/ -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\\ --\\___\\_\\____/_/ |_/_/|_/_/|_|\\____/___/ 2023-08-28 14:04:52,681 INFO [io.quarkus] (main) getting-started 1.0.0-SNAPSHOT native (powered by Red Hat build of Quarkus 3.2.9.Final) started in 0.038s. Listening on: http://0.0.0.0:8081 2023-08-28 14:04:52,682 INFO [io.quarkus] (main) Profile prod activated. 2023-08-28 14:04:52,682 INFO [io.quarkus] (main) Installed features: [cdi, resteasy-reactive, smallrye-context-propagation, vertx] [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.696 s - in org.acme.GreetingResourceIT [INFO] [INFO] Results: [INFO] [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0 [INFO] [INFO] [INFO] --- maven-failsafe-plugin:3.0.0-M7:verify (default) @ getting-started ---", "./mvnw verify -Dnative -Dquarkus.test.wait-time= <duration>", "./mvnw test-compile failsafe:integration-test", "./mvnw test-compile failsafe:integration-test -Dnative.image.path= <path>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.2/html/compiling_your_red_hat_build_of_quarkus_applications_to_native_executables/assembly_quarkus-building-native-executable_quarkus-building-native-executable
Chapter 3. ConsoleExternalLogLink [console.openshift.io/v1]
Chapter 3. ConsoleExternalLogLink [console.openshift.io/v1] Description ConsoleExternalLogLink is an extension for customizing OpenShift web console log links. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object Required spec 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ConsoleExternalLogLinkSpec is the desired log link configuration. The log link will appear on the logs tab of the pod details page. 3.1.1. .spec Description ConsoleExternalLogLinkSpec is the desired log link configuration. The log link will appear on the logs tab of the pod details page. Type object Required hrefTemplate text Property Type Description hrefTemplate string hrefTemplate is an absolute secure URL (must use https) for the log link including variables to be replaced. Variables are specified in the URL with the format ${variableName}, for instance, ${containerName} and will be replaced with the corresponding values from the resource. Resource is a pod. Supported variables are: - ${resourceName} - name of the resource that contains the logs - ${resourceUID} - UID of the resource that contains the logs - e.g. 11111111-2222-3333-4444-555555555555 - ${containerName} - name of the resource's container that contains the logs - ${resourceNamespace} - namespace of the resource that contains the logs - ${resourceNamespaceUID} - namespace UID of the resource that contains the logs - ${podLabels} - JSON representation of labels matching the pod with the logs - e.g. {"key1":"value1","key2":"value2"} e.g., https://example.com/logs?resourceName=${resourceName}&containerName=${containerName}&resourceNamespace=${resourceNamespace}&podLabels=${podLabels} namespaceFilter string namespaceFilter is a regular expression used to restrict a log link to a matching set of namespaces (e.g., ^openshift- ). The string is converted into a regular expression using the JavaScript RegExp constructor. If not specified, links will be displayed for all the namespaces. text string text is the display text for the link.
3.2. API endpoints The following API endpoints are available: /apis/console.openshift.io/v1/consoleexternalloglinks DELETE : delete collection of ConsoleExternalLogLink GET : list objects of kind ConsoleExternalLogLink POST : create a ConsoleExternalLogLink /apis/console.openshift.io/v1/consoleexternalloglinks/{name} DELETE : delete a ConsoleExternalLogLink GET : read the specified ConsoleExternalLogLink PATCH : partially update the specified ConsoleExternalLogLink PUT : replace the specified ConsoleExternalLogLink /apis/console.openshift.io/v1/consoleexternalloglinks/{name}/status GET : read status of the specified ConsoleExternalLogLink PATCH : partially update status of the specified ConsoleExternalLogLink PUT : replace status of the specified ConsoleExternalLogLink 3.2.1. /apis/console.openshift.io/v1/consoleexternalloglinks Table 3.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of ConsoleExternalLogLink Table 3.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results.
If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list, the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 3.3. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ConsoleExternalLogLink Table 3.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields.
Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list, the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 3.5. HTTP responses HTTP code Response body 200 - OK ConsoleExternalLogLinkList schema 401 - Unauthorized Empty HTTP method POST Description create a ConsoleExternalLogLink Table 3.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters.
This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.7. Body parameters Parameter Type Description body ConsoleExternalLogLink schema Table 3.8. HTTP responses HTTP code Response body 200 - OK ConsoleExternalLogLink schema 201 - Created ConsoleExternalLogLink schema 202 - Accepted ConsoleExternalLogLink schema 401 - Unauthorized Empty 3.2.2. /apis/console.openshift.io/v1/consoleexternalloglinks/{name} Table 3.9. Global path parameters Parameter Type Description name string name of the ConsoleExternalLogLink Table 3.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a ConsoleExternalLogLink Table 3.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be a non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 3.12. Body parameters Parameter Type Description body DeleteOptions schema Table 3.13. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ConsoleExternalLogLink Table 3.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset
Table 3.15. HTTP responses HTTP code Response body 200 - OK ConsoleExternalLogLink schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ConsoleExternalLogLink Table 3.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.17. Body parameters Parameter Type Description body Patch schema Table 3.18. HTTP responses HTTP code Response body 200 - OK ConsoleExternalLogLink schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ConsoleExternalLogLink Table 3.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered.
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.20. Body parameters Parameter Type Description body ConsoleExternalLogLink schema Table 3.21. HTTP responses HTTP code Response body 200 - OK ConsoleExternalLogLink schema 201 - Created ConsoleExternalLogLink schema 401 - Unauthorized Empty 3.2.3. /apis/console.openshift.io/v1/consoleexternalloglinks/{name}/status Table 3.22. Global path parameters Parameter Type Description name string name of the ConsoleExternalLogLink Table 3.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified ConsoleExternalLogLink Table 3.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 3.25. HTTP responses HTTP code Response body 200 - OK ConsoleExternalLogLink schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ConsoleExternalLogLink Table 3.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.27. Body parameters Parameter Type Description body Patch schema
Table 3.28. HTTP responses HTTP code Response body 200 - OK ConsoleExternalLogLink schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ConsoleExternalLogLink Table 3.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.30. Body parameters Parameter Type Description body ConsoleExternalLogLink schema Table 3.31. HTTP responses HTTP code Response body 200 - OK ConsoleExternalLogLink schema 201 - Created ConsoleExternalLogLink schema 401 - Unauthorized Empty
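To tie the spec fields above together, the following is a minimal sketch of a ConsoleExternalLogLink manifest; the link text, aggregator URL, and namespace filter are hypothetical values chosen only to illustrate the variable substitution described in section 3.1.1:

apiVersion: console.openshift.io/v1
kind: ConsoleExternalLogLink
metadata:
  name: example-log-link
spec:
  # Display text for the link on the pod's logs tab.
  text: View logs in Example Aggregator
  # Must be an https URL; the console substitutes the ${...} variables per pod.
  hrefTemplate: https://logs.example.com/search?namespace=${resourceNamespace}&pod=${resourceName}&container=${containerName}
  # Optional: show the link only for namespaces matching this JavaScript regular expression.
  namespaceFilter: ^openshift-

You could create this resource with oc apply -f and then open the Logs tab of any pod in a matching namespace to see the link.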
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/console_apis/consoleexternalloglink-console-openshift-io-v1
Chapter 11. Enabling the Red Hat OpenShift Data Foundation console plugin
Chapter 11. Enabling the Red Hat OpenShift Data Foundation console plugin Enable the console plugin option if it was not automatically enabled after you installed the OpenShift Data Foundation Operator. The console plugin provides a custom interface that is included in the Web Console. You can enable the console plugin option either from the graphical user interface (GUI) or the command-line interface (CLI). Prerequisites You have administrative access to the OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. Procedure From the user interface In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click the OpenShift Data Foundation operator. Enable the console plugin option. In the Details tab, click the pencil icon under Console plugin . Select Enable , and click Save . From the command-line interface Run the following command to enable the console plugin option: Verification steps After the console plugin option is enabled, a pop-up with the message Web console update is available appears in the GUI. Click Refresh web console in this pop-up to apply the console changes. In the Web Console, navigate to Storage and verify that OpenShift Data Foundation is available.
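As an additional check from the command-line interface, you can confirm that the plugin is now listed in the console Operator configuration; this is a minimal sketch, and the expected output is an assumption about your cluster state:

# List the console plugins that are currently enabled on the cluster.
oc get console.operator cluster -o jsonpath='{.spec.plugins}{"\n"}'
# The output is expected to include "odf-console" once the patch is applied.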
[ "oc patch console.operator cluster -n openshift-storage --type json -p '[{\"op\": \"add\", \"path\": \"/spec/plugins\", \"value\": [\"odf-console\"]}]'" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/troubleshooting_openshift_data_foundation/enabling-the-red-hat-openshift-data-foundation-console-plugin-option_rhodf
CI/CD overview
CI/CD overview OpenShift Dedicated 4 Contains information about CI/CD for OpenShift Dedicated Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/cicd_overview/index
Chapter 2. Configuring your firewall
Chapter 2. Configuring your firewall If you use a firewall, you must configure it so that OpenShift Container Platform can access the sites that it requires to function. You must always grant access to some sites, and you must grant access to more sites if you use Red Hat Insights, the Telemetry service, a cloud provider to host your cluster, or certain build strategies. 2.1. Configuring your firewall for OpenShift Container Platform Before you install OpenShift Container Platform, you must configure your firewall to grant access to the sites that OpenShift Container Platform requires. There are no special configuration considerations for services that run only on controller nodes compared to worker nodes. Note If your environment has a dedicated load balancer in front of your OpenShift Container Platform cluster, review the allowlists between your firewall and load balancer to prevent unwanted network restrictions to your cluster. Procedure Add the following registry URLs to your firewall's allowlist: URL Port Function registry.redhat.io 443 Provides core container images access.redhat.com 443 Hosts a signature store that a container client requires for verifying images pulled from registry.access.redhat.com . In a firewall environment, ensure that this resource is on the allowlist. registry.access.redhat.com 443 Hosts all the container images that are stored on the Red Hat Ecosystem Catalog, including core container images. quay.io 443 Provides core container images cdn.quay.io 443 Provides core container images cdn01.quay.io 443 Provides core container images cdn02.quay.io 443 Provides core container images cdn03.quay.io 443 Provides core container images cdn04.quay.io 443 Provides core container images cdn05.quay.io 443 Provides core container images cdn06.quay.io 443 Provides core container images sso.redhat.com 443 The https://console.redhat.com site uses authentication from sso.redhat.com You can use the wildcards *.quay.io and *.openshiftapps.com instead of cdn.quay.io and cdn0[1-6].quay.io in your allowlist. You can use the wildcard *.access.redhat.com to simplify the configuration and ensure that all subdomains, including registry.access.redhat.com , are allowed. When you add a site, such as quay.io , to your allowlist, do not add a wildcard entry, such as *.quay.io , to your denylist. In most cases, image registries use a content delivery network (CDN) to serve images. If a firewall blocks access, image downloads are denied when the initial download request redirects to a hostname such as cdn01.quay.io . Set your firewall's allowlist to include any site that provides resources for a language or framework that your builds require. If you do not disable Telemetry, you must grant access to the following URLs to access Red Hat Insights: URL Port Function cert-api.access.redhat.com 443 Required for Telemetry api.access.redhat.com 443 Required for Telemetry infogw.api.openshift.com 443 Required for Telemetry console.redhat.com 443 Required for Telemetry and for insights-operator If you use Alibaba Cloud, Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) to host your cluster, you must grant access to the URLs that provide the cloud provider API and DNS for that cloud: Cloud URL Port Function Alibaba *.aliyuncs.com 443 Required to access Alibaba Cloud services and resources. 
Review the Alibaba endpoints_config.go file to find the exact endpoints to allow for the regions that you use. AWS aws.amazon.com 443 Used to install and manage clusters in an AWS environment. *.amazonaws.com Alternatively, if you choose to not use a wildcard for AWS APIs, you must include the following URLs in your allowlist: 443 Required to access AWS services and resources. Review the AWS Service Endpoints in the AWS documentation to find the exact endpoints to allow for the regions that you use. ec2.amazonaws.com 443 Used to install and manage clusters in an AWS environment. events.amazonaws.com 443 Used to install and manage clusters in an AWS environment. iam.amazonaws.com 443 Used to install and manage clusters in an AWS environment. route53.amazonaws.com 443 Used to install and manage clusters in an AWS environment. *.s3.amazonaws.com 443 Used to install and manage clusters in an AWS environment. *.s3.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. *.s3.dualstack.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. sts.amazonaws.com 443 Used to install and manage clusters in an AWS environment. sts.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. tagging.us-east-1.amazonaws.com 443 Used to install and manage clusters in an AWS environment. This endpoint is always us-east-1 , regardless of the region the cluster is deployed in. ec2.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. elasticloadbalancing.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. servicequotas.<aws_region>.amazonaws.com 443 Required. Used to confirm quotas for deploying the service. tagging.<aws_region>.amazonaws.com 443 Allows the assignment of metadata about AWS resources in the form of tags. *.cloudfront.net 443 Used to provide access to CloudFront. If you use the AWS Security Token Service (STS) and the private S3 bucket, you must provide access to CloudFront. GCP *.googleapis.com 443 Required to access GCP services and resources. Review Cloud Endpoints in the GCP documentation to find the endpoints to allow for your APIs. accounts.google.com 443 Required to access your GCP account. Microsoft Azure management.azure.com 443 Required to access Microsoft Azure services and resources. Review the Microsoft Azure REST API reference in the Microsoft Azure documentation to find the endpoints to allow for your APIs. *.blob.core.windows.net 443 Required to download Ignition files. login.microsoftonline.com 443 Required to access Microsoft Azure services and resources. Review the Azure REST API reference in the Microsoft Azure documentation to find the endpoints to allow for your APIs. Allowlist the following URLs: URL Port Function *.apps.<cluster_name>.<base_domain> 443 Required to access the default cluster routes unless you set an ingress wildcard during installation. api.openshift.com 443 Required both for your cluster token and to check if updates are available for the cluster. console.redhat.com 443 Required for your cluster token. mirror.openshift.com 443 Required to access mirrored installation content and images. This site is also a source of release image signatures, although the Cluster Version Operator needs only a single functioning source. quayio-production-s3.s3.amazonaws.com 443 Required to access Quay image content in AWS. 
rhcos.mirror.openshift.com 443 Required to download Red Hat Enterprise Linux CoreOS (RHCOS) images. sso.redhat.com 443 The https://console.redhat.com site uses authentication from sso.redhat.com storage.googleapis.com/openshift-release 443 A source of release image signatures, although the Cluster Version Operator needs only a single functioning source. Operators require route access to perform health checks. Specifically, the authentication and web console Operators connect to two routes to verify that the routes work. If you are the cluster administrator and do not want to allow *.apps.<cluster_name>.<base_domain> , then allow these routes: oauth-openshift.apps.<cluster_name>.<base_domain> canary-openshift-ingress-canary.apps.<cluster_name>.<base_domain> console-openshift-console.apps.<cluster_name>.<base_domain> , or the hostname that is specified in the spec.route.hostname field of the consoles.operator/cluster object if the field is not empty. Allowlist the following URLs for optional third-party content: URL Port Function registry.connect.redhat.com 443 Required for all third-party images and certified operators. rhc4tp-prod-z8cxf-image-registry-us-east-1-evenkyleffocxqvofrk.s3.dualstack.us-east-1.amazonaws.com 443 Provides access to container images hosted on registry.connect.redhat.com oso-rhc4tp-docker-registry.s3-us-west-2.amazonaws.com 443 Required for the Sonatype Nexus and F5 Big IP Operators. If you use a default Red Hat Network Time Protocol (NTP) server, allow the following URLs: 1.rhel.pool.ntp.org 2.rhel.pool.ntp.org 3.rhel.pool.ntp.org Note If you do not use a default Red Hat NTP server, verify the NTP server for your platform and allow it in your firewall. Additional resources OpenID Connect requirements for AWS STS 2.2. OpenShift Container Platform network flow matrix The network flow matrix describes the ingress flows to OpenShift Container Platform services. The network information in the matrix is accurate for both bare-metal and cloud environments. Use the information in the network flow matrix to help you manage ingress traffic. You can restrict ingress traffic to essential flows to improve network security. To view or download the raw CSV content, see this resource . Additionally, consider the following dynamic port ranges when managing ingress traffic: 9000-9999 : Host-level services 30000-32767 : Kubernetes node ports 49152-65535 : Dynamic or private ports Note The network flow matrix describes ingress traffic flows for a base OpenShift Container Platform installation. It does not describe network flows for additional components, such as optional Operators available from the Red Hat Marketplace. The matrix does not apply to hosted control planes, Red Hat build of MicroShift, or standalone clusters.
Table 2.1. Network flow matrix Direction Protocol Port Namespace Service Pod Container Node Role Optional Ingress TCP 22 Host system service sshd master TRUE Ingress TCP 53 openshift-dns dns-default dns-default dns master FALSE Ingress TCP 80 openshift-ingress router-default router-default router master FALSE Ingress TCP 111 Host system service rpcbind master TRUE Ingress TCP 443 openshift-ingress router-default router-default router master FALSE Ingress TCP 1936 openshift-ingress router-default router-default router master FALSE Ingress TCP 2379 openshift-etcd etcd etcd etcdctl master FALSE Ingress TCP 2380 openshift-etcd healthz etcd etcd master FALSE Ingress TCP 5050 openshift-machine-api ironic-proxy ironic-proxy master FALSE Ingress TCP 5051 openshift-machine-api metal3-state metal3 metal3-httpd master FALSE Ingress TCP 6080 openshift-kube-apiserver kube-apiserver kube-apiserver-insecure-readyz master FALSE Ingress TCP 6180 openshift-machine-api metal3-state metal3 metal3-httpd master FALSE Ingress TCP 6183 openshift-machine-api metal3-state metal3 metal3-httpd master FALSE Ingress TCP 6385 openshift-machine-api ironic-proxy ironic-proxy master FALSE Ingress TCP 6388 openshift-machine-api metal3-state metal3 metal3-httpd master FALSE Ingress TCP 6443 openshift-kube-apiserver apiserver kube-apiserver kube-apiserver master FALSE Ingress TCP 8080 openshift-network-operator network-operator network-operator master FALSE Ingress TCP 8798 openshift-machine-config-operator machine-config-daemon machine-config-daemon machine-config-daemon master FALSE Ingress TCP 9001 openshift-machine-config-operator machine-config-daemon machine-config-daemon kube-rbac-proxy master FALSE Ingress TCP 9099 openshift-cluster-version cluster-version-operator cluster-version-operator cluster-version-operator master FALSE Ingress TCP 9100 openshift-monitoring node-exporter node-exporter kube-rbac-proxy master FALSE Ingress TCP 9103 openshift-ovn-kubernetes ovn-kubernetes-node ovnkube-node kube-rbac-proxy-node master FALSE Ingress TCP 9104 openshift-network-operator metrics network-operator network-operator master FALSE Ingress TCP 9105 openshift-ovn-kubernetes ovn-kubernetes-node ovnkube-node kube-rbac-proxy-ovn-metrics master FALSE Ingress TCP 9107 openshift-ovn-kubernetes egressip-node-healthcheck ovnkube-node ovnkube-controller master FALSE Ingress TCP 9108 openshift-ovn-kubernetes ovn-kubernetes-control-plane ovnkube-control-plane kube-rbac-proxy master FALSE Ingress TCP 9192 openshift-cluster-machine-approver machine-approver machine-approver kube-rbac-proxy master FALSE Ingress TCP 9258 openshift-cloud-controller-manager-operator machine-approver cluster-cloud-controller-manager cluster-cloud-controller-manager master FALSE Ingress TCP 9444 openshift-kni-infra haproxy haproxy master FALSE Ingress TCP 9445 openshift-kni-infra haproxy haproxy master FALSE Ingress TCP 9447 openshift-machine-api metal3-baremetal-operator master FALSE Ingress TCP 9537 Host system service crio-metrics master FALSE Ingress TCP 9637 openshift-machine-config-operator kube-rbac-proxy-crio kube-rbac-proxy-crio kube-rbac-proxy-crio master FALSE Ingress TCP 9978 openshift-etcd etcd etcd etcd-metrics master FALSE Ingress TCP 9979 openshift-etcd etcd etcd etcd-metrics master FALSE Ingress TCP 9980 openshift-etcd etcd etcd etcd master FALSE Ingress TCP 10250 Host system service kubelet master FALSE Ingress TCP 10256 openshift-ovn-kubernetes ovnkube ovnkube ovnkube-controller master FALSE Ingress TCP 10257 openshift-kube-controller-manager 
kube-controller-manager kube-controller-manager kube-controller-manager master FALSE Ingress TCP 10258 openshift-cloud-controller-manager-operator cloud-controller cloud-controller-manager cloud-controller-manager master FALSE Ingress TCP 10259 openshift-kube-scheduler scheduler openshift-kube-scheduler kube-scheduler master FALSE Ingress TCP 10260 openshift-cloud-controller-manager-operator cloud-controller cloud-controller-manager cloud-controller-manager master FALSE Ingress TCP 10300 openshift-cluster-csi-drivers csi-livenessprobe csi-driver-node csi-driver master FALSE Ingress TCP 10309 openshift-cluster-csi-drivers csi-node-driver csi-driver-node csi-node-driver-registrar master FALSE Ingress TCP 10357 openshift-kube-apiserver openshift-kube-apiserver-healthz kube-apiserver kube-apiserver-check-endpoints master FALSE Ingress TCP 17697 openshift-kube-apiserver openshift-kube-apiserver-healthz kube-apiserver kube-apiserver-check-endpoints master FALSE Ingress TCP 18080 openshift-kni-infra coredns coredns master FALSE Ingress TCP 22623 openshift-machine-config-operator machine-config-server machine-config-server machine-config-server master FALSE Ingress TCP 22624 openshift-machine-config-operator machine-config-server machine-config-server machine-config-server master FALSE Ingress UDP 53 openshift-dns dns-default dns-default dns master FALSE Ingress UDP 111 Host system service rpcbind master TRUE Ingress UDP 6081 openshift-ovn-kubernetes ovn-kubernetes geneve master FALSE Ingress TCP 22 Host system service sshd worker TRUE Ingress TCP 53 openshift-dns dns-default dns-default dns worker FALSE Ingress TCP 80 openshift-ingress router-default router-default router worker FALSE Ingress TCP 111 Host system service rpcbind worker TRUE Ingress TCP 443 openshift-ingress router-default router-default router worker FALSE Ingress TCP 1936 openshift-ingress router-default router-default router worker FALSE Ingress TCP 8798 openshift-machine-config-operator machine-config-daemon machine-config-daemon machine-config-daemon worker FALSE Ingress TCP 9001 openshift-machine-config-operator machine-config-daemon machine-config-daemon kube-rbac-proxy worker FALSE Ingress TCP 9100 openshift-monitoring node-exporter node-exporter kube-rbac-proxy worker FALSE Ingress TCP 9103 openshift-ovn-kubernetes ovn-kubernetes-node ovnkube-node kube-rbac-proxy-node worker FALSE Ingress TCP 9105 openshift-ovn-kubernetes ovn-kubernetes-node ovnkube-node kube-rbac-proxy-ovn-metrics worker FALSE Ingress TCP 9107 openshift-ovn-kubernetes egressip-node-healthcheck ovnkube-node ovnkube-controller worker FALSE Ingress TCP 9537 Host system service crio-metrics worker FALSE Ingress TCP 9637 openshift-machine-config-operator kube-rbac-proxy-crio kube-rbac-proxy-crio kube-rbac-proxy-crio worker FALSE Ingress TCP 10250 Host system service kubelet worker FALSE Ingress TCP 10256 openshift-ovn-kubernetes ovnkube ovnkube ovnkube-controller worker TRUE Ingress TCP 10300 openshift-cluster-csi-drivers csi-livenessprobe csi-driver-node csi-driver worker FALSE Ingress TCP 10309 openshift-cluster-csi-drivers csi-node-driver csi-driver-node csi-node-driver-registrar worker FALSE Ingress TCP 18080 openshift-kni-infra coredns coredns worker FALSE Ingress UDP 53 openshift-dns dns-default dns-default dns worker FALSE Ingress UDP 111 Host system service rpcbind worker TRUE Ingress UDP 6081 openshift-ovn-kubernetes ovn-kubernetes geneve worker FALSE
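Before installation, you can sanity-check a restrictive egress firewall by probing a sample of the required hosts from a machine inside the network; the following is a minimal sketch, and the host list is only a subset that you would extend to match the tables above:

#!/bin/sh
# Verify that a TLS connection can be established to each host on port 443.
# An HTTP error such as 404 still counts as success here; only connection
# failures indicate a firewall problem.
for host in registry.redhat.io quay.io cdn01.quay.io api.openshift.com mirror.openshift.com; do
  if curl -s -o /dev/null --connect-timeout 5 "https://${host}/"; then
    echo "OK    ${host}"
  else
    echo "FAIL  ${host} (check the firewall allowlist)"
  fi
done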
null
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installation_configuration/configuring-firewall
Chapter 64. region
Chapter 64. region This chapter describes the commands under the region command. 64.1. region create Create new region Usage: Table 64.1. Positional arguments Value Summary <region-id> New region id Table 64.2. Command arguments Value Summary -h, --help Show this help message and exit --parent-region <region-id> Parent region id --description <description> New region description Table 64.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 64.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 64.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 64.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. Implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 64.2. region delete Delete region(s) Usage: Table 64.7. Positional arguments Value Summary <region-id> Region id(s) to delete Table 64.8. Command arguments Value Summary -h, --help Show this help message and exit 64.3. region list List regions Usage: Table 64.9. Command arguments Value Summary -h, --help Show this help message and exit --parent-region <region-id> Filter by parent region id Table 64.10. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 64.11. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 64.12. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 64.13. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. Implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 64.4. region set Set region properties Usage: Table 64.14. Positional arguments Value Summary <region-id> Region to modify Table 64.15. Command arguments Value Summary -h, --help Show this help message and exit --parent-region <region-id> New parent region id --description <description> New region description 64.5. region show Display region details Usage: Table 64.16. Positional arguments Value Summary <region-id> Region to display Table 64.17. Command arguments Value Summary -h, --help Show this help message and exit
Table 64.18. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 64.19. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 64.20. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 64.21. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. Implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
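As a quick illustration of the subcommands above, a typical session might look like the following; the region IDs and descriptions are hypothetical values:

# Create a region and a child region, then list the children.
openstack region create --description "Primary datacenter" RegionOne
openstack region create --parent-region RegionOne RegionOne-Edge
openstack region list --parent-region RegionOne
# Update the description, then inspect the result as shell variables.
openstack region set --description "Primary datacenter (Dublin)" RegionOne
openstack region show RegionOne -f shell --prefix region_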
[ "openstack region create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--parent-region <region-id>] [--description <description>] <region-id>", "openstack region delete [-h] <region-id> [<region-id> ...]", "openstack region list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--parent-region <region-id>]", "openstack region set [-h] [--parent-region <region-id>] [--description <description>] <region-id>", "openstack region show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <region-id>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/command_line_interface_reference/region
9.4. New Resource Agents
9.4. New Resource Agents Red Hat Enterprise Linux 7 includes a number of resource agents. A resource agent is a standardized interface for a cluster resource. A resource agent translates a standard set of operations into steps specific to the resource or application, and interprets their results as success or failure.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.0_release_notes/sect-red_hat_enterprise_linux-7.0_release_notes-clustering-new_resource_agents
Networking
Networking OpenShift Container Platform 4.11 Configuring and managing cluster networking Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/networking/index
8.5. Additional Resources
8.5. Additional Resources The following are resources that explain more about network interfaces. 8.5.1. Installed Documentation /usr/share/doc/initscripts- <version> /sysconfig.txt - A guide to available options for network configuration files, including IPv6 options not covered in this chapter. /usr/share/doc/iproute- <version> /ip-cref.ps - This file contains a wealth of information about the ip command, which can be used to manipulate routing tables, among other things. Use the ggv or kghostview application to view this file.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-networkscripts-resources
27.2.2. Running the At Service
27.2.2. Running the At Service At and Batch jobs are both picked up by the atd service. This section provides information on how to start, stop, and restart the atd service, and shows how to enable it in a particular runlevel. For more information on the concept of runlevels and how to manage system services in Red Hat Enterprise Linux in general, see Chapter 12, Services and Daemons . 27.2.2.1. Starting the At Service To determine if the service is running, use the command service atd status . To run the atd service in the current session, type the following at a shell prompt as root : service atd start To configure the service to start automatically at boot, use the following command: chkconfig atd on Note It is recommended that you configure the service to start automatically at boot. This command enables the service in runlevels 2, 3, 4, and 5. Alternatively, you can use the Service Configuration utility as described in Section 12.2.1.1, "Enabling and Disabling a Service" . 27.2.2.2. Stopping the At Service To stop the atd service, type the following at a shell prompt as root : service atd stop To disable starting the service at boot time, use the following command: chkconfig atd off This command disables the service in all runlevels. Alternatively, you can use the Service Configuration utility as described in Section 12.2.1.1, "Enabling and Disabling a Service" . 27.2.2.3. Restarting the At Service To restart the atd service, type the following at a shell prompt: service atd restart This command stops the service and starts it again in quick succession.
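With the atd service running, you can verify it end to end by scheduling a throwaway job; this is a minimal sketch, and the job command, time specification, and job number are arbitrary examples:

# Schedule a one-off job for five minutes from now.
echo "touch /tmp/atd-test" | at now + 5 minutes
# List pending jobs for the current user.
atq
# Remove a pending job by its number if it is no longer needed.
atrm 1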
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sect-atd-running
Chapter 9. Managing your installation manager configurations using the Management CLI
Chapter 9. Managing your installation manager configurations using the Management CLI The Management CLI enables you to manage your installation manager configuration. You can perform actions like viewing channels, and subscribing to and unsubscribing from channels. For more information about managing channels on JBoss EAP 8.0, see JBoss EAP channels in the Installation guide . 9.1. Listing subscribed channels The following example describes how to use the Management CLI to list all the channels you are subscribed to in a stand-alone server and a managed domain. Procedure Launch the Management CLI: EAP_HOME/bin/jboss-cli.sh List subscribed channels in your JBoss EAP server: List channels in a stand-alone server: [standalone@localhost:9990 /] installer channel-list List channels in a managed domain: [domain@localhost:9990 /] installer channel-list --host=target-host 9.2. Subscribing to a channel The following example describes how to use the Management CLI to subscribe to a channel in a stand-alone server and a managed domain. Procedure Launch the Management CLI: EAP_HOME/bin/jboss-cli.sh Subscribe to a channel: Subscribe to a channel in a stand-alone server: [standalone@localhost:9990 /] installer channel-add --channel-name=test-channel --manifest=org.test:test-manifest --repositories=mrrc::https://maven.repository.redhat.com Subscribe to a channel in a managed domain: [domain@localhost:9990 /] installer channel-add --host=target-host --channel-name=test-channel --manifest=org.test:test-manifest --repositories=mrrc::https://maven.repository.redhat.com 9.3. Unsubscribing from a channel The following example describes how to use the Management CLI to unsubscribe from a channel in a stand-alone server and a managed domain. Procedure Launch the Management CLI: EAP_HOME/bin/jboss-cli.sh Unsubscribe from a channel: Unsubscribe from a channel in a stand-alone server: [standalone@localhost:9990 /] installer channel-remove --channel-name=test-channel Unsubscribe from a channel in a managed domain: [domain@localhost:9990 /] installer channel-remove --host=target-host --channel-name=test-channel 9.4. Editing a channel subscription The following example describes editing a channel subscription using the Management CLI. Procedure Launch the Management CLI: EAP_HOME/bin/jboss-cli.sh Edit a channel subscription in JBoss EAP: Edit a channel subscription in a stand-alone server: [standalone@localhost:9990 /] installer channel-edit --channel-name=channel-name --manifest=org.new.test:org.new.test-manifest --repositories=mrrc::https://maven.repository.redhat.com ,mvncentral::https://repo.maven.apache.org/maven2/ Edit a channel subscription in a managed domain: [domain@localhost:9990 /] installer channel-edit --host=target-host --channel-name=channel-name --manifest=org.new.test:org.new.test-manifest --repositories=mrrc::https://maven.repository.redhat.com ,mvncentral::https://repo.maven.apache.org/maven2/ 9.5. Exporting a server snapshot You can export the installation manager configuration used by your JBoss EAP server to an archive file. This file can be used to recreate the same server configuration using the jboss-eap-installation-manager tool. This feature ensures easy replication of your server setup in other environments. For more information, see exporting a server snapshot using the jboss-eap-installation-manager . One compelling use case for this capability is when you encounter a technical issue and need to seek support from the JBoss EAP team. 
If you share the exported archive file with the support team, they can recreate your server configuration precisely as it is in your environment. This ensures that the support team can replicate the issue you are facing and provide targeted assistance, ultimately expediting the resolution process. You can also provide more details about your environment by following the steps in the JDR tool documentation. Procedure Launch the Management CLI: EAP_HOME/bin/jboss-cli.sh Export a server snapshot: Export a server snapshot in a stand-alone server: [standalone@localhost:9990 /] attachment save --operation=/core-service=installer:clone-export() Export a server snapshot in a managed domain: [domain@localhost:9990 /] attachment save --operation=/host=target-host/core-service=installer:clone-export() Additional resources JBoss EAP channels in the Installation guide.
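A combined session can make the workflow easier to follow. The following sketch is illustrative only: the channel name, manifest coordinates, and repository URL are hypothetical placeholders rather than values shipped with JBoss EAP:
EAP_HOME/bin/jboss-cli.sh --connect
[standalone@localhost:9990 /] installer channel-add --channel-name=example-channel --manifest=com.example:example-manifest --repositories=example-repo::https://repo.example.com/maven2
[standalone@localhost:9990 /] installer channel-list
[standalone@localhost:9990 /] installer channel-remove --channel-name=example-channel
Listing the channels between the add and remove steps confirms that the subscription took effect before you remove it again.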
[ "EAP_HOME/bin/jboss-cli.sh", "[standalone@localhost:9990 /] installer channel-list", "[domain@localhost:9990 /] installer channel-list --host=target-host", "EAP_HOME/bin/jboss-cli.sh", "[standalone@localhost:9990 /] installer channel-add --channel-name=test-channel --manifest=org.test:test-manifest --repositories=mrrc::https://maven.repository.redhat.com", "[domain@localhost:9990 /] installer channel-add --host=target-host --channel-name=test-channel --manifest=org.test:test-manifest --repositories=mrrc::https://maven.repository.redhat.com", "EAP_HOME/bin/jboss-cli.sh", "[standalone@localhost:9990 /] installer channel-remove --channel-name=test-channel", "[domain@localhost:9990 /] installer channel-remove --host=target-host --channel-name=test-channel", "EAP_HOME/bin/jboss-cli.sh", "[standalone@localhost:9990 /] installer channel-edit --channel-name=channel-name --manifest=org.new.test:org.new.test-manifest --repositories=mrrc::https://maven.repository.redhat.com ,mvncentral::https://repo.maven.apache.org/maven2/", "[domain@localhost:9990 /] installer channel-edit --host=target-host --channel-name=channel-name --manifest=org.new.test:org.new.test-manifest --repositories=mrrc::https://maven.repository.redhat.com ,mvncentral::https://repo.maven.apache.org/maven2/", "EAP_HOME/bin/jboss-cli.sh", "standalone@localhost:9990 /] attachment save --operation=/core-service=installer:clone-export()", "[domain@localhost:9990 /] attachment save --operation=/host=target-host/core-service=installer:clone-export()" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/updating_red_hat_jboss_enterprise_application_platform/managing_your_installation_manager_configurations_using_the_management_cli
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Tell us how we can make it better. Providing documentation feedback in Jira Use the Create Issue form to provide feedback on the documentation for Red Hat OpenStack Services on OpenShift (RHOSO) or earlier releases of Red Hat OpenStack Platform (RHOSP). When you create an issue for RHOSO or RHOSP documents, the issue is recorded in the RHOSO Jira project, where you can track the progress of your feedback. To complete the Create Issue form, ensure that you are logged in to Jira. If you do not have a Red Hat Jira account, you can create an account at https://issues.redhat.com . Click the following link to open a Create Issue page: Create Issue Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create .
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_high_availability_for_instances/proc_providing-feedback-on-red-hat-documentation
Chapter 113. AclRuleClusterResource schema reference
Chapter 113. AclRuleClusterResource schema reference Used in: AclRule The type property is a discriminator that distinguishes use of the AclRuleClusterResource type from AclRuleTopicResource, AclRuleGroupResource, and AclRuleTransactionalIdResource. It must have the value cluster for the type AclRuleClusterResource. Property Property type Description type string Must be cluster.
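To make the schema concrete, the following is a minimal sketch of an AclRule that targets the cluster resource inside a KafkaUser authorization section. The user name, cluster label, and operation are illustrative placeholders; only the resource block reflects the schema described above:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: example-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authorization:
    type: simple
    acls:
      - resource:
          type: cluster    # AclRuleClusterResource: type is its only property
        operations:
          - Describe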
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-aclruleclusterresource-reference
Chapter 63. Apache CXF Provided Interceptors
Chapter 63. Apache CXF Provided Interceptors 63.1. Core Apache CXF Interceptors Inbound Table 63.1, "Core inbound interceptors" lists the core inbound interceptors that are added to all Apache CXF endpoints. Table 63.1. Core inbound interceptors Class Phase Description ServiceInvokerInterceptor INVOKE Invokes the proper method on the service. Outbound The Apache CXF does not add any core interceptors to the outbound interceptor chain by default. The contents of an endpoint's outbound interceptor chain depend on the features in use. 63.2. Front-Ends JAX-WS Table 63.2, "Inbound JAX-WS interceptors" lists the interceptors added to a JAX-WS endpoint's inbound message chain. Table 63.2. Inbound JAX-WS interceptors Class Phase Description HolderInInterceptor PRE_INVOKE Creates holder objects for any out or in/out parameters in the message. WrapperClassInInterceptor POST_LOGICAL Unwraps the parts of a wrapped doc/literal message into the appropriate array of objects. LogicalHandlerInInterceptor PRE_PROTOCOL Passes message processing to the JAX-WS logical handlers used by the endpoint. When the JAX-WS handlers complete, the message is passed along to the interceptor on the inbound chain. SOAPHandlerInterceptor PRE_PROTOCOL Passes message processing to the JAX-WS SOAP handlers used by the endpoint. When the SOAP handlers finish with the message, the message is passed along to the interceptor in the chain. Table 63.3, "Outbound JAX-WS interceptors" lists the interceptors added to a JAX-WS endpoint's outbound message chain. Table 63.3. Outbound JAX-WS interceptors Class Phase Description HolderOutInterceptor PRE_LOGICAL Removes the values of any out and in/out parameters from their holder objects and adds the values to the message's parameter list. WebFaultOutInterceptor PRE_PROTOCOL Processes outbound fault messages. WrapperClassOutInterceptor PRE_LOGICAL Makes sure that wrapped doc/literal messages and rpc/literal messages are properly wrapped before being added to the message. LogicalHandlerOutInterceptor PRE_MARSHAL Passes message processing to the JAX-WS logical handlers used by the endpoint. When the JAX-WS handlers complete, the message is passed along to the interceptor on the outbound chain. SOAPHandlerInterceptor PRE_PROTOCOL Passes message processing to the JAX-WS SOAP handlers used by the endpoint. When the SOAP handlers finish processing the message, it is passed along to the interceptor in the chain. MessageSenderInterceptor PREPARE_SEND Calls back to the Destination object to have it setup the output streams, headers, etc. to prepare the outgoing transport. JAX-RS Table 63.4, "Inbound JAX-RS interceptors" lists the interceptors added to a JAX-RS endpoint's inbound message chain. Table 63.4. Inbound JAX-RS interceptors Class Phase Description JAXRSInInterceptor PRE_STREAM Selects the root resource class, invokes any configured JAX-RS request filters, and determines the method to invoke on the root resource. Important The inbound chain for a JAX-RS endpoint skips straight to the ServiceInvokerInInterceptor interceptor. No other interceptors will be invoked after the JAXRSInInterceptor . Table 63.5, "Outbound JAX-RS interceptors" lists the interceptors added to a JAX-RS endpoint's outbound message chain. Table 63.5. Outbound JAX-RS interceptors Class Phase Description JAXRSOutInterceptor MARSHAL Marshals the response into the proper format for transmission. 63.3. 
Message bindings SOAP Table 63.6, "Inbound SOAP interceptors" lists the interceptors added to a endpoint's inbound message chain when using the SOAP Binding. Table 63.6. Inbound SOAP interceptors Class Phase Description CheckFaultInterceptor POST_PROTOCOL Checks if the message is a fault message. If the message is a fault message, normal processing is aborted and fault processing is started. MustUnderstandInterceptor PRE_PROTOCOL Processes the must understand headers. RPCInInterceptor UNMARSHAL Unmarshals rpc/literal messages. If the message is bare, the message is passed to a BareInInterceptor object to deserialize the message parts. ReadsHeadersInterceptor READ Parses the SOAP headers and stores them in the message object. SoapActionInInterceptor READ Parses the SOAP action header and attempts to find a unique operation for the action. SoapHeaderInterceptor UNMARSHAL Binds the SOAP headers that map to operation parameters to the appropriate objects. AttachmentInInterceptor RECEIVE Parses the mime headers for mime boundaries, finds the root part and resets the input stream to it, and stores the other parts in a collection of Attachment objects. DocLiteralInInterceptor UNMARSHAL Examines the first element in the SOAP body to determine the appropriate operation and calls the data binding to read in the data. StaxInInterceptor POST_STREAM Creates an XMLStreamReader object from the message. URIMappingInterceptor UNMARSHAL Handles the processing of HTTP GET methods. SwAInInterceptor PRE_INVOKE Creates the required MIME handlers for binary SOAP attachments and adds the data to the parameter list. Table 63.7, "Outbound SOAP interceptors" lists the interceptors added to a endpoint's outbound message chain when using the SOAP Binding. Table 63.7. Outbound SOAP interceptors Class Phase Description RPCOutInterceptor MARSHAL Marshals rpc style messages for transmission. SoapHeaderOutFilterInterceptor PRE_LOGICAL Removes all SOAP headers that are marked as inbound only. SoapPreProtocolOutInterceptor POST_LOGICAL Sets up the SOAP version and the SOAP action header. AttachmentOutInterceptor PRE_STREAM Sets up the attachment marshalers and the mime stuff required to process any attachments that might be in the message. BareOutInterceptor MARSHAL Writes the message parts. StaxOutInterceptor PRE_STREAM Creates an XMLStreamWriter object from the message. WrappedOutInterceptor MARSHAL Wraps the outbound message parameters. SoapOutInterceptor WRITE Writes the soap:envelope element and the elements for the header blocks in the message. Also writes an empty soap:body element for the remaining interceptors to populate. SwAOutInterceptor PRE_LOGICAL Removes any binary data that will be packaged as a SOAP attachment and stores it for later processing. XML Table 63.8, "Inbound XML interceptors" lists the interceptors added to a endpoint's inbound message chain when using the XML Binding. Table 63.8. Inbound XML interceptors Class Phase Description AttachmentInInterceptor RECEIVE Parses the mime headers for mime boundaries, finds the root part and resets the input stream to it, and then stores the other parts in a collection of Attachment objects. DocLiteralInInterceptor UNMARSHAL Examines the first element in the message body to determine the appropriate operation and then calls the data binding to read in the data. StaxInInterceptor POST_STREAM Creates an XMLStreamReader object from the message. URIMappingInterceptor UNMARSHAL Handles the processing of HTTP GET methods. 
XMLMessageInInterceptor UNMARSHAL Unmarshals the XML message. Table 63.9, "Outbound XML interceptors" lists the interceptors added to a endpoint's outbound message chain when using the XML Binding. Table 63.9. Outbound XML interceptors Class Phase Description StaxOutInterceptor PRE_STREAM Creates an XMLStreamWriter objects from the message. WrappedOutInterceptor MARSHAL Wraps the outbound message parameters. XMLMessageOutInterceptor MARSHAL Marshals the message for transmission. CORBA Table 63.10, "Inbound CORBA interceptors" lists the interceptors added to a endpoint's inbound message chain when using the CORBA Binding. Table 63.10. Inbound CORBA interceptors Class Phase Description CorbaStreamInInterceptor PRE_STREAM Deserializes the CORBA message. BareInInterceptor UNMARSHAL Deserializes the message parts. Table 63.11, "Outbound CORBA interceptors" lists the interceptors added to a endpoint's outbound message chain when using the CORBA Binding. Table 63.11. Outbound CORBA interceptors Class Phase Description CorbaStreamOutInterceptor PRE_STREAM Serializes the message. BareOutInterceptor MARSHAL Writes the message parts. CorbaStreamOutEndingInterceptor USER_STREAM Creates a streamable object for the message and stores it in the message context. 63.4. Other features Logging Table 63.12, "Inbound logging interceptors" lists the interceptors added to a endpoint's inbound message chain to support logging. Table 63.12. Inbound logging interceptors Class Phase Description LoggingInInterceptor RECEIVE Writes the raw message data to the logging system. Table 63.13, "Outbound logging interceptors" lists the interceptors added to a endpoint's outbound message chain to support logging. Table 63.13. Outbound logging interceptors Class Phase Description LoggingOutInterceptor PRE_STREAM Writes the outbound message to the logging system. For more information about logging see Chapter 19, Apache CXF Logging . WS-Addressing Table 63.14, "Inbound WS-Addressing interceptors" lists the interceptors added to a endpoint's inbound message chain when using WS-Addressing. Table 63.14. Inbound WS-Addressing interceptors Class Phase Description MAPCodec PRE_PROTOCOL Decodes the message addressing properties. Table 63.15, "Outbound WS-Addressing interceptors" lists the interceptors added to a endpoint's outbound message chain when using WS-Addressing. Table 63.15. Outbound WS-Addressing interceptors Class Phase Description MAPAggregator PRE_LOGICAL Aggregates the message addressing properties for a message. MAPCodec PRE_PROTOCOL Encodes the message addressing properties. For more information about WS-Addressing see Chapter 20, Deploying WS-Addressing . WS-RM Important WS-RM relies on WS-Addressing so all of the WS-Addressing interceptors will also be added to the interceptor chains. Table 63.16, "Inbound WS-RM interceptors" lists the interceptors added to a endpoint's inbound message chain when using WS-RM. Table 63.16. Inbound WS-RM interceptors Class Phase Description RMInInterceptor PRE_LOGICAL Handles the aggregation of message parts and acknowledgement messages. RMSoapInterceptor PRE_PROTOCOL Encodes and decodes the WS-RM properties from messages. Table 63.17, "Outbound WS-RM interceptors" lists the interceptors added to a endpoint's outbound message chain when using WS-RM. Table 63.17. Outbound WS-RM interceptors Class Phase Description RMOutInterceptor PRE_LOGICAL Handles the chunking of messages and the transmission of the chunks. Also handles the processing of acknowledgements and resend requests. 
RMSoapInterceptor PRE_PROTOCOL Encodes and decodes the WS-RM properties from messages. For more information about WS-RM, see Chapter 21, Enabling Reliable Messaging.
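The chains described in this appendix can also be extended by hand. As a rough sketch (the proxy variable stands for a JAX-WS port obtained elsewhere, and the logging interceptors are used purely as an example), interceptors are attached to a client's chains programmatically:
import org.apache.cxf.endpoint.Client;
import org.apache.cxf.frontend.ClientProxy;
import org.apache.cxf.interceptor.LoggingInInterceptor;
import org.apache.cxf.interceptor.LoggingOutInterceptor;

// Obtain the CXF client backing an existing JAX-WS proxy
Client client = ClientProxy.getClient(proxy);
// Each added interceptor runs in the phase listed in the tables above
client.getInInterceptors().add(new LoggingInInterceptor());
client.getOutInterceptors().add(new LoggingOutInterceptor());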
null
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/cxfinterceptfeaturesappx
Chapter 2. The Clang compiler
Chapter 2. The Clang compiler Clang is an LLVM compiler front end for the C-based languages C, C++, Objective C/C++, OpenCL, and CUDA. LLVM Toolset is distributed with Clang 17.0.6. 2.1. Prerequisites LLVM Toolset is installed. For more information, see Installing LLVM Toolset. 2.2. Compiling a source file Use the Clang compiler to compile source files as well as assembly language source files. Clang creates an executable binary file as a result of compiling. To be able to debug your code, enable debug information by adding the -g flag to your Clang commands. Note To compile a C++ program, use clang++ instead of clang. Procedure To compile your program, run the following command: On Red Hat Enterprise Linux 8: Replace <binary_file> with the desired name of your output file and <source_file> with the name of your source file. On Red Hat Enterprise Linux 9: Replace <binary_file> with the desired name of your output file and <source_file> with the name of your source file. 2.3. Running a program The Clang compiler creates an executable binary file as a result of compiling. Complete the following steps to execute this file and run your program. Prerequisites Your program is compiled. For more information on how to compile your program, see Compiling a source file. Procedure To run your program, run the following command in the directory containing the executable file: Replace <binary_file> with the name of your executable file. 2.4. Linking object files together By linking object files together, you can recompile only the source files that contain changes instead of your entire project. When you are working on a project that consists of several source files, use the Clang compiler to compile an object file for each of the source files. As a next step, link those object files together. Clang automatically generates an executable file containing your linked object files. After you change a source file, recompile only that file and then link the object files together again. Note To compile a C++ program, use clang++ instead of clang. Procedure To compile a source file to an object file, run the following command: On Red Hat Enterprise Linux 8: Replace <object_file> with the desired name of your object file and <source_file> with the name of your source file. On Red Hat Enterprise Linux 9: Replace <object_file> with the desired name of your object file and <source_file> with the name of your source file. To link object files together, run the following command: On Red Hat Enterprise Linux 8: Replace <output_file> with the desired name of your output file and <object_file> with the names of the object files you want to link. On Red Hat Enterprise Linux 9: Replace <output_file> with the desired name of your output file and <object_file> with the names of the object files you want to link. Important At the moment, certain library features are statically linked into applications built with LLVM Toolset to support their execution on multiple versions of Red Hat Enterprise Linux. This creates a small security risk. Red Hat will issue a security erratum in case you need to rebuild your applications due to this risk. Red Hat advises against statically linking your entire application. 2.5. Additional resources For more information on the Clang compiler, see the official Clang compiler documentation. To display the manual page included in LLVM Toolset, run: Note To compile a C++ program, use clang++ instead of clang. On Red Hat Enterprise Linux 8: On Red Hat Enterprise Linux 9:
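As a short worked example, the compile, link, and run steps from this chapter combine as follows. The file names are illustrative placeholders:
clang -g -c -o main.o main.c
clang -g -c -o util.o util.c
clang -g -o myprogram main.o util.o
./myprogram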
[ "clang -o -g < binary_file > < source_file >", "clang -o -g < binary_file > < source_file >", "./< binary_file >", "clang -o < object_file > -c < source_file >", "clang -o < object_file > -c < source_file >", "clang -o < output_file > < object_file_0 > < object_file_1 >", "clang -o < output_file > < object_file_0 > < object_file_1 >", "man clang", "man clang" ]
https://docs.redhat.com/en/documentation/red_hat_developer_tools/1/html/using_llvm_17.0.6_toolset/assembly_the-clang-compiler_using-llvm-toolset
8.4.6. Using OpenSCAP to Remediate the System
8.4.6. Using OpenSCAP to Remediate the System OpenSCAP allows you to automatically remediate systems that have been found in a non-compliant state. For system remediation, an XCCDF file with instructions is required. The scap-security-guide package contains certain remediation instructions. System remediation consists of the following steps: OpenSCAP performs a regular XCCDF evaluation. An assessment of the results is performed by evaluating the OVAL definitions. Each rule that has failed is marked as a candidate for remediation. OpenSCAP searches for an appropriate fix element, resolves it, prepares the environment, and executes the fix script. Any output of the fix script is captured by OpenSCAP and stored within the rule-result element. The return value of the fix script is stored as well. Whenever OpenSCAP executes a fix script, it immediately evaluates the OVAL definition again (to verify that the fix script has been applied correctly). During this second run, if the OVAL evaluation returns success, the result of the rule is fixed; otherwise it is an error. Detailed results of the remediation are stored in an output XCCDF file. It contains two TestResult elements. The first TestResult element represents the scan prior to the remediation. The second TestResult is derived from the first one and contains remediation results. There are three modes of operation of OpenSCAP with regard to remediation: online, offline, and review. 8.4.6.1. OpenSCAP Online Remediation Online remediation executes fix elements at the time of scanning. Evaluation and remediation are performed as a part of a single command. To enable online remediation, use the --remediate command-line option. For example, to execute online remediation using the scap-security-guide package, run: The output of this command consists of two sections. The first section shows the result of the scan prior to the remediation, and the second section shows the result of the scan after applying the remediation. The second part can contain only fixed and error results. The fixed result indicates that the scan performed after the remediation passed. The error result indicates that even after applying the remediation, the evaluation still does not pass.
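By contrast, review-style remediation produces a fix script that you can inspect before running anything. The following sketch uses the oscap xccdf generate fix subcommand; the result id and file names are illustrative and must match a results file from a previous scan:
~]$ oscap xccdf generate fix --result-id xccdf_org.open-scap_testresult_xccdf_org.ssgproject.content_profile_rht-ccp --output remediation-script.sh scan-xccdf-results.xml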
[ "~]USD oscap xccdf eval --remediate --profile xccdf_org.ssgproject.content_profile_rht-ccp --results scan-xccdf-results.xml /usr/share/xml/scap/ssg/content/ssg-rhel6-ds.xml" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-using_openscap_to_remediate_the_system
Chapter 2. What is deployed with AMQ Streams
Chapter 2. What is deployed with AMQ Streams Apache Kafka components are provided for deployment to OpenShift with the AMQ Streams distribution. The Kafka components are generally run as clusters for availability. A typical deployment incorporating Kafka components might include: Kafka cluster of broker nodes ZooKeeper cluster of replicated ZooKeeper instances Kafka Connect cluster for external data connections Kafka MirrorMaker cluster to mirror the Kafka cluster in a secondary cluster Kafka Exporter to extract additional Kafka metrics data for monitoring Kafka Bridge to make HTTP-based requests to the Kafka cluster Not all of these components are mandatory, though you need Kafka and ZooKeeper as a minimum. Some components can be deployed without Kafka, such as MirrorMaker or Kafka Connect. 2.1. Order of deployment The required order of deployment to an OpenShift cluster is as follows: Deploy the Cluster operator to manage your Kafka cluster Deploy the Kafka cluster with the ZooKeeper cluster, and include the Topic Operator and User Operator in the deployment Optionally deploy: The Topic Operator and User Operator standalone if you did not deploy them with the Kafka cluster Kafka Connect Kafka MirrorMaker Kafka Bridge Components for the monitoring of metrics 2.2. Additional deployment configuration options The deployment procedures in this guide describe a deployment using the example installation YAML files provided with AMQ Streams. The procedures highlight any important configuration considerations, but they do not describe all the configuration options available. You can use custom resources to refine your deployment. You may wish to review the configuration options available for Kafka components before you deploy AMQ Streams. For more information on the configuration through custom resources, see Deployment configuration in the Using AMQ Streams on OpenShift guide. 2.2.1. Securing Kafka On deployment, the Cluster Operator automatically sets up TLS certificates for data encryption and authentication within your cluster. AMQ Streams provides additional configuration options for encryption , authentication and authorization , which are described in the Using AMQ Streams on OpenShift guide: Secure data exchange between the Kafka cluster and clients by Managing secure access to Kafka . Configure your deployment to use an authorization server to provide OAuth 2.0 authentication and OAuth 2.0 authorization . Secure Kafka using your own certificates . 2.2.2. Monitoring your deployment AMQ Streams supports additional deployment options to monitor your deployment. Extract metrics and monitor Kafka components by deploying Prometheus and Grafana with your Kafka cluster . Extract additional metrics, particularly related to monitoring consumer lag, by deploying Kafka Exporter with your Kafka cluster . Track messages end-to-end by setting up distributed tracing , as described in the Using AMQ Streams on OpenShift guide.
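The deployment steps themselves are performed with standard OpenShift tooling. As a hedged sketch only (the namespace is a placeholder, and the paths follow the example installation files referenced above, which may be laid out differently in your distribution), the first two steps might look like:
oc apply -f install/cluster-operator -n my-kafka-project
oc apply -f examples/kafka/kafka-persistent.yaml -n my-kafka-project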
null
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q2/html/deploying_and_upgrading_amq_streams_on_openshift/deploy-options_str
Chapter 6. Configuring the discovery image
Chapter 6. Configuring the discovery image The Assisted Installer uses an initial image to run an agent that performs hardware and network validations before attempting to install OpenShift Container Platform. You can use Ignition to customize the discovery image. Note Modifications to the discovery image will not persist in the system. 6.1. Creating an Ignition configuration file Ignition is a low-level system configuration utility, which is part of the temporary initial root filesystem, the initramfs . When Ignition runs on the first boot, it finds configuration data in the Ignition configuration file and applies it to the host before switch_root is called to pivot to the host's root filesystem. Ignition uses a JSON configuration specification file to represent the set of changes that occur on the first boot. Important Ignition versions newer than 3.2 are not supported, and will raise an error. Procedure Create an Ignition file and specify the configuration specification version: $ vim ~/ignition.conf { "ignition": { "version": "3.1.0" } } Add configuration data to the Ignition file. For example, add a password to the core user. Generate a password hash: $ openssl passwd -6 Add the generated password hash to the core user: { "ignition": { "version": "3.1.0" }, "passwd": { "users": [ { "name": "core", "passwordHash": "$6$spam$M5LGSMGyVD.9XOboxcwrsnwNdF4irpJdAWy.1Ry55syyUiUssIzIAHaOrUHr2zg6ruD8YNBPW9kW0H8EnKXyc1" } ] } } Save the Ignition file and export it to the IGNITION_FILE variable: $ export IGNITION_FILE=~/ignition.conf 6.2. Modifying the discovery image with Ignition Once you create an Ignition configuration file, you can modify the discovery image by patching the infrastructure environment using the Assisted Installer API. Prerequisites If you used the web console to create the cluster, you have set up the API authentication. You have an infrastructure environment and you have exported the infrastructure environment ID to the INFRA_ENV_ID variable. You have a valid Ignition file and have exported the file name as $IGNITION_FILE . Procedure Create an ignition_config_override JSON object and redirect it to a file: $ jq -n \ --arg IGNITION "$(jq -c . $IGNITION_FILE)" \ '{ignition_config_override: $IGNITION}' \ > discovery_ignition.json Refresh the API token: $ source refresh-token Patch the infrastructure environment: $ curl \ --header "Authorization: Bearer $API_TOKEN" \ --header "Content-Type: application/json" \ -XPATCH \ -d @discovery_ignition.json \ https://api.openshift.com/api/assisted-install/v2/infra-envs/$INFRA_ENV_ID | jq The ignition_config_override object references the Ignition file. Download the updated discovery image.
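To verify that the override was stored, you can read the infrastructure environment back through the same API. This sketch reuses the variables defined above; the jq filter simply extracts the stored field:
$ curl -s --header "Authorization: Bearer $API_TOKEN" https://api.openshift.com/api/assisted-install/v2/infra-envs/$INFRA_ENV_ID | jq '.ignition_config_override'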
[ "vim ~/ignition.conf", "{ \"ignition\": { \"version\": \"3.1.0\" } }", "openssl passwd -6", "{ \"ignition\": { \"version\": \"3.1.0\" }, \"passwd\": { \"users\": [ { \"name\": \"core\", \"passwordHash\": \"USD6USDspamUSDM5LGSMGyVD.9XOboxcwrsnwNdF4irpJdAWy.1Ry55syyUiUssIzIAHaOrUHr2zg6ruD8YNBPW9kW0H8EnKXyc1\" } ] } }", "export IGNITION_FILE=~/ignition.conf", "jq -n --arg IGNITION \"USD(jq -c . USDIGNITION_FILE)\" '{ignition_config_override: USDIGNITION}' > discovery_ignition.json", "source refresh-token", "curl --header \"Authorization: Bearer USDAPI_TOKEN\" --header \"Content-Type: application/json\" -XPATCH -d @discovery_ignition.json https://api.openshift.com/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID | jq" ]
https://docs.redhat.com/en/documentation/assisted_installer_for_openshift_container_platform/2025/html/installing_openshift_container_platform_with_the_assisted_installer/assembly_configuring-the-discovery-image
Providing feedback on Red Hat JBoss Web Server documentation
Providing feedback on Red Hat JBoss Web Server documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Create creates and routes the issue to the appropriate documentation team.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_web_server/6.0/html/red_hat_jboss_web_server_for_openshift/providing-direct-documentation-feedback_jws-on-openshift
Red Hat OpenShift Data Foundation architecture
Red Hat OpenShift Data Foundation architecture Red Hat OpenShift Data Foundation 4.18 Overview of OpenShift Data Foundation architecture and the roles that the components and services perform. Red Hat Storage Documentation Team Abstract This document provides an overview of the OpenShift Data Foundation architecture.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/red_hat_openshift_data_foundation_architecture/index
Chapter 80. Mail
Chapter 80. Mail Both producer and consumer are supported The Mail component provides access to Email via Spring's Mail support and the underlying JavaMail system. Note POP3 or IMAP POP3 has some limitations and end users are encouraged to use IMAP if possible. Note Using mock-mail for testing You can use a mock framework for unit testing, which allows you to test without the need for a real mail server. However you should remember to not include the mock-mail when you go into production or other environments where you need to send mails to a real mail server. Just the presence of the mock-javamail.jar on the classpath means that it will kick in and avoid sending the mails. 80.1. Dependencies When using camel-mail with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-mail-starter</artifactId> </dependency> 80.2. URI format Mail endpoints can have one of the following URI formats (for the protocols, SMTP, POP3, or IMAP, respectively): The mail component also supports secure variants of these protocols (layered over SSL). You can enable the secure protocols by adding s to the scheme: 80.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 80.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 80.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allows you to externalize the configuration from your code, giving you more flexible and reusable code. 80.4. Component Options The Mail component supports 43 options, which are listed below. Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean closeFolder (consumer) Whether the consumer should close the folder after polling. Setting this option to false and having disconnect=false as well, then the consumer keep the folder open between polls. true boolean copyTo (consumer) After processing a mail message, it can be copied to a mail folder with the given name. 
You can override this configuration value, with a header with the key copyTo, allowing you to copy messages to folder names configured at runtime. String decodeFilename (consumer) If set to true, the MimeUtility.decodeText method will be used to decode the filename. This is similar to setting JVM system property mail.mime.encodefilename. false boolean delete (consumer) Deletes the messages after they have been processed. This is done by setting the DELETED flag on the mail message. If false, the SEEN flag is set instead. As of Camel 2.10 you can override this configuration option by setting a header with the key delete to determine if the mail should be deleted or not. false boolean disconnect (consumer) Whether the consumer should disconnect after polling. If enabled this forces Camel to connect on each poll. false boolean handleFailedMessage (consumer) If the mail consumer cannot retrieve a given mail message, then this option allows to handle the caused exception by the consumer's error handler. By enable the bridge error handler on the consumer, then the Camel routing error handler can handle the exception instead. The default behavior would be the consumer throws an exception and no mails from the batch would be able to be routed by Camel. false boolean mimeDecodeHeaders (consumer) This option enables transparent MIME decoding and unfolding for mail headers. false boolean moveTo (consumer) After processing a mail message, it can be moved to a mail folder with the given name. You can override this configuration value, with a header with the key moveTo, allowing you to move messages to folder names configured at runtime. String peek (consumer) Will mark the javax.mail.Message as peeked before processing the mail message. This applies to IMAPMessage messages types only. By using peek the mail will not be eager marked as SEEN on the mail server, which allows us to rollback the mail message if there is an error processing in Camel. true boolean skipFailedMessage (consumer) If the mail consumer cannot retrieve a given mail message, then this option allows to skip the message and move on to retrieve the mail message. The default behavior would be the consumer throws an exception and no mails from the batch would be able to be routed by Camel. false boolean unseen (consumer) Whether to limit by unseen mails only. true boolean fetchSize (consumer (advanced)) Sets the maximum number of messages to consume during a poll. This can be used to avoid overloading a mail server, if a mailbox folder contains a lot of messages. Default value of -1 means no fetch size and all messages will be consumed. Setting the value to 0 is a special corner case, where Camel will not consume any messages at all. -1 int folderName (consumer (advanced)) The folder to poll. INBOX String mapMailMessage (consumer (advanced)) Specifies whether Camel should map the received mail message to Camel body/headers/attachments. If set to true, the body of the mail message is mapped to the body of the Camel IN message, the mail headers are mapped to IN headers, and the attachments to Camel IN attachment message. If this option is set to false then the IN message contains a raw javax.mail.Message. You can retrieve this raw message by calling exchange.getIn().getBody(javax.mail.Message.class). true boolean bcc (producer) Sets the BCC email address. Separate multiple email addresses with comma. String cc (producer) Sets the CC email address. Separate multiple email addresses with comma. String from (producer) The from email address. 
camel@localhost String lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean replyTo (producer) The Reply-To recipients (the receivers of the response mail). Separate multiple email addresses with a comma. String subject (producer) The Subject of the message being sent. Note: Setting the subject in the header takes precedence over this option. String to (producer) Sets the To email address. Separate multiple email addresses with comma. String javaMailSender (producer (advanced)) To use a custom org.apache.camel.component.mail.JavaMailSender for sending emails. JavaMailSender additionalJavaMailProperties (advanced) Sets additional java mail properties, that will append/override any default properties that is set based on all the other options. This is useful if you need to add some special options but want to keep the others as is. Properties alternativeBodyHeader (advanced) Specifies the key to an IN message header that contains an alternative email body. For example, if you send emails in text/html format and want to provide an alternative mail body for non-HTML email clients, set the alternative mail body with this key as a header. CamelMailAlternativeBody String attachmentsContentTransferEncodingResolver (advanced) To use a custom AttachmentsContentTransferEncodingResolver to resolve what content-type-encoding to use for attachments. AttachmentsContentTransferEncodingResolver authenticator (advanced) The authenticator for login. If set then the password and username are ignored. Can be used for tokens which can expire and therefore must be read dynamically. MailAuthenticator autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean configuration (advanced) Sets the Mail configuration. MailConfiguration connectionTimeout (advanced) The connection timeout in milliseconds. 30000 int contentType (advanced) The mail message content type. Use text/html for HTML mails. text/plain String contentTypeResolver (advanced) Resolver to determine Content-Type for file attachments. ContentTypeResolver debugMode (advanced) Enable debug mode on the underlying mail framework. The SUN Mail framework logs the debug messages to System.out by default. false boolean ignoreUnsupportedCharset (advanced) Option to let Camel ignore unsupported charset in the local JVM when sending mails. If the charset is unsupported then charset=XXX (where XXX represents the unsupported charset) is removed from the content-type and it relies on the platform default instead. false boolean ignoreUriScheme (advanced) Option to let Camel ignore unsupported charset in the local JVM when sending mails. 
If the charset is unsupported then charset=XXX (where XXX represents the unsupported charset) is removed from the content-type and it relies on the platform default instead. false boolean javaMailProperties (advanced) Sets the java mail options. Will clear any default properties and only use the properties provided for this method. Properties session (advanced) Specifies the mail session that camel should use for all mail interactions. Useful in scenarios where mail sessions are created and managed by some other resource, such as a JavaEE container. When using a custom mail session, then the hostname and port from the mail session will be used (if configured on the session). Session useInlineAttachments (advanced) Whether to use disposition inline or attachment. false boolean headerFilterStrategy (filter) To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. HeaderFilterStrategy password (security) The password for login. See also setAuthenticator(MailAuthenticator). String sslContextParameters (security) To configure security using SSLContextParameters. SSLContextParameters useGlobalSslContextParameters (security) Enable usage of global SSL context parameters. false boolean username (security) The username for login. See also setAuthenticator(MailAuthenticator). String 80.5. Endpoint Options The Mail endpoint is configured using URI syntax: with the following path and query parameters: 80.5.1. Path Parameters (2 parameters) Name Description Default Type host (common) Required The mail server host name. String port (common) The port number of the mail server. int 80.5.2. Query Parameters (66 parameters) Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean closeFolder (consumer) Whether the consumer should close the folder after polling. Setting this option to false and having disconnect=false as well, then the consumer keep the folder open between polls. true boolean copyTo (consumer) After processing a mail message, it can be copied to a mail folder with the given name. You can override this configuration value, with a header with the key copyTo, allowing you to copy messages to folder names configured at runtime. String decodeFilename (consumer) If set to true, the MimeUtility.decodeText method will be used to decode the filename. This is similar to setting JVM system property mail.mime.encodefilename. false boolean delete (consumer) Deletes the messages after they have been processed. This is done by setting the DELETED flag on the mail message. If false, the SEEN flag is set instead. As of Camel 2.10 you can override this configuration option by setting a header with the key delete to determine if the mail should be deleted or not. false boolean disconnect (consumer) Whether the consumer should disconnect after polling. If enabled this forces Camel to connect on each poll. false boolean handleFailedMessage (consumer) If the mail consumer cannot retrieve a given mail message, then this option allows to handle the caused exception by the consumer's error handler. 
By enable the bridge error handler on the consumer, then the Camel routing error handler can handle the exception instead. The default behavior would be the consumer throws an exception and no mails from the batch would be able to be routed by Camel. false boolean maxMessagesPerPoll (consumer) Specifies the maximum number of messages to gather per poll. By default, no maximum is set. Can be used to set a limit of e.g. 1000 to avoid downloading thousands of files when the server starts up. Set a value of 0 or negative to disable this option. int mimeDecodeHeaders (consumer) This option enables transparent MIME decoding and unfolding for mail headers. false boolean moveTo (consumer) After processing a mail message, it can be moved to a mail folder with the given name. You can override this configuration value, with a header with the key moveTo, allowing you to move messages to folder names configured at runtime. String peek (consumer) Will mark the javax.mail.Message as peeked before processing the mail message. This applies to IMAPMessage messages types only. By using peek the mail will not be eager marked as SEEN on the mail server, which allows us to rollback the mail message if there is an error processing in Camel. true boolean sendEmptyMessageWhenIdle (consumer) If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. false boolean skipFailedMessage (consumer) If the mail consumer cannot retrieve a given mail message, then this option allows to skip the message and move on to retrieve the mail message. The default behavior would be the consumer throws an exception and no mails from the batch would be able to be routed by Camel. false boolean unseen (consumer) Whether to limit by unseen mails only. true boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern fetchSize (consumer (advanced)) Sets the maximum number of messages to consume during a poll. This can be used to avoid overloading a mail server, if a mailbox folder contains a lot of messages. Default value of -1 means no fetch size and all messages will be consumed. Setting the value to 0 is a special corner case, where Camel will not consume any messages at all. -1 int folderName (consumer (advanced)) The folder to poll. INBOX String mailUidGenerator (consumer (advanced)) A pluggable MailUidGenerator that allows to use custom logic to generate UUID of the mail message. MailUidGenerator mapMailMessage (consumer (advanced)) Specifies whether Camel should map the received mail message to Camel body/headers/attachments. If set to true, the body of the mail message is mapped to the body of the Camel IN message, the mail headers are mapped to IN headers, and the attachments to Camel IN attachment message. If this option is set to false then the IN message contains a raw javax.mail.Message. You can retrieve this raw message by calling exchange.getIn().getBody(javax.mail.Message.class). 
true boolean pollStrategy (consumer (advanced)) A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. PollingConsumerPollStrategy postProcessAction (consumer (advanced)) Refers to an MailBoxPostProcessAction for doing post processing tasks on the mailbox once the normal processing ended. MailBoxPostProcessAction bcc (producer) Sets the BCC email address. Separate multiple email addresses with comma. String cc (producer) Sets the CC email address. Separate multiple email addresses with comma. String from (producer) The from email address. camel@localhost String lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean replyTo (producer) The Reply-To recipients (the receivers of the response mail). Separate multiple email addresses with a comma. String subject (producer) The Subject of the message being sent. Note: Setting the subject in the header takes precedence over this option. String to (producer) Sets the To email address. Separate multiple email addresses with comma. String javaMailSender (producer (advanced)) To use a custom org.apache.camel.component.mail.JavaMailSender for sending emails. JavaMailSender additionalJavaMailProperties (advanced) Sets additional java mail properties, that will append/override any default properties that is set based on all the other options. This is useful if you need to add some special options but want to keep the others as is. Properties alternativeBodyHeader (advanced) Specifies the key to an IN message header that contains an alternative email body. For example, if you send emails in text/html format and want to provide an alternative mail body for non-HTML email clients, set the alternative mail body with this key as a header. CamelMailAlternativeBody String attachmentsContentTransferEncodingResolver (advanced) To use a custom AttachmentsContentTransferEncodingResolver to resolve what content-type-encoding to use for attachments. AttachmentsContentTransferEncodingResolver authenticator (advanced) The authenticator for login. If set then the password and username are ignored. Can be used for tokens which can expire and therefore must be read dynamically. MailAuthenticator binding (advanced) Sets the binding used to convert from a Camel message to and from a Mail message. MailBinding connectionTimeout (advanced) The connection timeout in milliseconds. 30000 int contentType (advanced) The mail message content type. Use text/html for HTML mails. text/plain String contentTypeResolver (advanced) Resolver to determine Content-Type for file attachments. ContentTypeResolver debugMode (advanced) Enable debug mode on the underlying mail framework. The SUN Mail framework logs the debug messages to System.out by default. 
false boolean headerFilterStrategy (advanced) To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter headers. HeaderFilterStrategy ignoreUnsupportedCharset (advanced) Option to let Camel ignore unsupported charset in the local JVM when sending mails. If the charset is unsupported then charset=XXX (where XXX represents the unsupported charset) is removed from the content-type and it relies on the platform default instead. false boolean ignoreUriScheme (advanced) Option to let Camel ignore unsupported charset in the local JVM when sending mails. If the charset is unsupported then charset=XXX (where XXX represents the unsupported charset) is removed from the content-type and it relies on the platform default instead. false boolean javaMailProperties (advanced) Sets the java mail options. Will clear any default properties and only use the properties provided for this method. Properties session (advanced) Specifies the mail session that camel should use for all mail interactions. Useful in scenarios where mail sessions are created and managed by some other resource, such as a JavaEE container. When using a custom mail session, then the hostname and port from the mail session will be used (if configured on the session). Session useInlineAttachments (advanced) Whether to use disposition inline or attachment. false boolean idempotentRepository (filter) A pluggable repository org.apache.camel.spi.IdempotentRepository which allows to cluster consuming from the same mailbox, and let the repository coordinate whether a mail message is valid for the consumer to process. By default no repository is in use. IdempotentRepository idempotentRepositoryRemoveOnCommit (filter) When using idempotent repository, then when the mail message has been successfully processed and is committed, should the message id be removed from the idempotent repository (default) or be kept in the repository. By default its assumed the message id is unique and has no value to be kept in the repository, because the mail message will be marked as seen/moved or deleted to prevent it from being consumed again. And therefore having the message id stored in the idempotent repository has little value. However this option allows to store the message id, for whatever reason you may have. true boolean searchTerm (filter) Refers to a javax.mail.search.SearchTerm which allows to filter mails based on search criteria such as subject, body, from, sent after a certain date etc. SearchTerm backoffErrorThreshold (scheduler) The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in. int backoffIdleThreshold (scheduler) The number of subsequent idle polls that should happen before the backoffMultipler should kick-in. int backoffMultiplier (scheduler) To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. int delay (scheduler) Milliseconds before the poll. 60000 long greedy (scheduler) If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the run polled 1 or more messages. false boolean initialDelay (scheduler) Milliseconds before the first poll starts. 1000 long repeatCount (scheduler) Specifies a maximum limit of number of fires. 
So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever. 0 long runLoggingLevel (scheduler) The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. Enum values: TRACE DEBUG INFO WARN ERROR OFF TRACE LoggingLevel scheduledExecutorService (scheduler) Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. ScheduledExecutorService scheduler (scheduler) To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler. none Object schedulerProperties (scheduler) To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler. Map startScheduler (scheduler) Whether the scheduler should be auto started. true boolean timeUnit (scheduler) Time unit for initialDelay and delay options. Enum values: NANOSECONDS MICROSECONDS MILLISECONDS SECONDS MINUTES HOURS DAYS MILLISECONDS TimeUnit useFixedDelay (scheduler) Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. true boolean password (security) The password for login. See also setAuthenticator(MailAuthenticator). String sslContextParameters (security) To configure security using SSLContextParameters. SSLContextParameters username (security) The username for login. See also setAuthenticator(MailAuthenticator). String sortTerm (sort) Sorting order for messages. Only natively supported for IMAP. Emulated to some degree when using POP3 or when IMAP server does not have the SORT capability. SortTerm[] 80.5.3. Sample endpoints Typically, you specify a URI with login credentials as follows (taking SMTP as an example): Alternatively, it is possible to specify both the user name and the password as query options: For example: 80.5.4. Component alias names IMAP IMAPs POP3s SMTP SMTPs 80.5.5. Default ports Default port numbers are supported. If the port number is omitted, Camel determines the port number to use based on the protocol. Protocol Default Port Number SMTP 25 SMTPS 465 POP3 110 POP3S 995 IMAP 143 IMAPS 993 80.6. SSL support The underlying mail framework is responsible for providing SSL support. You may either configure SSL/TLS support by completely specifying the necessary Java Mail API configuration options, or you may provide a configured SSLContextParameters through the component or endpoint configuration. 80.6.1. Using the JSSE Configuration Utility The mail component supports SSL/TLS configuration through the Camel JSSE Configuration Utility . This utility greatly decreases the amount of component specific code you need to write and is configurable at the endpoint and component levels. The following examples demonstrate how to use the utility with the mail component. Programmatic configuration of the endpoint KeyStoreParameters ksp = new KeyStoreParameters(); ksp.setResource("/users/home/server/truststore.jks"); ksp.setPassword("keystorePassword"); TrustManagersParameters tmp = new TrustManagersParameters(); tmp.setKeyStore(ksp); SSLContextParameters scp = new SSLContextParameters(); scp.setTrustManagers(tmp); Registry registry = ... registry.bind("sslContextParameters", scp); ... from(...) .to("smtps://[email protected]&password=password&sslContextParameters=#sslContextParameters"); Spring DSL based configuration of endpoint ... 
<camel:sslContextParameters id="sslContextParameters"> <camel:trustManagers> <camel:keyStore resource="/users/home/server/truststore.jks" password="keystorePassword"/> </camel:trustManagers> </camel:sslContextParameters>... ... <to uri="smtps://mymailserver.com?username=[email protected]&password=password&sslContextParameters=#sslContextParameters"/>... 80.6.2. Configuring JavaMail Directly Camel uses Jakarta JavaMail, which only trusts certificates issued by well known Certificate Authorities (the default JVM trust configuration). If you issue your own certificates, you have to import the CA certificates into the JVM's Java trust/key store files, or override the default JVM trust/key store files (see SSLNOTES.txt in JavaMail for details). 80.7. Mail Message Content Camel uses the message exchange's IN body as the MimeMessage text content. The body is converted to String.class. Camel copies all of the exchange's IN headers to the MimeMessage headers. The subject of the MimeMessage can be configured using a header property on the IN message; a minimal sketch of this appears after Section 80.11 below. The same applies for other MimeMessage headers such as recipients, so you can use a header property as To: When using the MailProducer to send the mail to the server, you should be able to get the message id of the MimeMessage with the key CamelMailMessageId from the Camel message header. 80.8. Headers take precedence over pre-configured recipients The recipients specified in the message headers always take precedence over recipients pre-configured in the endpoint URI. The idea is that if you provide any recipients in the message headers, that is what you get. The recipients pre-configured in the endpoint URI are treated as a fallback. In the sample code below, the email message is sent to [email protected] , because it takes precedence over the pre-configured recipient, [email protected] . Any CC and BCC settings in the endpoint URI are also ignored and those recipients will not receive any mail. The choice between headers and pre-configured settings is all or nothing: the mail component either takes the recipients exclusively from the headers or exclusively from the pre-configured settings. It is not possible to mix and match headers and pre-configured settings. Map<String, Object> headers = new HashMap<String, Object>(); headers.put("to", "[email protected]"); template.sendBodyAndHeaders("smtp://admin@mymailserver.com?to=[email protected]", "Hello World", headers); 80.9. Multiple recipients for easier configuration It is possible to set multiple recipients using a comma-separated or a semicolon-separated list. This applies both to header settings and to settings in an endpoint URI. For example: Map<String, Object> headers = new HashMap<String, Object>(); headers.put("to", "[email protected] ; [email protected] ; [email protected]"); The preceding example uses a semicolon, ; , as the separator character. 80.10. Setting sender name and email You can specify recipients in the format, name <email> , to include both the name and the email address of the recipient. For example, you define the following headers on the Message: Map<String, Object> headers = new HashMap<String, Object>(); headers.put("To", "Claus Ibsen <[email protected]>"); headers.put("From", "James Strachan <[email protected]>"); headers.put("Subject", "Camel is cool"); 80.11. JavaMail API (ex SUN JavaMail) JavaMail API is used under the hood for consuming and producing mails. We encourage end-users to consult these references when using either the POP3 or IMAP protocol. Note particularly that POP3 has a much more limited set of features than IMAP.
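As referenced in Section 80.7 above, the following is a minimal sketch of setting the mail subject through a message header when sending with a ProducerTemplate; the endpoint address, credentials, and context setup are illustrative assumptions rather than part of the original example:

import org.apache.camel.CamelContext;
import org.apache.camel.ProducerTemplate;
import org.apache.camel.impl.DefaultCamelContext;

public class MailSubjectHeaderSample {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        context.start();
        ProducerTemplate template = context.createProducerTemplate();

        // The "subject" header on the IN message becomes the MimeMessage subject;
        // the same mechanism works for headers such as "To", "From" and "CC".
        template.sendBodyAndHeader(
                "smtp://admin@mymailserver.com?password=secret", // illustrative endpoint
                "Hello World",                                   // becomes the mail body
                "subject", "Camel is cool");                     // becomes the mail subject

        context.stop();
    }
}

The same call with sendBodyAndHeaders and a header map can set several MimeMessage headers at once, as the samples in Sections 80.8 to 80.10 show.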
For the references mentioned above, see the JavaMail POP3 API, the JavaMail IMAP API and, more generally, the documentation about the MAIL Flags. 80.12. Samples We start with a simple route that sends the messages received from a JMS queue as emails. The email account is the admin account on mymailserver.com . from("jms://queue:subscription").to("smtp://[email protected]?password=secret"); In the next sample, we poll a mailbox for new emails once every minute. from("imap://[email protected]?password=secret&unseen=true&delay=60000") .to("seda://mails"); 80.13. Sending mail with attachment sample Note Attachments are not supported by all Camel components. The Attachments API is based on the Java Activation Framework and is generally only used by the Mail API. Since many of the other Camel components do not support attachments, the attachments could potentially be lost as they propagate along the route. The rule of thumb, therefore, is to add attachments just before sending a message to the mail endpoint. The mail component supports attachments. In the sample below, we send a mail message containing a plain text message with a logo file attachment. 80.14. SSL sample In this sample, we want to poll our Google mail inbox for mails. To download mail onto a local mail client, Google mail requires you to enable and configure SSL. This is done by logging into your Google mail account and changing your settings to allow IMAP access. Google has extensive documentation on how to do this. from("imaps://imap.gmail.com?username=[email protected]&password=YOUR_PASSWORD" + "&delete=false&unseen=true&delay=60000").to("log:newmail"); The preceding route polls the Google mail inbox for new mails once every minute and logs the received messages to the newmail logger category. Running the sample with DEBUG logging enabled, we can monitor the progress in the logs: 2008-05-08 06:32:09,640 DEBUG MailConsumer - Connecting to MailStore imaps//imap.gmail.com:993 (SSL enabled), folder=INBOX 2008-05-08 06:32:11,203 DEBUG MailConsumer - Polling mailfolder: imaps//imap.gmail.com:993 (SSL enabled), folder=INBOX 2008-05-08 06:32:11,640 DEBUG MailConsumer - Fetching 1 messages. Total 1 messages. 2008-05-08 06:32:12,171 DEBUG MailConsumer - Processing message: messageNumber=[332], from=[James Bond <[email protected]>], [email protected]], subject=[... 2008-05-08 06:32:12,187 INFO newmail - Exchange[MailMessage: messageNumber=[332], from=[James Bond <[email protected]>], [email protected]], subject=[... 80.15. Consuming mails with attachment sample In this sample we poll a mailbox and store all attachments from the mails as files. First, we define a route to poll the mailbox.
As this sample is based on Google mail, it uses the same route as shown in the SSL sample: from("imaps://imap.gmail.com?username=[email protected]&password=YOUR_PASSWORD" + "&delete=false&unseen=true&delay=60000").process(new MyMailProcessor()); Instead of logging the mail we use a processor where we can process the mail from Java code: public void process(Exchange exchange) throws Exception { // the API is a bit clunky so we need to loop AttachmentMessage attachmentMessage = exchange.getMessage(AttachmentMessage.class); Map<String, DataHandler> attachments = attachmentMessage.getAttachments(); if (attachments.size() > 0) { for (String name : attachments.keySet()) { DataHandler dh = attachments.get(name); // get the file name String filename = dh.getName(); // get the content and convert it to byte[] byte[] data = exchange.getContext().getTypeConverter() .convertTo(byte[].class, dh.getInputStream()); // write the data to a file FileOutputStream out = new FileOutputStream(filename); out.write(data); out.flush(); out.close(); } } } As you can see, the API for handling attachments is a bit clunky, but it gives you the javax.activation.DataHandler so that you can handle the attachments using the standard API. 80.16. How to split a mail message with attachments In this example we consume mail messages which may have a number of attachments. What we want to do is to use the Splitter EIP per individual attachment, to process the attachments separately. For example, if the mail message has five attachments, we want the Splitter to process five messages, each having a single attachment. To do this we need to provide a custom Expression to the Splitter that provides a List<Message> containing the five messages, each with a single attachment. The code is provided out of the box in Camel 2.10 onwards in the camel-mail component, in the class org.apache.camel.component.mail.SplitAttachmentsExpression , which you can find in the camel-mail source code. In the Camel route you then need to use this Expression in the Splitter. If you use the XML DSL, you need to declare a method call expression in the Splitter as shown below <split> <method beanType="org.apache.camel.component.mail.SplitAttachmentsExpression"/> <to uri="mock:split"/> </split> You can also split the attachments as byte[] to be stored as the message body. This is done by creating the expression with boolean true SplitAttachmentsExpression split = new SplitAttachmentsExpression(true); And then use the expression with the splitter EIP. 80.17. Using custom SearchTerm You can configure a searchTerm on the MailEndpoint which allows you to filter out unwanted mails. For example, to filter mails whose subject or body contains Camel, you can do as follows: <route> <from uri="imaps://mymailserver?username=foo&password=secret&searchTerm.subjectOrBody=Camel"/> <to uri="bean:myBean"/> </route> Notice we use "searchTerm.subjectOrBody" as the parameter key to indicate that we want to search the mail subject or body for the word "Camel". The class org.apache.camel.component.mail.SimpleSearchTerm has a number of options you can configure. To get the new unseen emails going 24 hours back in time, use the "now-24h" syntax as follows: <route> <from uri="imaps://mymailserver?username=foo&password=secret&searchTerm.fromSentDate=now-24h"/> <to uri="bean:myBean"/> </route> You can have multiple searchTerm options in the endpoint URI configuration.
They are then combined using the AND operator, so all conditions must match. For example, to get the unseen emails from the last 24 hours that have Camel in the mail subject, you can do: <route> <from uri="imaps://mymailserver?username=foo&password=secret&searchTerm.subject=Camel&searchTerm.fromSentDate=now-24h"/> <to uri="bean:myBean"/> </route> The SimpleSearchTerm is designed to be easily configurable from a POJO, so you can also configure it using a <bean> style in XML <bean id="mySearchTerm" class="org.apache.camel.component.mail.SimpleSearchTerm"> <property name="subject" value="Order"/> <property name="to" value="[email protected]"/> <property name="fromSentDate" value="now"/> </bean> You can then refer to this bean, using #beanId in your Camel route as shown: <route> <from uri="imaps://mymailserver?username=foo&password=secret&searchTerm=#mySearchTerm"/> <to uri="bean:myBean"/> </route> In Java there is a builder class to build compound SearchTerms using the org.apache.camel.component.mail.SearchTermBuilder class. This allows you to build complex terms such as: // we just want the unseen mails which is not spam SearchTermBuilder builder = new SearchTermBuilder(); builder.unseen().body(Op.not, "Spam").subject(Op.not, "Spam") // which was sent from either foo or bar .from("[email protected]").from(Op.or, "[email protected]"); // .. and we could continue building the terms SearchTerm term = builder.build(); 80.18. Polling Optimization The maxMessagesPerPoll and fetchSize parameters allow you to restrict the number of messages that should be processed for each poll. These parameters should help to prevent bad performance when working with folders that contain a lot of messages. In previous versions these parameters were evaluated too late, so that big mailboxes could still cause performance problems. With Camel 3.1 these parameters are evaluated earlier during the poll to avoid these problems. 80.19. Using headers with additional Java Mail Sender properties When sending mails, you can provide dynamic java mail properties for the JavaMailSender from the Exchange as message headers with keys starting with java.smtp. . You can set any of the java.smtp properties, which you can find in the Java Mail documentation. For example, to provide a dynamic uuid in java.smtp.from (SMTP MAIL command): .setHeader("from", constant("[email protected]")); .setHeader("java.smtp.from", method(UUID.class, "randomUUID")); .to("smtp://mymailserver:1234"); Note This is only supported when not using a custom JavaMailSender . 80.20. Spring Boot Auto-Configuration The component supports 50 options, which are listed below. Name Description Default Type camel.component.mail.additional-java-mail-properties Sets additional java mail properties that will append/override any default properties that are set based on all the other options. This is useful if you need to add some special options but want to keep the others as is. The option is a java.util.Properties type. Properties camel.component.mail.alternative-body-header Specifies the key to an IN message header that contains an alternative email body. For example, if you send emails in text/html format and want to provide an alternative mail body for non-HTML email clients, set the alternative mail body with this key as a header. CamelMailAlternativeBody String camel.component.mail.attachments-content-transfer-encoding-resolver To use a custom AttachmentsContentTransferEncodingResolver to resolve what content-type-encoding to use for attachments.
The option is a org.apache.camel.component.mail.AttachmentsContentTransferEncodingResolver type. AttachmentsContentTransferEncodingResolver camel.component.mail.authenticator The authenticator for login. If set then the password and username are ignored. Can be used for tokens which can expire and therefore must be read dynamically. The option is a org.apache.camel.component.mail.MailAuthenticator type. MailAuthenticator camel.component.mail.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatically configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.mail.bcc Sets the BCC email address. Separate multiple email addresses with comma. String camel.component.mail.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. By default the consumer uses the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which are logged at WARN or ERROR level and ignored. false Boolean camel.component.mail.cc Sets the CC email address. Separate multiple email addresses with comma. String camel.component.mail.close-folder Whether the consumer should close the folder after polling. If this option is set to false and disconnect=false as well, the consumer keeps the folder open between polls. true Boolean camel.component.mail.configuration Sets the Mail configuration. The option is a org.apache.camel.component.mail.MailConfiguration type. MailConfiguration camel.component.mail.connection-timeout The connection timeout in milliseconds. 30000 Integer camel.component.mail.content-type The mail message content type. Use text/html for HTML mails. text/plain String camel.component.mail.content-type-resolver Resolver to determine Content-Type for file attachments. The option is a org.apache.camel.component.mail.ContentTypeResolver type. ContentTypeResolver camel.component.mail.copy-to After processing a mail message, it can be copied to a mail folder with the given name. You can override this configuration value with a header with the key copyTo, allowing you to copy messages to folder names configured at runtime. String camel.component.mail.debug-mode Enable debug mode on the underlying mail framework. The SUN Mail framework logs the debug messages to System.out by default. false Boolean camel.component.mail.decode-filename If set to true, the MimeUtility.decodeText method will be used to decode the filename. This is similar to setting the JVM system property mail.mime.decodefilename. false Boolean camel.component.mail.delete Deletes the messages after they have been processed. This is done by setting the DELETED flag on the mail message. If false, the SEEN flag is set instead. As of Camel 2.10 you can override this configuration option by setting a header with the key delete to determine if the mail should be deleted or not. false Boolean camel.component.mail.disconnect Whether the consumer should disconnect after polling. If enabled this forces Camel to connect on each poll. false Boolean camel.component.mail.enabled Whether to enable auto configuration of the mail component.
This is enabled by default. Boolean camel.component.mail.fetch-size Sets the maximum number of messages to consume during a poll. This can be used to avoid overloading a mail server, if a mailbox folder contains a lot of messages. Default value of -1 means no fetch size and all messages will be consumed. Setting the value to 0 is a special corner case, where Camel will not consume any messages at all. -1 Integer camel.component.mail.folder-name The folder to poll. INBOX String camel.component.mail.from The from email address. camel@localhost String camel.component.mail.handle-failed-message If the mail consumer cannot retrieve a given mail message, then this option allows the caused exception to be handled by the consumer's error handler. By enabling the bridge error handler on the consumer, the Camel routing error handler can handle the exception instead. The default behavior is that the consumer throws an exception and no mails from the batch can be routed by Camel. false Boolean camel.component.mail.header-filter-strategy To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter headers to and from the Camel message. The option is a org.apache.camel.spi.HeaderFilterStrategy type. HeaderFilterStrategy camel.component.mail.ignore-unsupported-charset Option to let Camel ignore unsupported charset in the local JVM when sending mails. If the charset is unsupported then charset=XXX (where XXX represents the unsupported charset) is removed from the content-type and it relies on the platform default instead. false Boolean camel.component.mail.ignore-uri-scheme Option to let Camel ignore the scheme in the endpoint URI when resolving the mail protocol, relying on the configured protocol instead. false Boolean camel.component.mail.java-mail-properties Sets the java mail options. Will clear any default properties and only use the properties provided for this method. The option is a java.util.Properties type. Properties camel.component.mail.java-mail-sender To use a custom org.apache.camel.component.mail.JavaMailSender for sending emails. The option is a org.apache.camel.component.mail.JavaMailSender type. JavaMailSender camel.component.mail.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy, the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time. false Boolean camel.component.mail.map-mail-message Specifies whether Camel should map the received mail message to Camel body/headers/attachments. If set to true, the body of the mail message is mapped to the body of the Camel IN message, the mail headers are mapped to IN headers, and the attachments to the Camel IN message attachments. If this option is set to false then the IN message contains a raw javax.mail.Message. You can retrieve this raw message by calling exchange.getIn().getBody(javax.mail.Message.class). true Boolean camel.component.mail.mime-decode-headers This option enables transparent MIME decoding and unfolding for mail headers.
false Boolean camel.component.mail.move-to After processing a mail message, it can be moved to a mail folder with the given name. You can override this configuration value, with a header with the key moveTo, allowing you to move messages to folder names configured at runtime. String camel.component.mail.password The password for login. See also setAuthenticator(MailAuthenticator). String camel.component.mail.peek Will mark the javax.mail.Message as peeked before processing the mail message. This applies to IMAPMessage messages types only. By using peek the mail will not be eager marked as SEEN on the mail server, which allows us to rollback the mail message if there is an error processing in Camel. true Boolean camel.component.mail.reply-to The Reply-To recipients (the receivers of the response mail). Separate multiple email addresses with a comma. String camel.component.mail.session Specifies the mail session that camel should use for all mail interactions. Useful in scenarios where mail sessions are created and managed by some other resource, such as a JavaEE container. When using a custom mail session, then the hostname and port from the mail session will be used (if configured on the session). The option is a javax.mail.Session type. Session camel.component.mail.skip-failed-message If the mail consumer cannot retrieve a given mail message, then this option allows to skip the message and move on to retrieve the mail message. The default behavior would be the consumer throws an exception and no mails from the batch would be able to be routed by Camel. false Boolean camel.component.mail.ssl-context-parameters To configure security using SSLContextParameters. The option is a org.apache.camel.support.jsse.SSLContextParameters type. SSLContextParameters camel.component.mail.subject The Subject of the message being sent. Note: Setting the subject in the header takes precedence over this option. String camel.component.mail.to Sets the To email address. Separate multiple email addresses with comma. String camel.component.mail.unseen Whether to limit by unseen mails only. true Boolean camel.component.mail.use-global-ssl-context-parameters Enable usage of global SSL context parameters. false Boolean camel.component.mail.use-inline-attachments Whether to use disposition inline or attachment. false Boolean camel.component.mail.username The username for login. See also setAuthenticator(MailAuthenticator). String camel.dataformat.mime-multipart.binary-content Defines whether the content of binary parts in the MIME multipart is binary (true) or Base-64 encoded (false) Default is false. false Boolean camel.dataformat.mime-multipart.enabled Whether to enable auto configuration of the mime-multipart data format. This is enabled by default. Boolean camel.dataformat.mime-multipart.headers-inline Defines whether the MIME-Multipart headers are part of the message body (true) or are set as Camel headers (false). Default is false. false Boolean camel.dataformat.mime-multipart.include-headers A regex that defines which Camel headers are also included as MIME headers into the MIME multipart. This will only work if headersInline is set to true. Default is to include no headers. String camel.dataformat.mime-multipart.multipart-sub-type Specify the subtype of the MIME Multipart. Default is mixed. mixed String camel.dataformat.mime-multipart.multipart-without-attachment Defines whether a message without attachment is also marshaled into a MIME Multipart (with only one body part). Default is false. false Boolean
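To show how several of the consumer options documented above fit together, the following is a minimal sketch of a route that combines an idempotent repository with the polling and filter options; the hostname, credentials, and the repository bean name are illustrative assumptions:

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.support.processor.idempotent.MemoryIdempotentRepository;

public class MailConsumerRouteSample extends RouteBuilder {
    @Override
    public void configure() {
        // Bind a simple in-memory repository under the name referenced in the URI.
        // In a clustered setup you would bind a shared repository instead.
        getContext().getRegistry().bind("mailRepo",
                MemoryIdempotentRepository.memoryIdempotentRepository(200));

        from("imap://mymailserver.com?username=foo&password=secret"
                + "&unseen=true&delay=60000"          // poll unseen mails once a minute
                + "&maxMessagesPerPoll=10"            // cap the batch size per poll
                + "&idempotentRepository=#mailRepo")  // skip already-processed message ids
            .to("log:newmail");
    }
}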
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-mail-starter</artifactId> </dependency>", "smtp://[username@]host[:port][?options] pop3://[username@]host[:port][?options] imap://[username@]host[:port][?options]", "smtps://[username@]host[:port][?options] pop3s://[username@]host[:port][?options] imaps://[username@]host[:port][?options]", "imap:host:port", "smtp://[username@]host[:port][?password=somepwd]", "smtp://host[:port]?password=somepwd&username=someuser", "smtp://mycompany.mailserver:30?password=tiger&username=scott", "KeyStoreParameters ksp = new KeyStoreParameters(); ksp.setResource(\"/users/home/server/truststore.jks\"); ksp.setPassword(\"keystorePassword\"); TrustManagersParameters tmp = new TrustManagersParameters(); tmp.setKeyStore(ksp); SSLContextParameters scp = new SSLContextParameters(); scp.setTrustManagers(tmp); Registry registry = registry.bind(\"sslContextParameters\", scp); from(...) .to(\"smtps://[email protected]&password=password&sslContextParameters=#sslContextParameters\");", "<camel:sslContextParameters id=\"sslContextParameters\"> <camel:trustManagers> <camel:keyStore resource=\"/users/home/server/truststore.jks\" password=\"keystorePassword\"/> </camel:trustManagers> </camel:sslContextParameters> <to uri=\"smtps://[email protected]&password=password&sslContextParameters=#sslContextParameters\"/>", "Map<String, Object> headers = new HashMap<String, Object>(); headers.put(\"to\", \"[email protected]\"); template.sendBodyAndHeaders(\"smtp://admin@[email protected]\", \"Hello World\", headers);", "Map<String, Object> headers = new HashMap<String, Object>(); headers.put(\"to\", \"[email protected] ; [email protected] ; [email protected]\");", "Map headers = new HashMap(); map.put(\"To\", \"Claus Ibsen <[email protected]>\"); map.put(\"From\", \"James Strachan <[email protected]>\"); map.put(\"Subject\", \"Camel is cool\");", "from(\"jms://queue:subscription\").to(\"smtp://[email protected]?password=secret\");", "from(\"imap://[email protected]?password=secret&unseen=true&delay=60000\") .to(\"seda://mails\");", "from(\"imaps://[email protected]&password=YOUR_PASSWORD\" + \"&delete=false&unseen=true&delay=60000\").to(\"log:newmail\");", "2008-05-08 06:32:09,640 DEBUG MailConsumer - Connecting to MailStore imaps//imap.gmail.com:993 (SSL enabled), folder=INBOX 2008-05-08 06:32:11,203 DEBUG MailConsumer - Polling mailfolder: imaps//imap.gmail.com:993 (SSL enabled), folder=INBOX 2008-05-08 06:32:11,640 DEBUG MailConsumer - Fetching 1 messages. Total 1 messages. 
2008-05-08 06:32:12,171 DEBUG MailConsumer - Processing message: messageNumber=[332], from=[James Bond <[email protected]>], [email protected]], subject=[ 2008-05-08 06:32:12,187 INFO newmail - Exchange[MailMessage: messageNumber=[332], from=[James Bond <[email protected]>], [email protected]], subject=[", "from(\"imaps://[email protected]&password=YOUR_PASSWORD\" + \"&delete=false&unseen=true&delay=60000\").process(new MyMailProcessor());", "public void process(Exchange exchange) throws Exception { // the API is a bit clunky so we need to loop AttachmentMessage attachmentMessage = exchange.getMessage(AttachmentMessage.class); Map<String, DataHandler> attachments = attachmentMessage.getAttachments(); if (attachments.size() > 0) { for (String name : attachments.keySet()) { DataHandler dh = attachments.get(name); // get the file name String filename = dh.getName(); // get the content and convert it to byte[] byte[] data = exchange.getContext().getTypeConverter() .convertTo(byte[].class, dh.getInputStream()); // write the data to a file FileOutputStream out = new FileOutputStream(filename); out.write(data); out.flush(); out.close(); } } }", "<split> <method beanType=\"org.apache.camel.component.mail.SplitAttachmentsExpression\"/> <to uri=\"mock:split\"/> </split>", "SplitAttachmentsExpression split = SplitAttachmentsExpression(true);", "<route> <from uri=\"imaps://mymailseerver?username=foo&password=secret&searchTerm.subjectOrBody=Camel\"/> <to uri=\"bean:myBean\"/> </route>", "<route> <from uri=\"imaps://mymailseerver?username=foo&password=secret&searchTerm.fromSentDate=now-24h\"/> <to uri=\"bean:myBean\"/> </route>", "<route> <from uri=\"imaps://mymailseerver?username=foo&password=secret&searchTerm.subject=Camel&searchTerm.fromSentDate=now-24h\"/> <to uri=\"bean:myBean\"/> </route>", "<bean id=\"mySearchTerm\" class=\"org.apache.camel.component.mail.SimpleSearchTerm\"> <property name=\"subject\" value=\"Order\"/> <property name=\"to\" value=\"[email protected]\"/> <property name=\"fromSentDate\" value=\"now\"/> </bean>", "<route> <from uri=\"imaps://mymailseerver?username=foo&password=secret&searchTerm=#mySearchTerm\"/> <to uri=\"bean:myBean\"/> </route>", "// we just want the unseen mails which is not spam SearchTermBuilder builder = new SearchTermBuilder(); builder.unseen().body(Op.not, \"Spam\").subject(Op.not, \"Spam\") // which was sent from either foo or bar .from(\"[email protected]\").from(Op.or, \"[email protected]\"); // .. and we could continue building the terms SearchTerm term = builder.build();", ".setHeader(\"from\", constant(\"[email protected]\")); .setHeader(\"java.smtp.from\", method(UUID.class, \"randomUUID\")); .to(\"smtp://mymailserver:1234\");" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-mail-component-starter
Chapter 11. Optimizing networking
Chapter 11. Optimizing networking The OpenShift SDN uses Open vSwitch, virtual extensible LAN (VXLAN) tunnels, OpenFlow rules, and iptables. This network can be tuned by using jumbo frames, network interface controller (NIC) offloads, multi-queue, and ethtool settings. OVN-Kubernetes uses Geneve (Generic Network Virtualization Encapsulation) instead of VXLAN as the tunnel protocol. VXLAN provides benefits over VLANs, such as an increase in networks from 4096 to over 16 million, and layer 2 connectivity across physical networks. This allows all pods behind a service to communicate with each other, even if they are running on different systems. VXLAN encapsulates all tunneled traffic in user datagram protocol (UDP) packets. However, this leads to increased CPU utilization. Both the outer and inner packets are subject to normal checksumming rules to guarantee that data is not corrupted during transit. Depending on CPU performance, this additional processing overhead can cause a reduction in throughput and increased latency when compared to traditional, non-overlay networks. Cloud, VM, and bare metal CPUs are typically capable of handling much more than 1 Gbps of network throughput. When using higher bandwidth links such as 10 or 40 Gbps, reduced performance can occur. This is a known issue in VXLAN-based environments and is not specific to containers or OpenShift Container Platform. Any network that relies on VXLAN tunnels will perform similarly because of the VXLAN implementation. If you are looking to push beyond 1 Gbps, you can: Evaluate network plugins that implement different routing techniques, such as border gateway protocol (BGP). Use VXLAN-offload capable network adapters. VXLAN-offload moves the packet checksum calculation and associated CPU overhead off of the system CPU and onto dedicated hardware on the network adapter. This frees up CPU cycles for use by pods and applications, and allows users to utilize the full bandwidth of their network infrastructure. VXLAN-offload does not reduce latency. However, CPU utilization is reduced even in latency tests. 11.1. Optimizing the MTU for your network There are two important maximum transmission units (MTUs): the network interface controller (NIC) MTU and the cluster network MTU. The NIC MTU is only configured at the time of OpenShift Container Platform installation. The MTU must be less than or equal to the maximum supported value of the NIC of your network. If you are optimizing for throughput, choose the largest possible value. If you are optimizing for lowest latency, choose a lower value. The OpenShift SDN network plugin overlay MTU must be less than the NIC MTU by 50 bytes at a minimum. This accounts for the SDN overlay header. So, on a normal ethernet network, this should be set to 1450 . On a jumbo frame ethernet network, this should be set to 8950 . These values should be set automatically by the Cluster Network Operator based on the NIC's configured MTU. Therefore, cluster administrators do not typically update these values. Amazon Web Services (AWS) and bare-metal environments support jumbo frame ethernet networks. This setting will help throughput, especially with transmission control protocol (TCP). For OVN and Geneve, the MTU must be less than the NIC MTU by 100 bytes at a minimum. Note This 50 byte overlay header is relevant to the OpenShift SDN network plugin. Other SDN solutions might require the value to be more or less. 11.2.
Recommended practices for installing large scale clusters When installing large clusters or scaling the cluster to larger node counts, set the cluster network cidr accordingly in your install-config.yaml file before you install the cluster: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 The default cluster network cidr 10.128.0.0/14 cannot be used if the cluster size is more than 500 nodes. It must be set to 10.128.0.0/12 or 10.128.0.0/10 to get to larger node counts beyond 500 nodes. 11.3. Impact of IPsec Because encrypting and decrypting node hosts uses CPU power, performance is affected both in throughput and CPU usage on the nodes when encryption is enabled, regardless of the IP security system being used. IPsec encrypts traffic at the IP payload level, before it hits the NIC, protecting fields that would otherwise be used for NIC offloading. This means that some NIC acceleration features might not be usable when IPsec is enabled and will lead to decreased throughput and increased CPU usage. 11.4. Additional resources Modifying advanced network configuration parameters Configuration parameters for the OVN-Kubernetes default CNI network provider Configuration parameters for the OpenShift SDN default CNI network provider Improving cluster stability in high latency environments using worker latency profiles
[ "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/scalability_and_performance/optimizing-networking
6.6. Checking Client Operating Versions
6.6. Checking Client Operating Versions Different versions of Red Hat Gluster Storage support different features. Servers and clients identify the features that they are capable of supporting using an operating version number, or op-version . The cluster.op-version parameter sets the required operating version for all volumes in a cluster on the server side. Each client supports a range of operating versions that are identified by a minimum ( min-op-version ) and maximum ( max-op-version ) supported operating version. Check the operating versions of the clients connected to a given volume by running the following command: For Red Hat Gluster Storage 3.2 and later Use all in place of the name of your volume if you want to see the operating versions of clients connected to all volumes in the cluster. Before Red Hat Gluster Storage 3.2: Perform a state dump for the volume whose clients you want to check. Locate the state dump directory. Locate the state dump file and grep for client information.
[ "gluster volume status volname clients", "gluster volume statedump volname", "gluster --print-statedumpdir", "grep -A4 \"identifier= client_ip \" statedumpfile" ]
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/check-op-version
3.11. The GFS2 Withdraw Function
3.11. The GFS2 Withdraw Function The GFS2 withdraw function is a data integrity feature of the GFS2 file system that prevents potential file system damage due to faulty hardware or kernel software. If the GFS2 kernel module detects an inconsistency while using a GFS2 file system on any given cluster node, it withdraws from the file system, leaving it unavailable to that node until it is unmounted and remounted (or the machine detecting the problem is rebooted). All other mounted GFS2 file systems remain fully functional on that node. (The GFS2 withdraw function is less severe than a kernel panic, which causes the node to be fenced.) The main categories of inconsistency that can cause a GFS2 withdraw are as follows: Inode consistency error Resource group consistency error Journal consistency error Magic number metadata consistency error Metadata type consistency error An example of an inconsistency that would cause a GFS2 withdraw is an incorrect block count for a file's inode. When GFS2 deletes a file, it systematically removes all the data and metadata blocks referenced by that file. When done, it checks the inode's block count. If the block count is not 1 (meaning all that is left is the disk inode itself), that indicates a file system inconsistency, since the inode's block count did not match the actual blocks used for the file. In many cases, the problem may have been caused by faulty hardware (faulty memory, motherboard, HBA, disk drives, cables, and so forth). It may also have been caused by a kernel bug (another kernel module accidentally overwriting GFS2's memory), or actual file system damage (caused by a GFS2 bug). In most cases, the GFS2 inconsistency is fixed by rebooting the cluster node. Before rebooting the cluster node, disable the GFS2 file system "clone" service from Pacemaker, which unmounts the file system on that node only. Warning Do not try to unmount and remount the file system manually with the umount and mount commands. You must use the pcs command, otherwise Pacemaker will detect the file system service has disappeared and fence the node. The consistency problem that caused the withdraw may make stopping the file system service impossible as it may cause the system to hang. If the problem persists after a remount, you should stop the file system service to unmount the file system from all nodes in the cluster, then perform a file system check with the fsck.gfs2 command before restarting the service with the following procedure. Reboot the affected node. Disable the non-clone file system service in Pacemaker to unmount the file system from every node in the cluster. From one node of the cluster, run the fsck.gfs2 command on the file system device to check for and repair any file system damage. Remount the GFS2 file system from all nodes by re-enabling the file system service: You can override the GFS2 withdraw function by mounting the file system with the -o errors=panic option specified in the file system service. When this option is specified, any errors that would normally cause the system to withdraw force a kernel panic instead. This stops the node's communications, which causes the node to be fenced. This is especially useful for clusters that are left unattended for long periods of time without monitoring or intervention. Internally, the GFS2 withdraw function works by disconnecting the locking protocol to ensure that all further file system operations result in I/O errors. 
As a result, when the withdraw occurs, it is normal to see a number of I/O errors from the device mapper device reported in the system logs.
[ "pcs resource disable --wait=100 mydata_fs_clone /sbin/reboot", "pcs resource disable --wait=100 mydata_fs", "fsck.gfs2 -y /dev/vg_mydata/mydata > /tmp/fsck.out", "pcs resource enable --wait=100 mydata_fs", "pcs resource update mydata_fs \"options=noatime,errors=panic\"" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/global_file_system_2/s1-manage-gfs2withdraw
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.8/html/basic_authentication/making-open-source-more-inclusive
14.3.7. Viewing an SSH CA Certificate
14.3.7. Viewing an SSH CA Certificate To view a certificate, use the -L option to list its contents. For example, for a user's certificate: To view a host certificate:
[ "~]USD ssh-keygen -L -f ~/.ssh/id_rsa-cert.pub /home/user1/.ssh/id_rsa-cert.pub: Type: [email protected] user certificate Public key: RSA-CERT 3c:9d:42:ed:65:b6:0f:18:bf:52:77:c6:02:0e:e5:86 Signing CA: RSA b1:8e:0b:ce:fe:1b:67:59:f1:74:cd:32:af:5f:c6:e8 Key ID: \"user1\" Serial: 0 Valid: from 2015-05-27T00:09:16 to 2016-06-09T00:09:16 Principals: user1 Critical Options: (none) Extensions: permit-X11-forwarding permit-agent-forwarding permit-port-forwarding permit-pty permit-user-rc", "~]# ssh-keygen -L -f /etc/ssh/ssh_host_rsa_key-cert.pub /etc/ssh/ssh_host_rsa_key-cert.pub: Type: [email protected] host certificate Public key: RSA-CERT 1d:71:61:50:05:9b:ec:64:34:27:a5:cc:67:24:03:23 Signing CA: RSA e4:d5:d1:4f:6b:fd:a2:e3:4e:5a:73:52:91:0b:b7:7a Key ID: \"host_name\" Serial: 0 Valid: from 2015-05-26T17:19:01 to 2016-06-08T17:19:01 Principals: host_name.example.com Critical Options: (none) Extensions: (none)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-viewing_an_ssh_ca_certificate
Chapter 6. Creating alerts in Datadog
Chapter 6. Creating alerts in Datadog Administrators can create monitors that track the metrics of the Red Hat Ceph Storage cluster and generate alerts. For example, if an OSD is down, Datadog can alert an administrator that one or more OSDs are down. Prerequisites Root-level access to the Ceph Monitor node. Appropriate Ceph key providing access to the Red Hat Ceph Storage cluster. Internet access. Procedure Click Monitors to see an overview of the Datadog monitors. To create a monitor, select Monitors->New Monitor . Select the detection method. For example, "Threshold Alert." Define the metric. To create an advanced alert, click on the Advanced... link. Then, select a metric from the combo box. For example, select the ceph.num_in_osds Ceph metric. Click Add Query+ to add another query. Select another metric from the combo box. For example, select the ceph.num_up_osds Ceph metric. In the Express these queries as: field, enter a-b , where a is the value of ceph.num_in_osds and b is the value of ceph.num_up_osds . When the difference is 1 or greater, there is at least one OSD down. Set the alert conditions. For example, set the trigger to be above or equal to , the threshold to in total and the time elapsed to 1 minute . Set the Alert threshold field to 1 . When at least one OSD is in the cluster and it is not up and running, the monitor will alert the user. Give the monitor a title in the input field below Preview and Edit . This is required to save the monitor. Enter a description of the alert in the text field. Note The text field supports metric variables and Markdown syntax. Add the recipients of the alert. This will add an email address to the text field. When the alert gets triggered, the recipients will receive the alert.
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/monitoring_ceph_with_datadog_guide/procedure-module-name-with-dashes_datadog
35.3. Configuring the Certificate Server Component
35.3. Configuring the Certificate Server Component To configure Certificate Server (CS) manually, open the /etc/pki/pki-tomcat/server.xml file. Set all occurrences of the sslVersionRangeStream and sslVersionRangeDatagram parameters to the following values: Alternatively, use the following command to replace the values for you: Restart CS:
[ "sslVersionRangeStream=\"tls1_2:tls1_2\" sslVersionRangeDatagram=\"tls1_2:tls1_2\"", "sed -i 's/tls1_[01]:tls1_2/tls1_2:tls1_2/g' /etc/pki/pki-tomcat/server.xml", "systemctl restart [email protected]" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/configure-tls-cs
Chapter 6. MachineConfig [machineconfiguration.openshift.io/v1]
Chapter 6. MachineConfig [machineconfiguration.openshift.io/v1] Description MachineConfig defines the configuration for a machine Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object MachineConfigSpec is the spec for MachineConfig 6.1.1. .spec Description MachineConfigSpec is the spec for MachineConfig Type object Property Type Description baseOSExtensionsContainerImage string BaseOSExtensionsContainerImage specifies the remote location that will be used to fetch the extensions container matching a new-format OS image config `` Config is an Ignition Config object. extensions array (string) extensions contains a list of additional features that can be enabled on the host fips boolean fips controls FIPS mode kernelArguments `` kernelArguments contains a list of kernel arguments to be added kernelType string kernelType contains which kernel we want to be running, such as default (traditional), realtime, or 64k-pages (aarch64 only). osImageURL string OSImageURL specifies the remote location that will be used to fetch the OS. 6.2. API endpoints The following API endpoints are available: /apis/machineconfiguration.openshift.io/v1/machineconfigs DELETE : delete collection of MachineConfig GET : list objects of kind MachineConfig POST : create a MachineConfig /apis/machineconfiguration.openshift.io/v1/machineconfigs/{name} DELETE : delete a MachineConfig GET : read the specified MachineConfig PATCH : partially update the specified MachineConfig PUT : replace the specified MachineConfig 6.2.1. /apis/machineconfiguration.openshift.io/v1/machineconfigs HTTP method DELETE Description delete collection of MachineConfig Table 6.1. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind MachineConfig Table 6.2. HTTP responses HTTP code Response body 200 - OK MachineConfigList schema 401 - Unauthorized Empty HTTP method POST Description create a MachineConfig Table 6.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23.
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.4. Body parameters Parameter Type Description body MachineConfig schema Table 6.5. HTTP responses HTTP code Response body 200 - OK MachineConfig schema 201 - Created MachineConfig schema 202 - Accepted MachineConfig schema 401 - Unauthorized Empty 6.2.2. /apis/machineconfiguration.openshift.io/v1/machineconfigs/{name} Table 6.6. Global path parameters Parameter Type Description name string name of the MachineConfig HTTP method DELETE Description delete a MachineConfig Table 6.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 6.8. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified MachineConfig Table 6.9. HTTP responses HTTP code Response body 200 - OK MachineConfig schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified MachineConfig Table 6.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.11. HTTP responses HTTP code Response body 200 - OK MachineConfig schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified MachineConfig Table 6.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request.
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.13. Body parameters Parameter Type Description body MachineConfig schema Table 6.14. HTTP responses HTTP code Response body 200 - OK MachineConfig schema 201 - Created MachineConfig schema 401 - Unauthorized Empty
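As a rough illustration of the read endpoint listed above, here is a minimal sketch that issues GET /apis/machineconfiguration.openshift.io/v1/machineconfigs/{name} with the JDK HTTP client; the API server URL, the token source, and the resource name are assumptions, and a real cluster typically also requires configuring TLS trust for the API server certificate:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class GetMachineConfigSample {
    public static void main(String[] args) throws Exception {
        String apiServer = "https://api.cluster.example.com:6443"; // assumption
        String token = System.getenv("K8S_BEARER_TOKEN");          // assumption

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(apiServer
                        + "/apis/machineconfiguration.openshift.io/v1/machineconfigs/"
                        + "99-worker-example"))                    // illustrative name
                .header("Authorization", "Bearer " + token)
                .header("Accept", "application/json")
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // 200 - OK returns the MachineConfig as JSON; 401 - Unauthorized otherwise
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}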
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/machine_apis/machineconfig-machineconfiguration-openshift-io-v1
Chapter 8. Installation configuration parameters for IBM Z and IBM LinuxONE
Chapter 8. Installation configuration parameters for IBM Z and IBM LinuxONE Before you deploy an OpenShift Container Platform cluster, you provide a customized install-config.yaml installation configuration file that describes the details for your environment. Note While this document refers only to IBM Z, all information in it also applies to IBM LinuxONE. 8.1. Available installation configuration parameters for IBM Z The following tables specify the required, optional, and IBM Z-specific installation configuration parameters that you can set as part of the installation process. Note After installation, you cannot modify these parameters in the install-config.yaml file. 8.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 8.1. Required parameters Parameter Description Values The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object Get a pull secret from Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 8.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. If you use the Red Hat OpenShift Networking OVN-Kubernetes network plugin, both IPv4 and IPv6 address families are supported. If you configure your cluster to use both IP address families, review the following requirements: Both IP families must use the same network interface for the default gateway. Both IP families must have the default gateway. You must specify IPv4 and IPv6 addresses in the same order for all network configuration parameters. For example, in the following configuration IPv4 addresses are listed before IPv6 addresses. networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112 Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions.
For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 8.2. Network parameters Parameter Description Values The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. The Red Hat OpenShift Networking network plugin to install. OVNKubernetes . OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . The IP address block for services. The default value is 172.30.0.0/16 . The OVN-Kubernetes network plugin supports only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. If you specify multiple IP kernel arguments, the machineNetwork.cidr value must be the CIDR of the primary network. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 8.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 8.3. Optional parameters Parameter Description Values A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs.
Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are s390x (the default). String Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use compute . The name of the machine pool. worker Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . The configuration for the machines that comprise the control plane. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are s390x (the default). String Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use controlPlane . The name of the machine pool. master Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of control plane machines to provision. Supported values are 3 , or 1 when deploying single-node OpenShift. The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Mint , Passthrough , Manual , or an empty string ( "" ). [1] Enable or disable FIPS mode. The default is false (disabled). 
If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String Specify one or more repositories that may also contain the same images. Array of strings How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content.
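To show how the parameters in the preceding tables fit together, the following is a minimal install-config.yaml sketch for IBM Z(R). It is an illustration only, not a configuration taken from this chapter: the domain, cluster name, replica counts, and secret values are placeholders, and platform is left as the empty object {} listed in the required parameters table. Adapt every value to your environment.
apiVersion: v1
baseDomain: example.com        # placeholder base domain
metadata:
  name: dev                    # placeholder cluster name
compute:                       # array of MachinePool objects
- architecture: s390x
  hyperthreading: Enabled
  name: worker
  replicas: 3
controlPlane:
  architecture: s390x
  hyperthreading: Enabled
  name: master
  replicas: 3
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16
platform: {}
fips: false
pullSecret: '<pull_secret>'    # paste the pull secret from Red Hat OpenShift Cluster Manager
sshKey: 'ssh-ed25519 AAAA...'  # public key for SSH access to cluster machines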
[ "apiVersion:", "baseDomain:", "metadata:", "metadata: name:", "platform:", "pullSecret:", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112", "networking:", "networking: networkType:", "networking: clusterNetwork:", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: clusterNetwork: cidr:", "networking: clusterNetwork: hostPrefix:", "networking: serviceNetwork:", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork:", "networking: machineNetwork: - cidr: 10.0.0.0/16", "networking: machineNetwork: cidr:", "additionalTrustBundle:", "capabilities:", "capabilities: baselineCapabilitySet:", "capabilities: additionalEnabledCapabilities:", "cpuPartitioningMode:", "compute:", "compute: architecture:", "compute: hyperthreading:", "compute: name:", "compute: platform:", "compute: replicas:", "featureSet:", "controlPlane:", "controlPlane: architecture:", "controlPlane: hyperthreading:", "controlPlane: name:", "controlPlane: platform:", "controlPlane: replicas:", "credentialsMode:", "fips:", "imageContentSources:", "imageContentSources: source:", "imageContentSources: mirrors:", "publish:", "sshKey:" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_ibm_z_and_ibm_linuxone/installation-config-parameters-ibm-z
Chapter 3. Installing and deploying OpenShift AI
Chapter 3. Installing and deploying OpenShift AI Red Hat OpenShift AI is a platform for data scientists and developers of artificial intelligence (AI) applications. It provides a fully supported environment that lets you rapidly develop, train, test, and deploy machine learning models on-premises or in the public cloud. OpenShift AI is provided as a managed cloud service add-on for Red Hat OpenShift or as self-managed software that you can install on-premises or in the public cloud on OpenShift. For information about installing OpenShift AI as self-managed software on your OpenShift cluster in a disconnected environment, see Installing and uninstalling OpenShift AI Self-Managed in a disconnected environment . For information about installing OpenShift AI as a managed cloud service add-on, see Installing and uninstalling OpenShift AI Cloud Service . Installing OpenShift AI involves the following high-level tasks: Confirm that your OpenShift cluster meets all requirements. See Requirements for OpenShift AI Self-Managed . Add administrative users for OpenShift. See Adding administrative users in OpenShift . Install the Red Hat OpenShift AI Operator. See Installing the Red Hat OpenShift AI Operator . Install OpenShift AI components. See Installing and managing Red Hat OpenShift AI components . Configure user and administrator groups to provide user access to OpenShift AI. See Adding users to OpenShift AI user groups . Access the OpenShift AI dashboard. See Accessing the OpenShift AI dashboard . Optionally, configure and enable your accelerators in OpenShift AI to ensure that your data scientists can use compute-heavy workloads in their models. See Enabling accelerators . 3.1. Requirements for OpenShift AI Self-Managed You must meet the following requirements before you can install Red Hat OpenShift AI on your Red Hat OpenShift cluster: Product subscriptions You must have a subscription for Red Hat OpenShift AI Self-Managed. If you want to install OpenShift AI Self-Managed in a Red Hat-managed cloud environment, you must have a subscription for one of the following platforms: Red Hat OpenShift Dedicated on Amazon Web Services (AWS) or Google Cloud Platform (GCP) Red Hat OpenShift Service on Amazon Web Services (ROSA Classic) Red Hat OpenShift Service on Amazon Web Services with hosted control planes (ROSA HCP) Microsoft Azure Red Hat OpenShift Contact your Red Hat account manager to purchase new subscriptions. If you do not yet have an account manager, complete the form at https://www.redhat.com/en/contact to request one. Cluster administrator access to your OpenShift cluster You must have an OpenShift cluster with cluster administrator access. Use an existing cluster, or create a cluster by following the steps in the relevant documentation: OpenShift Container Platform 4.14 or later: OpenShift Container Platform installation overview OpenShift Dedicated: Creating an OpenShift Dedicated cluster ROSA Classic: Install ROSA Classic clusters ROSA HCP: Install ROSA with HCP clusters Your cluster must have at least 2 worker nodes with at least 8 CPUs and 32 GiB RAM available for OpenShift AI to use when you install the Operator. To ensure that OpenShift AI is usable, additional cluster resources are required beyond the minimum requirements. To use OpenShift AI on single-node OpenShift, the node must have at least 32 CPUs and 128 GiB RAM. Your cluster is configured with a default storage class that can be dynamically provisioned. 
Confirm that a default storage class is configured by running the oc get storageclass command. If no storage classes are noted with (default) beside the name, follow the OpenShift Container Platform documentation to configure a default storage class: Changing the default storage class . For more information about dynamic provisioning, see Dynamic provisioning . Open Data Hub must not be installed on the cluster. For more information about managing the machines that make up an OpenShift cluster, see Overview of machine management . An identity provider configured for OpenShift Red Hat OpenShift AI uses the same authentication systems as Red Hat OpenShift Container Platform. See Understanding identity provider configuration for more information on configuring identity providers. Access to the cluster as a user with the cluster-admin role; the kubeadmin user is not allowed. Internet access Along with Internet access, the following domains must be accessible during the installation of OpenShift AI Self-Managed: cdn.redhat.com subscription.rhn.redhat.com registry.access.redhat.com registry.redhat.io quay.io For CUDA-based images, the following domains must be accessible: ngc.download.nvidia.cn developer.download.nvidia.com Create custom namespaces By default, OpenShift AI uses predefined namespaces, but you can define a custom namespace for the operator and DSCI.applicationNamespace as needed. Namespaces created by OpenShift AI typically include openshift or redhat in their name. Do not rename these system namespaces because they are required for OpenShift AI to function properly. If you are using custom namespaces, before installing the OpenShift AI Operator, you must have created and labeled them as required. Data science pipelines preparation Data science pipelines 2.0 contains an installation of Argo Workflows. If there is an existing installation of Argo Workflows that is not installed by data science pipelines on your cluster, data science pipelines will be disabled after you install OpenShift AI. Before installing OpenShift AI, ensure that your cluster does not have an existing installation of Argo Workflows that is not installed by data science pipelines, or remove the separate installation of Argo Workflows from your cluster. You can store your pipeline artifacts in an S3-compatible object storage bucket so that you do not consume local storage. To do this, you must first configure write access to your S3 bucket on your storage account. Install KServe dependencies To support the KServe component, which is used by the single-model serving platform to serve large models, you must also install Operators for Red Hat OpenShift Serverless and Red Hat OpenShift Service Mesh and perform additional configuration. For more information, see About the single-model serving platform . If you want to add an authorization provider for the single-model serving platform, you must install the Red Hat - Authorino Operator. For information, see Adding an authorization provider for the single-model serving platform . Install model registry dependencies (Technology Preview feature) To use the model registry component, you must also install Operators for Red Hat Authorino, Red Hat OpenShift Serverless, and Red Hat OpenShift Service Mesh. For more information about configuring the model registry component, see Configuring the model registry component . Access to object storage Components of OpenShift AI require or can use S3-compatible object storage such as AWS S3, MinIO, Ceph, or IBM Cloud Storage. 
An object store is a data storage mechanism that enables users to access their data either as an object or as a file. The S3 API is the recognized standard for HTTP-based access to object storage services. Object storage is required for the following components: Single- or multi-model serving platforms, to deploy stored models. See Deploying models on the single-model serving platform or Deploying a model by using the multi-model serving platform . Data science pipelines, to store artifacts, logs, and intermediate results. See Configuring a pipeline server and About pipeline logs . Object storage can be used by the following components: Workbenches, to access large datasets. See Adding a connection to your data science project . Distributed workloads, to pull input data from and push results to object storage. See Running distributed data science workloads from data science pipelines . Code executed inside a pipeline. For example, to store the resulting model in object storage. See Overview of pipelines in JupyterLab . 3.2. Adding administrative users in OpenShift Before you can install and configure OpenShift AI for your data scientist users, you must obtain OpenShift cluster administrator ( cluster-admin ) privileges. To assign cluster-admin privileges to a user, follow the steps in the relevant OpenShift documentation: OpenShift Container Platform: Creating a cluster admin OpenShift Dedicated: Managing OpenShift Dedicated administrators ROSA: Creating a cluster administrator user for quick cluster access 3.3. Configuring custom namespaces By default, OpenShift AI uses predefined namespaces, but you can define a custom namespace for the operator and DSCI.applicationNamespace as needed. Namespaces created by OpenShift AI typically include openshift or redhat in their name. Do not rename these system namespaces because they are required for OpenShift AI to function properly. Prerequisites You have access to an OpenShift AI cluster with cluster administrator privileges. You have downloaded and installed the OpenShift command-line interface (CLI). See Installing the OpenShift CLI . Procedure In a terminal window, if you are not already logged in to your OpenShift cluster as a cluster administrator, log in to the OpenShift CLI as shown in the following example: Enter the following command to create the custom namespace: If you are creating a namespace for a DSCI.applicationNamespace , enter the following command to add the correct label: 3.4. Installing the Red Hat OpenShift AI Operator This section shows how to install the Red Hat OpenShift AI Operator on your OpenShift cluster using the command-line interface (CLI) and the OpenShift web console. Note If you want to upgrade from a previous version of OpenShift AI rather than performing a new installation, see Upgrading OpenShift AI . Note If your OpenShift cluster uses a proxy to access the Internet, you can configure the proxy settings for the Red Hat OpenShift AI Operator. See Overriding proxy settings of an Operator for more information. 3.4.1. Installing the Red Hat OpenShift AI Operator by using the CLI The following procedure shows how to use the OpenShift command-line interface (CLI) to install the Red Hat OpenShift AI Operator on your OpenShift cluster. You must install the Operator before you can install OpenShift AI components on the cluster. Prerequisites You have a running OpenShift cluster, version 4.14 or greater, configured with a default storage class that can be dynamically provisioned. 
You have cluster administrator privileges for your OpenShift cluster. You have downloaded and installed the OpenShift command-line interface (CLI). See Installing the OpenShift CLI . Procedure Open a new terminal window. Follow these steps to log in to your OpenShift cluster as a cluster administrator: In the upper-right corner of the OpenShift web console, click your user name and select Copy login command . After you have logged in, click Display token . Copy the Log in with this token command and paste it in the OpenShift command-line interface (CLI). Create a namespace for installation of the Operator by performing the following actions: Create a namespace YAML file named rhods-operator-namespace.yaml . apiVersion: v1 kind: Namespace metadata: name: redhat-ods-operator 1 1 Defines the required redhat-ods-operator namespace for installation of the Operator. Create the namespace in your OpenShift cluster. You see output similar to the following: Create an operator group for installation of the Operator by performing the following actions: Create an OperatorGroup object custom resource (CR) file, for example, rhods-operator-group.yaml . apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: rhods-operator namespace: redhat-ods-operator 1 1 Defines the required redhat-ods-operator namespace. Create the OperatorGroup object in your OpenShift cluster. You see output similar to the following: Create a subscription for installation of the Operator by performing the following actions: Create a Subscription object CR file, for example, rhods-operator-subscription.yaml . apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: rhods-operator namespace: redhat-ods-operator 1 spec: name: rhods-operator channel: <channel> 2 source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: rhods-operator.x.y.z 3 1 Defines the required redhat-ods-operator namespace. 2 Sets the update channel. You must specify a value of fast , stable , stable-x.y , eus-x.y , or alpha . For more information, see Understanding update channels . 3 Optional: Sets the operator version. If you do not specify a value, the subscription defaults to the latest operator version. For more information, see the Red Hat OpenShift AI Self-Managed Life Cycle Knowledgebase article. Create the Subscription object in your OpenShift cluster to install the Operator. You see output similar to the following: Verification In the OpenShift web console, click Operators Installed Operators and confirm that the Red Hat OpenShift AI Operator shows one of the following statuses: Installing - installation is in progress; wait for this to change to Succeeded . This might take several minutes. Succeeded - installation is successful. In the web console, click Home Projects and confirm that the following project namespaces are visible and listed as Active : redhat-ods-applications redhat-ods-monitoring redhat-ods-operator Additional resources Installing and managing Red Hat OpenShift AI components Adding users to OpenShift AI user groups Adding Operators to a cluster 3.4.2. Installing the Red Hat OpenShift AI Operator by using the web console The following procedure shows how to use the OpenShift web console to install the Red Hat OpenShift AI Operator on your cluster. You must install the Operator before you can install OpenShift AI components on the cluster. 
Prerequisites You have a running OpenShift cluster, version 4.14 or greater, configured with a default storage class that can be dynamically provisioned. You have cluster administrator privileges for your OpenShift cluster. Procedure Log in to the OpenShift web console as a cluster administrator. In the web console, click Operators OperatorHub . On the OperatorHub page, locate the Red Hat OpenShift AI Operator by scrolling through the available Operators or by typing Red Hat OpenShift AI into the Filter by keyword box. Click the Red Hat OpenShift AI tile. The Red Hat OpenShift AI information pane opens. Select a Channel . For information about subscription update channels, see Understanding update channels . Select a Version . Click Install . The Install Operator page opens. Review or change the selected channel and version as needed. For Installation mode , note that the only available value is All namespaces on the cluster (default) . This installation mode makes the Operator available to all namespaces in the cluster. For Installed Namespace , select Operator recommended Namespace: redhat-ods-operator . For Update approval , select one of the following update strategies: Automatic : New updates in the update channel are installed as soon as they become available. Manual : A cluster administrator must approve any new updates before installation begins. Important By default, the Red Hat OpenShift AI Operator follows a sequential update process. This means that if there are several versions between the current version and the target version, Operator Lifecycle Manager (OLM) upgrades the Operator to each of the intermediate versions before it upgrades it to the final, target version. If you configure automatic upgrades, OLM automatically upgrades the Operator to the latest available version. If you configure manual upgrades, a cluster administrator must manually approve each sequential update between the current version and the final, target version. For information about supported versions, see the Red Hat OpenShift AI Life Cycle Knowledgebase article. Click Install . The Installing Operators pane appears. When the installation finishes, a checkmark appears next to the Operator name. Verification In the OpenShift web console, click Operators Installed Operators and confirm that the Red Hat OpenShift AI Operator shows one of the following statuses: Installing - installation is in progress; wait for this to change to Succeeded . This might take several minutes. Succeeded - installation is successful. In the web console, click Home Projects and confirm that the following project namespaces are visible and listed as Active : redhat-ods-applications redhat-ods-monitoring redhat-ods-operator Additional resources Installing and managing Red Hat OpenShift AI components Adding users to OpenShift AI user groups Adding Operators to a cluster 3.5. Installing and managing Red Hat OpenShift AI components You can use the OpenShift command-line interface (CLI) or OpenShift web console to install and manage components of Red Hat OpenShift AI on your OpenShift cluster. 3.5.1. Installing Red Hat OpenShift AI components by using the CLI To install Red Hat OpenShift AI components by using the OpenShift command-line interface (CLI), you must create and configure a DataScienceCluster object. Important The following procedure describes how to create and configure a DataScienceCluster object to install Red Hat OpenShift AI components as part of a new installation. 
For information about changing the installation status of OpenShift AI components after installation, see Updating the installation status of Red Hat OpenShift AI components by using the web console . For information about upgrading OpenShift AI, see Upgrading OpenShift AI Self-Managed . Prerequisites The Red Hat OpenShift AI Operator is installed on your OpenShift cluster. See Installing the Red Hat OpenShift AI Operator . You have cluster administrator privileges for your OpenShift cluster. You have downloaded and installed the OpenShift command-line interface (CLI). See Installing the OpenShift CLI . Procedure Open a new terminal window. Follow these steps to log in to your OpenShift cluster as a cluster administrator: In the upper-right corner of the OpenShift web console, click your user name and select Copy login command . After you have logged in, click Display token . Copy the Log in with this token command and paste it in the OpenShift command-line interface (CLI). Create a DataScienceCluster object custom resource (CR) file, for example, rhods-operator-dsc.yaml . apiVersion: datasciencecluster.opendatahub.io/v1 kind: DataScienceCluster metadata: name: default-dsc spec: components: codeflare: managementState: Removed dashboard: managementState: Removed datasciencepipelines: managementState: Removed kserve: managementState: Removed 1 2 kueue: managementState: Removed modelmeshserving: managementState: Removed ray: managementState: Removed trainingoperator: managementState: Removed trustyai: managementState: Removed workbenches: managementState: Removed 1 To fully install the KServe component, which is used by the single-model serving platform to serve large models, you must install Operators for Red Hat OpenShift Service Mesh and Red Hat OpenShift Serverless and perform additional configuration. See Installing the single-model serving platform . 2 If you have not enabled the KServe component (that is, you set the value of the managementState field to Removed ), you must also disable the dependent Service Mesh component to avoid errors. See Disabling KServe dependencies . In the spec.components section of the CR, for each OpenShift AI component shown, set the value of the managementState field to either Managed or Removed . These values are defined as follows: Managed The Operator actively manages the component, installs it, and tries to keep it active. The Operator will upgrade the component only if it is safe to do so. Removed The Operator actively manages the component but does not install it. If the component is already installed, the Operator will try to remove it. Important To learn how to fully install the KServe component, which is used by the single-model serving platform to serve large models, see Installing the single-model serving platform . If you have not enabled the KServe component (that is, you set the value of the managementState field to Removed ), you must also disable the dependent Service Mesh component to avoid errors. See Disabling KServe dependencies . To learn how to install the distributed workloads components, see Installing the distributed workloads components . Create the DataScienceCluster object in your OpenShift cluster to install the specified OpenShift AI components. You see output similar to the following: Verification Confirm that there is a running pod for each component: In the OpenShift web console, click Workloads Pods . In the Project list at the top of the page, select redhat-ods-applications . 
In the applications namespace, confirm that there are running pods for each of the OpenShift AI components that you installed. Confirm the status of all installed components: In the OpenShift web console, click Operators Installed Operators . Click the Red Hat OpenShift AI Operator. Click the Data Science Cluster tab and select the DataScienceCluster object called default-dsc . Select the YAML tab. In the installedComponents section, confirm that the components you installed have a status value of true . Note If a component shows with the component-name: {} format in the spec.components section of the CR, the component is not installed. 3.5.2. Installing Red Hat OpenShift AI components by using the web console To install Red Hat OpenShift AI components by using the OpenShift web console, you must create and configure a DataScienceCluster object. Important The following procedure describes how to create and configure a DataScienceCluster object to install Red Hat OpenShift AI components as part of a new installation. For information about changing the installation status of OpenShift AI components after installation, see Updating the installation status of Red Hat OpenShift AI components by using the web console . For information about upgrading OpenShift AI, see Upgrading OpenShift AI Self-Managed . Prerequisites The Red Hat OpenShift AI Operator is installed on your OpenShift cluster. See Installing the Red Hat OpenShift AI Operator . You have cluster administrator privileges for your OpenShift cluster. Procedure Log in to the OpenShift web console as a cluster administrator. In the web console, click Operators Installed Operators and then click the Red Hat OpenShift AI Operator. Click the Data Science Cluster tab. Click Create DataScienceCluster . For Configure via , select YAML view . An embedded YAML editor opens showing a default custom resource (CR) for the DataScienceCluster object, similar to the following example: apiVersion: datasciencecluster.opendatahub.io/v1 kind: DataScienceCluster metadata: name: default-dsc spec: components: codeflare: managementState: Removed dashboard: managementState: Removed datasciencepipelines: managementState: Removed kserve: managementState: Removed 1 2 kueue: managementState: Removed modelmeshserving: managementState: Removed ray: managementState: Removed trainingoperator: managementState: Removed trustyai: managementState: Removed workbenches: managementState: Removed 1 To fully install the KServe component, which is used by the single-model serving platform to serve large models, you must install Operators for Red Hat OpenShift Service Mesh and Red Hat OpenShift Serverless and perform additional configuration. See Installing the single-model serving platform . 2 If you have not enabled the KServe component (that is, you set the value of the managementState field to Removed ), you must also disable the dependent Service Mesh component to avoid errors. See Disabling KServe dependencies . In the spec.components section of the CR, for each OpenShift AI component shown, set the value of the managementState field to either Managed or Removed . These values are defined as follows: Managed The Operator actively manages the component, installs it, and tries to keep it active. The Operator will upgrade the component only if it is safe to do so. Removed The Operator actively manages the component but does not install it. If the component is already installed, the Operator will try to remove it. 
Important To learn how to fully install the KServe component, which is used by the single-model serving platform to serve large models, see Installing the single-model serving platform . If you have not enabled the KServe component (that is, you set the value of the managementState field to Removed ), you must also disable the dependent Service Mesh component to avoid errors. See Disabling KServe dependencies . To learn how to install the distributed workloads components, see Installing the distributed workloads components . Click Create . Verification Confirm that there is a running pod for each component: In the OpenShift web console, click Workloads Pods . In the Project list at the top of the page, select redhat-ods-applications . In the applications namespace, confirm that there are running pods for each of the OpenShift AI components that you installed. Confirm the status of all installed components: In the OpenShift web console, click Operators Installed Operators . Click the Red Hat OpenShift AI Operator. Click the Data Science Cluster tab and select the DataScienceCluster object called default-dsc . Select the YAML tab. In the installedComponents section, confirm that the components you installed have a status value of true . Note If a component shows with the component-name: {} format in the spec.components section of the CR, the component is not installed. 3.5.3. Updating the installation status of Red Hat OpenShift AI components by using the web console You can use the OpenShift web console to update the installation status of components of Red Hat OpenShift AI on your OpenShift cluster. Important If you upgraded OpenShift AI, the upgrade process automatically used the values of the previous version's DataScienceCluster object. New components are not automatically added to the DataScienceCluster object. After upgrading OpenShift AI: Inspect the default DataScienceCluster object to check and optionally update the managementState status of the existing components. Add any new components to the DataScienceCluster object. Prerequisites The Red Hat OpenShift AI Operator is installed on your OpenShift cluster. You have cluster administrator privileges for your OpenShift cluster. Procedure Log in to the OpenShift web console as a cluster administrator. In the web console, click Operators Installed Operators and then click the Red Hat OpenShift AI Operator. Click the Data Science Cluster tab. On the DataScienceClusters page, click the default object. Click the YAML tab. An embedded YAML editor opens showing the default custom resource (CR) for the DataScienceCluster object, similar to the following example: apiVersion: datasciencecluster.opendatahub.io/v1 kind: DataScienceCluster metadata: name: default-dsc spec: components: codeflare: managementState: Removed dashboard: managementState: Removed datasciencepipelines: managementState: Removed kserve: managementState: Removed kueue: managementState: Removed modelmeshserving: managementState: Removed ray: managementState: Removed trainingoperator: managementState: Removed trustyai: managementState: Removed workbenches: managementState: Removed In the spec.components section of the CR, for each OpenShift AI component shown, set the value of the managementState field to either Managed or Removed . These values are defined as follows: Managed The Operator actively manages the component, installs it, and tries to keep it active. The Operator will upgrade the component only if it is safe to do so. 
Removed The Operator actively manages the component but does not install it. If the component is already installed, the Operator will try to remove it. Important To learn how to install the KServe component, which is used by the single-model serving platform to serve large models, see Installing the single-model serving platform . If you have not enabled the KServe component (that is, you set the value of the managementState field to Removed ), you must also disable the dependent Service Mesh component to avoid errors. See Disabling KServe dependencies . To learn how to install the distributed workloads feature, see Installing the distributed workloads components . Click Save . For any components that you updated, OpenShift AI initiates a rollout that affects all pods to use the updated image. Verification Confirm that there is a running pod for each component: In the OpenShift web console, click Workloads Pods . In the Project list at the top of the page, select redhat-ods-applications . In the applications namespace, confirm that there are running pods for each of the OpenShift AI components that you installed. Confirm the status of all installed components: In the OpenShift web console, click Operators Installed Operators . Click the Red Hat OpenShift AI Operator. Click the Data Science Cluster tab and select the DataScienceCluster object called default-dsc . Select the YAML tab. In the installedComponents section, confirm that the components you installed have a status value of true . Note If a component shows with the component-name: {} format in the spec.components section of the CR, the component is not installed.
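If you prefer to verify the installation from the CLI instead of the web console, the following is a minimal sketch using standard oc commands; the default-dsc name matches the examples in this section, and the exact pod names vary with the components you enabled:
# Confirm that component pods are running in the applications project
oc get pods -n redhat-ods-applications
# Inspect the DataScienceCluster; installed components report true under installedComponents in the status
oc get datasciencecluster default-dsc -o yaml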
[ "login <openshift_cluster_url> -u <admin_username> -p <password>", "create namespace <custom_namespace>", "label namespace <application_namespace> opendatahub.io/application-namespace=true", "oc login --token= <token> --server= <openshift_cluster_url>", "apiVersion: v1 kind: Namespace metadata: name: redhat-ods-operator 1", "oc create -f rhods-operator-namespace.yaml", "namespace/redhat-ods-operator created", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: rhods-operator namespace: redhat-ods-operator 1", "oc create -f rhods-operator-group.yaml", "operatorgroup.operators.coreos.com/rhods-operator created", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: rhods-operator namespace: redhat-ods-operator 1 spec: name: rhods-operator channel: <channel> 2 source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: rhods-operator.x.y.z 3", "oc create -f rhods-operator-subscription.yaml", "subscription.operators.coreos.com/rhods-operator created", "oc login --token= <token> --server= <openshift_cluster_url>", "apiVersion: datasciencecluster.opendatahub.io/v1 kind: DataScienceCluster metadata: name: default-dsc spec: components: codeflare: managementState: Removed dashboard: managementState: Removed datasciencepipelines: managementState: Removed kserve: managementState: Removed 1 2 kueue: managementState: Removed modelmeshserving: managementState: Removed ray: managementState: Removed trainingoperator: managementState: Removed trustyai: managementState: Removed workbenches: managementState: Removed", "oc create -f rhods-operator-dsc.yaml", "datasciencecluster.datasciencecluster.opendatahub.io/default created", "apiVersion: datasciencecluster.opendatahub.io/v1 kind: DataScienceCluster metadata: name: default-dsc spec: components: codeflare: managementState: Removed dashboard: managementState: Removed datasciencepipelines: managementState: Removed kserve: managementState: Removed 1 2 kueue: managementState: Removed modelmeshserving: managementState: Removed ray: managementState: Removed trainingoperator: managementState: Removed trustyai: managementState: Removed workbenches: managementState: Removed", "apiVersion: datasciencecluster.opendatahub.io/v1 kind: DataScienceCluster metadata: name: default-dsc spec: components: codeflare: managementState: Removed dashboard: managementState: Removed datasciencepipelines: managementState: Removed kserve: managementState: Removed kueue: managementState: Removed modelmeshserving: managementState: Removed ray: managementState: Removed trainingoperator: managementState: Removed trustyai: managementState: Removed workbenches: managementState: Removed" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/installing_and_uninstalling_openshift_ai_self-managed/installing-and-deploying-openshift-ai_install
2.3. Starting the Piranha Configuration Tool Service
2.3. Starting the Piranha Configuration Tool Service After you have set the password for the Piranha Configuration Tool , start or restart the piranha-gui service located in /etc/rc.d/init.d/piranha-gui . To do this, type the following command as root: /sbin/service piranha-gui start or /sbin/service piranha-gui restart Issuing this command starts a private session of the Apache HTTP Server by calling the symbolic link /usr/sbin/piranha_gui -> /usr/sbin/httpd . For security reasons, the piranha-gui version of httpd runs as the piranha user in a separate process. The fact that piranha-gui leverages the httpd service means that: The Apache HTTP Server must be installed on the system. Stopping or restarting the Apache HTTP Server by means of the service command stops the piranha-gui service. Warning If the command /sbin/service httpd stop or /sbin/service httpd restart is issued on an LVS router, you must start the piranha-gui service by issuing the following command: /sbin/service piranha-gui start The piranha-gui service is all that is necessary to begin configuring Load Balancer Add-On. However, if you are configuring Load Balancer Add-On remotely, the sshd service is also required. You do not need to start the pulse service until configuration using the Piranha Configuration Tool is complete. See Section 4.8, "Starting the Load Balancer Add-On" for information on starting the pulse service. 2.3.1. Configuring the Piranha Configuration Tool Web Server Port The Piranha Configuration Tool runs on port 3636 by default. To change this port number, change the line Listen 3636 in Section 2 of the piranha-gui Web server configuration file /etc/sysconfig/ha/conf/httpd.conf . To use the Piranha Configuration Tool , you need at minimum a text-only Web browser. If you start a Web browser on the primary LVS router, open the location http:// localhost :3636 . You can reach the Piranha Configuration Tool from anywhere by means of a Web browser by replacing localhost with the host name or IP address of the primary LVS router. When your browser connects to the Piranha Configuration Tool , you must log in to access the configuration services. Enter piranha in the Username field and the password set with piranha-passwd in the Password field. Now that the Piranha Configuration Tool is running, you may wish to consider limiting who has access to the tool over the network. The next section reviews ways to accomplish this task.
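As a sketch of the port change described in Section 2.3.1, the following commands move the tool from the default port 3636 to port 3737 (an arbitrary example value, not a recommendation) and restart the service:
# Update the Listen directive in the piranha-gui Web server configuration
sed -i 's/^Listen 3636/Listen 3737/' /etc/sysconfig/ha/conf/httpd.conf
/sbin/service piranha-gui restart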
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/load_balancer_administration/s1-lvs-piranha-service-vsa
13.8. Date & Time
13.8. Date & Time To configure time zone, date, and optionally settings for network time, select Date & Time at the Installation Summary screen. There are three ways for you to select a time zone: Using your mouse, click on the interactive map to select a specific city. A red pin appears indicating your selection. You can also scroll through the Region and City drop-down menus at the top of the screen to select your time zone. Select Etc at the bottom of the Region drop-down menu, then select your time zone in the menu adjusted to GMT/UTC, for example GMT+1 . If your city is not available on the map or in the drop-down menu, select the nearest major city in the same time zone. Alternatively you can use a Kickstart file, which will allow you to specify some additional time zones which are not available in the graphical interface. See the timezone command in timezone (required) for details. Note The list of available cities and regions comes from the Time Zone Database (tzdata) public domain, which is maintained by the Internet Assigned Numbers Authority (IANA). Red Hat cannot add cities or regions into this database. You can find more information at the official website, available at http://www.iana.org/time-zones . Specify a time zone even if you plan to use NTP (Network Time Protocol) to maintain the accuracy of the system clock. If you are connected to the network, the Network Time switch will be enabled. To set the date and time using NTP, leave the Network Time switch in the ON position and click the configuration icon to select which NTP servers Red Hat Enterprise Linux should use. To set the date and time manually, move the switch to the OFF position. The system clock should use your time zone selection to display the correct date and time at the bottom of the screen. If they are still incorrect, adjust them manually. Note that NTP servers might be unavailable at the time of installation. In such a case, enabling them will not set the time automatically. When the servers become available, the date and time will update. Once you have made your selection, click Done to return to the Installation Summary screen. Note To change your time zone configuration after you have completed the installation, visit the Date & Time section of the Settings dialog window.
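For the Kickstart alternative mentioned above, the following is a minimal sketch; the time zone and NTP server addresses are illustrative values, not values taken from this section:
# Kickstart: set the time zone and pull time from the listed NTP servers
timezone Europe/Prague --ntpservers=0.rhel.pool.ntp.org,1.rhel.pool.ntp.org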
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/sect-date-time-configuration-ppc
6.9. Configuring Global Cluster Resources
6.9. Configuring Global Cluster Resources You can configure two types of resources: Global - Resources that are available to any service in the cluster. Service-specific - Resources that are available to only one service. To see a list of currently configured resources and services in the cluster, execute the following command: To add a global cluster resource, execute the following command. You can add a resource that is local to a particular service when you configure the service, as described in Section 6.10, "Adding a Cluster Service to the Cluster" . For example, the following command adds a global file system resource to the cluster configuration file on node01.example.com . The name of the resource is web_fs , the file system device is /dev/sdd2 , the file system mountpoint is /var/www , and the file system type is ext3 . For information about the available resource types and resource options, see Appendix B, HA Resource Parameters . To remove a global resource, execute the following command: If you need to modify the parameters of an existing global resource, you can remove the resource and configure it again. Note that when you have finished configuring all of the components of your cluster, you will need to sync the cluster configuration file to all of the nodes, as described in Section 6.15, "Propagating the Configuration File to the Cluster Nodes" .
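As a concrete sketch of the remove-and-reconfigure approach for modifying an existing global resource, the following reuses the web_fs example; the new device and file system type are illustrative values only:
ccs -h node01.example.com --rmresource fs name=web_fs
ccs -h node01.example.com --addresource fs name=web_fs device=/dev/sdd3 mountpoint=/var/www fstype=ext4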
[ "ccs -h host --lsservices", "ccs -h host --addresource resourcetype [resource options]", "ccs -h node01.example.com --addresource fs name=web_fs device=/dev/sdd2 mountpoint=/var/www fstype=ext3", "ccs -h host --rmresource resourcetype [resource options]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-config-add-resource-ccs-ca
Chapter 10. Understanding and creating service accounts
Chapter 10. Understanding and creating service accounts 10.1. Service accounts overview A service account is an OpenShift Container Platform account that allows a component to directly access the API. Service accounts are API objects that exist within each project. Service accounts provide a flexible way to control API access without sharing a regular user's credentials. When you use the OpenShift Container Platform CLI or web console, your API token authenticates you to the API. You can associate a component with a service account so that the component can access the API without using a regular user's credentials. For example, service accounts can allow: Replication controllers to make API calls to create or delete pods. Applications inside containers to make API calls for discovery purposes. External applications to make API calls for monitoring or integration purposes. Each service account's user name is derived from its project and name: system:serviceaccount:<project>:<name> Every service account is also a member of two groups: Group Description system:serviceaccounts Includes all service accounts in the system. system:serviceaccounts:<project> Includes all service accounts in the specified project. Each service account automatically contains two secrets: An API token Credentials for the OpenShift Container Registry The generated API token and registry credentials do not expire, but you can revoke them by deleting the secret. When you delete the secret, a new one is automatically generated to take its place. 10.2. Creating service accounts You can create a service account in a project and grant it permissions by binding it to a role. Procedure Optional: To view the service accounts in the current project: $ oc get sa Example output NAME SECRETS AGE builder 2 2d default 2 2d deployer 2 2d To create a new service account in the current project: $ oc create sa <service_account_name> 1 1 To create a service account in a different project, specify -n <project_name> . Example output serviceaccount "robot" created Tip You can alternatively apply the following YAML to create the service account: apiVersion: v1 kind: ServiceAccount metadata: name: <service_account_name> namespace: <current_project> Optional: View the secrets for the service account: $ oc describe sa robot Example output Name: robot Namespace: project1 Labels: <none> Annotations: <none> Image pull secrets: robot-dockercfg-qzbhb Mountable secrets: robot-dockercfg-qzbhb Tokens: robot-token-f4khf Events: <none> 10.3. Examples of granting roles to service accounts You can grant roles to service accounts in the same way that you grant roles to a regular user account. You can modify the service accounts for the current project. For example, to add the view role to the robot service account in the top-secret project: $ oc policy add-role-to-user view system:serviceaccount:top-secret:robot Tip You can alternatively apply the following YAML to add the role: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: view namespace: top-secret roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: view subjects: - kind: ServiceAccount name: robot namespace: top-secret You can also grant access to a specific service account in a project. 
For example, from the project to which the service account belongs, use the -z flag and specify the <service_account_name> : $ oc policy add-role-to-user <role_name> -z <service_account_name> Important If you want to grant access to a specific service account in a project, use the -z flag. Using this flag helps prevent typos and ensures that access is granted to only the specified service account. Tip You can alternatively apply the following YAML to add the role: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: <rolebinding_name> namespace: <current_project_name> roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: <role_name> subjects: - kind: ServiceAccount name: <service_account_name> namespace: <current_project_name> To modify a different namespace, you can use the -n option to indicate the project namespace it applies to, as shown in the following examples. For example, to allow all service accounts in all projects to view resources in the my-project project: $ oc policy add-role-to-group view system:serviceaccounts -n my-project Tip You can alternatively apply the following YAML to add the role: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: view namespace: my-project roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: view subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:serviceaccounts To allow all service accounts in the managers project to edit resources in the my-project project: $ oc policy add-role-to-group edit system:serviceaccounts:managers -n my-project Tip You can alternatively apply the following YAML to add the role: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: edit namespace: my-project roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: edit subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:serviceaccounts:managers
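Putting the steps above together, a minimal sketch that creates the robot service account in the current project and grants it the view role by using the -z flag:
$ oc create sa robot
$ oc policy add-role-to-user view -z robot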
[ "system:serviceaccount:<project>:<name>", "oc get sa", "NAME SECRETS AGE builder 2 2d default 2 2d deployer 2 2d", "oc create sa <service_account_name> 1", "serviceaccount \"robot\" created", "apiVersion: v1 kind: ServiceAccount metadata: name: <service_account_name> namespace: <current_project>", "oc describe sa robot", "Name: robot Namespace: project1 Labels: <none> Annotations: <none> Image pull secrets: robot-dockercfg-qzbhb Mountable secrets: robot-dockercfg-qzbhb Tokens: robot-token-f4khf Events: <none>", "oc policy add-role-to-user view system:serviceaccount:top-secret:robot", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: view namespace: top-secret roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: view subjects: - kind: ServiceAccount name: robot namespace: top-secret", "oc policy add-role-to-user <role_name> -z <service_account_name>", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: <rolebinding_name> namespace: <current_project_name> roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: <role_name> subjects: - kind: ServiceAccount name: <service_account_name> namespace: <current_project_name>", "oc policy add-role-to-group view system:serviceaccounts -n my-project", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: view namespace: my-project roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: view subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:serviceaccounts", "oc policy add-role-to-group edit system:serviceaccounts:managers -n my-project", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: edit namespace: my-project roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: edit subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:serviceaccounts:managers" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/authentication_and_authorization/understanding-and-creating-service-accounts
Managing configurations using Ansible integration
Managing configurations using Ansible integration Red Hat Satellite 6.15 Configure Ansible integration in Satellite and use Ansible roles and playbooks to configure your hosts Red Hat Satellite Documentation Team [email protected]
[ "satellite-installer --enable-foreman-proxy-plugin-ansible", "satellite-installer --enable-foreman-plugin-ansible --enable-foreman-proxy-plugin-ansible", "subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms", "subscription-manager repos --enable=rhel-7-server-extras-rpms", "satellite-maintain packages install rhel-system-roles", "--- collections: - name: my_namespace.my_collection version: 1.2.3", "ansible-vault encrypt /etc/ansible/roles/ Role_Name /vars/main.yml", "chgrp foreman-proxy /etc/ansible/roles/ Role_Name /vars/main.yml chmod 0640 /etc/ansible/roles/ Role_Name /vars/main.yml", "chown foreman-proxy:foreman-proxy /usr/share/foreman-proxy/.ansible_vault_password chmod 0400 /usr/share/foreman-proxy/.ansible_vault_password", "[defaults] vault_password_file = /usr/share/foreman-proxy/.ansible_vault_password", "name = Reboot and host.name = staging.example.com name = Reboot and host.name ~ *.staging.example.com name = \"Restart service\" and host_group.name = webservers", "satellite-installer --foreman-proxy-plugin-remote-execution-script-mode=ssh", "dnf install katello-pull-transport-migrate", "yum install katello-pull-transport-migrate", "systemctl status yggdrasild", "hammer job-template create --file \" Path_to_My_Template_File \" --job-category \" My_Category_Name \" --name \" My_Template_Name \" --provider-type SSH", "curl -X GET -H 'Content-Type: application/json' https:// satellite.example.com /ansible/api/v2/ansible_playbooks/fetch?proxy_id= My_capsule_ID", "curl -X PUT -H 'Content-Type: application/json' -d '{ \"playbook_names\": [\" My_Playbook_Name \"] }' https:// satellite.example.com /ansible/api/v2/ansible_playbooks/sync?proxy_id= My_capsule_ID", "curl -X PUT -H 'Content-Type: application/json' https:// satellite.example.com /ansible/api/v2/ansible_playbooks/sync?proxy_id= My_capsule_ID", "hammer settings set --name=remote_execution_fallback_proxy --value=true", "hammer settings set --name=remote_execution_global_proxy --value=true", "mkdir /My_Remote_Working_Directory", "chcon --reference=/tmp /My_Remote_Working_Directory", "satellite-installer --foreman-proxy-plugin-ansible-working-dir /My_Remote_Working_Directory", "ssh-copy-id -i ~foreman-proxy/.ssh/id_rsa_foreman_proxy.pub [email protected]", "ssh -i ~foreman-proxy/.ssh/id_rsa_foreman_proxy [email protected]", "ssh-keygen -p -f ~foreman-proxy/.ssh/id_rsa_foreman_proxy", "mkdir ~/.ssh", "curl https:// capsule.example.com :9090/ssh/pubkey >> ~/.ssh/authorized_keys", "chmod 700 ~/.ssh", "chmod 600 ~/.ssh/authorized_keys", "<%= snippet 'remote_execution_ssh_keys' %>", "id -u foreman-proxy", "umask 077", "mkdir -p \"/var/kerberos/krb5/user/ My_User_ID \"", "cp My_Client.keytab /var/kerberos/krb5/user/ My_User_ID /client.keytab", "chown -R foreman-proxy:foreman-proxy \"/var/kerberos/krb5/user/ My_User_ID \"", "chmod -wx \"/var/kerberos/krb5/user/ My_User_ID /client.keytab\"", "restorecon -RvF /var/kerberos/krb5", "satellite-installer --foreman-proxy-plugin-remote-execution-script-ssh-kerberos-auth true", "hostgroup_fullname ~ \" My_Host_Group *\"", "hammer settings set --name=remote_execution_global_proxy --value=false", "hammer job-template list", "hammer job-template info --id My_Template_ID", "hammer job-invocation create --inputs My_Key_1 =\" My_Value_1 \", My_Key_2 =\" My_Value_2 \",... 
--job-template \" My_Template_Name \" --search-query \" My_Search_Query \"", "hammer job-invocation list", "hammer job-invocation output --host My_Host_Name --id My_Job_ID", "hammer job-invocation cancel --id My_Job_ID", "satellite-installer --foreman-proxy-plugin-remote-execution-script-mqtt-rate-limit MAX_JOBS_NUMBER", "satellite-installer --foreman-proxy-plugin-remote-execution-script-mqtt-rate-limit 200", "systemctl start ansible-callback", "systemctl status ansible-callback", "SAT_host systemd[1]: Started Provisioning callback to Ansible Automation Controller", "curl -k -s --data curl --insecure --data host_config_key= my_config_key https:// controller.example.com /api/v2/job_templates/ 8 /callback/", "<%= render_template 'Package Action - SSH Default', :action => 'install', :package => 'nginx' %> <%= render_template 'Service Action - SSH Default', :action => 'start', :service_name => 'nginx' %>", "<%= render_template 'Package Action - SSH Default', :action => 'install', :package => input(\"package\") %>", "restorecon -RvF <%= input(\"directory\") %>", "<%= render_template(\"Run Command - restorecon\", :directory => \"/home\") %>", "<%= render_template(\"Power Action - SSH Default\", :action => \"restart\") %>" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html-single/managing_configurations_using_ansible_integration/index
Chapter 3. Searches
Chapter 3. Searches 3.1. Performing Searches in Red Hat Virtualization The Administration Portal allows you to manage thousands of resources, such as virtual machines, hosts, users, and more. To perform a search, enter the search query (free-text or syntax-based) into the search bar, available on the main page for each resource. Search queries can be saved as bookmarks for future reuse, so that you do not have to re-enter a query each time you need the same search results. Searches are not case sensitive.
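As an illustrative sketch of the syntax-based form, a query combines a resource type with property criteria; the exact property names available depend on the resource type:
Vms: status = up
Hosts: cluster = Default and status = up
Vms: name = production*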
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/chap-searches
Chapter 3. Managing access to repositories
Chapter 3. Managing access to repositories As a Red Hat Quay user, you can create your own repositories and make them accessible to other users who are part of your instance. Alternatively, you can create a specific Organization to allow access to repositories based on defined teams. In both User and Organization repositories, you can allow access to those repositories by creating credentials associated with Robot Accounts. Robot Accounts make it easy for a variety of container clients, such as Docker or Podman, to access your repositories without requiring that the client have a Red Hat Quay user account. 3.1. Allowing access to user repositories When you create a repository in a user namespace, you can add access to that repository to user accounts or through Robot Accounts. 3.1.1. Allowing user access to a user repository Use the following procedure to allow access to a repository associated with a user account. Procedure Log in to Red Hat Quay with your user account. Select a repository under your user namespace that will be shared across multiple users. Select Settings in the navigation pane. Type the name of the user you want to grant access to your repository. As you type, the name should appear. For example: In the permissions box, select one of the following: Read . Allows the user to view and pull from the repository. Write . Allows the user to view the repository, pull images from the repository, or push images to the repository. Admin . Provides the user with all administrative settings to the repository, as well as all Read and Write permissions. Select the Add Permission button. The user now has the assigned permission. Optional. You can remove or change user permissions to the repository by selecting the Options icon, and then selecting Delete Permission . 3.1.2. Allowing robot access to a user repository Robot Accounts are used to set up automated access to the repositories in your Red Hat Quay registry. They are similar to OpenShift Container Platform service accounts. Setting up a Robot Account results in the following: Credentials are generated that are associated with the Robot Account. Repositories and images that the Robot Account can push images to and pull images from are identified. Generated credentials can be copied and pasted to use with different container clients, such as Docker, Podman, Kubernetes, Mesos, and so on, to access each defined repository. Each Robot Account is limited to a single user namespace or Organization. For example, the Robot Account could provide access to all repositories for the user jsmith . However, it cannot provide access to repositories that are not in the user's list of repositories. Use the following procedure to set up a Robot Account that can allow access to your repositories. Procedure On the Repositories landing page, click the name of a user. Click Robot Accounts in the navigation pane. Click Create Robot Account . Provide a name for your Robot Account. Optional. Provide a description for your Robot Account. Click Create Robot Account . The name of your Robot Account becomes a combination of your username plus the name of the robot, for example, jsmith+robot . Select the repositories that you want the Robot Account to be associated with. Set the permissions of the Robot Account to one of the following: None . The Robot Account has no permission to the repository. Read . The Robot Account can view and pull from the repository. Write . The Robot Account can read (pull) from and write (push) to the repository. Admin .
Full access to pull from, and push to, the repository, plus the ability to do administrative tasks associated with the repository. Click the Add permissions button to apply the settings. On the Robot Accounts page, select the Robot Account to see credential information for that robot. Under the Robot Account option, copy the generated token for the robot by clicking Copy to Clipboard . To generate a new token, you can click Regenerate Token . Note Regenerating a token makes any previous tokens for this robot invalid. Obtain the resulting credentials in the following ways: Kubernetes Secret : Select this to download credentials in the form of a Kubernetes pull secret yaml file. rkt Configuration : Select this to download credentials for the rkt container runtime in the form of a .json file. Docker Login : Select this to copy a full docker login command line that includes the credentials. Docker Configuration : Select this to download a file to use as a Docker config.json file, to permanently store the credentials on your client system. Mesos Credentials : Select this to download a tarball that provides the credentials that can be identified in the URI field of a Mesos configuration file. 3.2. Organization repositories After you have created an Organization, you can associate a set of repositories directly with that Organization. An Organization's repository differs from a basic repository in that the Organization is intended to set up shared repositories through groups of users. In Red Hat Quay, groups of users can be either Teams , that is, sets of users with the same permissions, or individual users . Other useful information about Organizations includes the following: You cannot have an Organization embedded within another Organization. To subdivide an Organization, you use teams. Organizations cannot contain users directly. You must first add a team, and then add one or more users to each team. Note Individual users can be added to specific repositories inside of an organization. Consequently, those users are not members of any team on the Repository Settings page. The Collaborators View on the Teams and Memberships page shows users who have direct access to specific repositories within the organization without needing to be part of that organization specifically. Teams can be set up in Organizations as just members who use the repositories and associated images, or as administrators with special privileges for managing the Organization. 3.2.1. Creating an Organization Use the following procedure to create an Organization. Procedure On the Repositories landing page, click Create New Organization . Under Organization Name , enter a name that is at least 2 characters long and less than 225 characters long. Under Organization Email , enter an email that is different from your account's email. Click Create Organization to finalize creation. 3.2.1.1. Creating another Organization by using the API You can create another Organization by using the API. To do this, you must have created the first Organization by using the UI. You must also have generated an OAuth Access Token. Use the following procedure to create another Organization by using the Red Hat Quay API endpoint. Prerequisites You have already created at least one Organization by using the UI. You have generated an OAuth Access Token. For more information, see "Creating an OAuth Access Token".
Procedure Create a file named data.json by entering the following command: $ touch data.json Add the following content to the file, which will be the name of the new Organization: {"name":"testorg1"} Enter the following command to create the new Organization using the API endpoint, passing in your OAuth Access Token and Red Hat Quay registry endpoint: $ curl -X POST -k -d @data.json -H "Authorization: Bearer <access_token>" -H "Content-Type: application/json" http://<quay-server.example.com>/api/v1/organization/ Example output "Created" 3.2.2. Adding a team to an organization When you create a team for your Organization, you can select the team name, choose which repositories to make available to the team, and decide the level of access to the team. Use the following procedure to create a team for your Organization. Prerequisites You have created an organization. Procedure On the Repositories landing page, select an Organization to add teams to. In the navigation pane, select Teams and Membership . By default, an owners team exists with Admin privileges for the user who created the Organization. Click Create New Team . Enter a name for your new team. Note that the team must start with a lowercase letter. It can also only use lowercase letters and numbers. Capital letters or special characters are not allowed. Click Create team . Click the name of your team to be redirected to the Team page. Here, you can add a description of the team, and add team members, like registered users, robots, or email addresses. For more information, see "Adding users to a team". Click the No repositories text to bring up a list of available repositories. Select the box of each repository you will provide the team access to. Select the appropriate permissions that you want the team to have: None . Team members have no permission to the repository. Read . Team members can view and pull from the repository. Write . Team members can read (pull) from and write (push) to the repository. Admin . Full access to pull from, and push to, the repository, plus the ability to do administrative tasks associated with the repository. Click Add permissions to save the repository permissions for the team. 3.2.3. Setting a Team role After you have added a team, you can set the role of that team within the Organization. Prerequisites You have created a team. Procedure On the Repository landing page, click the name of your Organization. In the navigation pane, click Teams and Membership . Select the TEAM ROLE drop-down menu, as shown in the following figure: For the selected team, choose one of the following roles: Member . Inherits all permissions set for the team. Creator . All member permissions, plus the ability to create new repositories. Admin . Full administrative access to the organization, including the ability to create teams, add members, and set permissions. 3.2.4. Adding users to a Team With administrative privileges to an Organization, you can add users and robot accounts to a team. When you add a user, Red Hat Quay sends an email to that user. The user remains pending until they accept the invitation. Use the following procedure to add users or robot accounts to a team. Procedure On the Repository landing page, click the name of your Organization. In the navigation pane, click Teams and Membership . Select the team you want to add users or robot accounts to. In the Team Members box, enter information for one of the following: A username from an account on the registry.
The email address for a user account on the registry. The name of a robot account. The name must be in the form of <organization_name>+<robot_name>. Note Robot Accounts are immediately added to the team. For user accounts, an invitation to join is mailed to the user. Until the user accepts that invitation, the user remains in the INVITED TO JOIN state. After the user accepts the email invitation to join the team, they move from the INVITED TO JOIN list to the MEMBERS list for the Organization. Additional resources Creating an OAuth Access Token 3.3. Disabling robot accounts Red Hat Quay administrators can manage robot accounts by preventing users from creating new robot accounts. Important Robot accounts are mandatory for repository mirroring. Setting the ROBOTS_DISALLOW configuration field to true breaks mirroring configurations. Users mirroring repositories should not set ROBOTS_DISALLOW to true in their config.yaml file. This is a known issue and will be fixed in a future release of Red Hat Quay. Use the following procedure to disable robot account creation. Prerequisites You have created multiple robot accounts. Procedure Update your config.yaml field to add the ROBOTS_DISALLOW variable, for example: ROBOTS_DISALLOW: true Restart your Red Hat Quay deployment. Verification: Creating a new robot account Navigate to your Red Hat Quay repository. Click the name of a repository. In the navigation pane, click Robot Accounts . Click Create Robot Account . Enter a name for the robot account, for example, <organization-name/username>+<robot-name> . Click Create robot account to confirm creation. The following message appears: Cannot create robot account. Robot accounts have been disabled. Please contact your administrator. Verification: Logging into a robot account On the command-line interface (CLI), attempt to log in as one of the robot accounts by entering the following command: $ podman login -u="<organization-name/username>+<robot-name>" -p="KETJ6VN0WT8YLLNXUJJ4454ZI6TZJ98NV41OE02PC2IQXVXRFQ1EJ36V12345678" <quay-server.example.com> The following error message is returned: Error: logging into "<quay-server.example.com>": invalid username/password You can pass in the --log-level=debug flag to confirm that robot accounts have been deactivated: $ podman login -u="<organization-name/username>+<robot-name>" -p="KETJ6VN0WT8YLLNXUJJ4454ZI6TZJ98NV41OE02PC2IQXVXRFQ1EJ36V12345678" --log-level=debug <quay-server.example.com> ... DEBU[0000] error logging into "quay-server.example.com": unable to retrieve auth token: invalid username/password: unauthorized: Robot accounts have been disabled. Please contact your administrator.
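When robot account creation is allowed, the same login pattern succeeds and the robot can pull from its permitted repositories. The following lines are a hypothetical sketch; the robot name jsmith+myrobot, the repository jsmith/myrepo, and the token value are placeholders rather than values from this procedure:
$ podman login -u="jsmith+myrobot" -p="<robot_token>" quay-server.example.com
$ podman pull quay-server.example.com/jsmith/myrepo:latest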
[ "touch data.json", "{\"name\":\"testorg1\"}", "curl -X POST -k -d @data.json -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" http://<quay-server.example.com>/api/v1/organization/", "\"Created\"", "ROBOTS_DISALLOW: true", "podman login -u=\"<organization-name/username>+<robot-name>\" -p=\"KETJ6VN0WT8YLLNXUJJ4454ZI6TZJ98NV41OE02PC2IQXVXRFQ1EJ36V12345678\" <quay-server.example.com>", "Error: logging into \"<quay-server.example.com>\": invalid username/password", "podman login -u=\"<organization-name/username>+<robot-name>\" -p=\"KETJ6VN0WT8YLLNXUJJ4454ZI6TZJ98NV41OE02PC2IQXVXRFQ1EJ36V12345678\" --log-level=debug <quay-server.example.com>", "DEBU[0000] error logging into \"quay-server.example.com\": unable to retrieve auth token: invalid username/password: unauthorized: Robot accounts have been disabled. Please contact your administrator." ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/use_red_hat_quay/use-quay-manage-repo
1.4. LVM Logical Volumes in a Red Hat High Availability Cluster
1.4. LVM Logical Volumes in a Red Hat High Availability Cluster The Red Hat High Availability Add-On provides support for LVM volumes in two distinct cluster configurations: High availability LVM volumes (HA-LVM) in an active/passive failover configuration in which only a single node of the cluster accesses the storage at any one time. LVM volumes that use the Clustered Logical Volume (CLVM) extensions in an active/active configuration in which more than one node of the cluster requires access to the storage at the same time. CLVM is part of the Resilient Storage Add-On. 1.4.1. Choosing CLVM or HA-LVM The choice between CLVM and HA-LVM should be based on the needs of the applications or services being deployed. If multiple nodes of the cluster require simultaneous read/write access to LVM volumes in an active/active system, then you must use CLVMD. CLVMD provides a system for coordinating activation of and changes to LVM volumes across nodes of a cluster concurrently. CLVMD's clustered-locking service provides protection to LVM metadata as various nodes of the cluster interact with volumes and make changes to their layout. This protection is contingent upon appropriately configuring the volume groups in question, including setting locking_type to 3 in the lvm.conf file and setting the clustered flag on any volume group that will be managed by CLVMD and activated simultaneously across multiple cluster nodes. If the high availability cluster is configured to manage shared resources in an active/passive manner with only one single member needing access to a given LVM volume at a time, then you can use HA-LVM without the CLVMD clustered-locking service. Most applications will run better in an active/passive configuration, as they are not designed or optimized to run concurrently with other instances. Choosing to run an application that is not cluster-aware on clustered logical volumes may result in degraded performance if the logical volume is mirrored. This is because there is cluster communication overhead for the logical volumes themselves in these instances. A cluster-aware application must be able to achieve performance gains above the performance losses introduced by cluster file systems and cluster-aware logical volumes. This is achievable for some applications and workloads more easily than others. Determining what the requirements of the cluster are and whether the extra effort toward optimizing for an active/active cluster will pay dividends is the way to choose between the two LVM variants. Most users will achieve the best HA results from using HA-LVM. HA-LVM and CLVM are similar in that they prevent corruption of LVM metadata and its logical volumes, which could otherwise occur if multiple machines are allowed to make overlapping changes. HA-LVM imposes the restriction that a logical volume can only be activated exclusively; that is, active on only one machine at a time. This means that only local (non-clustered) implementations of the storage drivers are used. Avoiding the cluster coordination overhead in this way increases performance. CLVM does not impose these restrictions and a user is free to activate a logical volume on all machines in a cluster; this forces the use of cluster-aware storage drivers, which allow for cluster-aware file systems and applications to be put on top. 1.4.2. Configuring LVM volumes in a cluster In Red Hat Enterprise Linux 7, clusters are managed through Pacemaker.
Both HA-LVM and CLVM logical volumes are supported only in conjunction with Pacemaker clusters, and must be configured as cluster resources. For a procedure for configuring an HA-LVM volume as part of a Pacemaker cluster, see An active/passive Apache HTTP Server in a Red Hat High Availability Cluster in High Availability Add-On Administration . Note that this procedure includes the following steps: Configuring an LVM logical volume Ensuring that only the cluster is capable of activating the volume group Configuring the LVM volume as a cluster resource For a procedure for configuring a CLVM volume in a cluster, see Configuring a GFS2 File System in a Cluster in Global File System 2 .
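As a minimal sketch of the CLVM prerequisites mentioned above (cluster locking and the clustered flag), assuming a shared device /dev/sdb1 that is visible to all cluster nodes; the volume group names are placeholders for illustration:
# lvmconf --enable-cluster    # sets locking_type = 3 in /etc/lvm/lvm.conf
# vgcreate --clustered y my_cluster_vg /dev/sdb1    # creates a volume group with the clustered flag set
# vgchange -cy my_existing_vg    # sets the clustered flag on an existing volume group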
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/logical_volume_manager_administration/LVM_Cluster_Overview
Chapter 13. Configuring distributed virtual routing (DVR)
Chapter 13. Configuring distributed virtual routing (DVR) 13.1. Understanding distributed virtual routing (DVR) When you deploy Red Hat OpenStack Platform you can choose between a centralized routing model and DVR. Each model has advantages and disadvantages. Use this document to carefully plan whether centralized routing or DVR better suits your needs. New default RHOSP deployments use DVR and the Modular Layer 2 plug-in with the Open Virtual Network mechanism driver (ML2/OVN). DVR is disabled by default in ML2/OVS deployments. 13.1.1. Overview of Layer 3 routing The Red Hat OpenStack Platform Networking service (neutron) provides routing services for project networks. Without a router, VM instances in a project network can communicate with other instances over a shared L2 broadcast domain. Creating a router and assigning it to a project network allows the instances in that network to communicate with other project networks or upstream (if an external gateway is defined for the router). 13.1.2. Routing flows Routing services in Red Hat OpenStack Platform (RHOSP) can be categorized into three main flows: East-West routing - routing of traffic between different networks in the same project. This traffic does not leave the RHOSP deployment. This definition applies to both IPv4 and IPv6 subnets. North-South routing with floating IPs - Floating IP addressing is a one-to-one network address translation (NAT) that can be modified and that floats between VM instances. While floating IPs are modeled as a one-to-one association between the floating IP and a Networking service (neutron) port, they are implemented by association with a Networking service router that performs the NAT translation. The floating IPs themselves are taken from the uplink network that provides the router with external connectivity. As a result, instances can communicate with external resources (such as endpoints on the internet) or the other way around. Floating IPs are an IPv4 concept and do not apply to IPv6. It is assumed that the IPv6 addressing used by projects uses Global Unicast Addresses (GUAs) with no overlap across the projects, and therefore can be routed without NAT. North-South routing without floating IPs (also known as SNAT ) - The Networking service offers a default port address translation (PAT) service for instances that do not have allocated floating IPs. With this service, instances can communicate with external endpoints through the router, but not the other way around. For example, an instance can browse a website on the internet, but a web browser outside cannot browse a website hosted within the instance. SNAT is applied for IPv4 traffic only. In addition, Networking service networks that are assigned GUA prefixes do not require NAT on the Networking service router external gateway port to access the outside world. 13.1.3. Centralized routing Originally, the Networking service (neutron) was designed with a centralized routing model where a project's virtual routers, managed by the neutron L3 agent, are all deployed in a dedicated node or cluster of nodes (referred to as the Network node, or Controller node). This means that each time a routing function is required (east/west, floating IPs or SNAT), traffic would traverse through a dedicated node in the topology. This introduced multiple challenges and resulted in sub-optimal traffic flows.
For example: Traffic between instances flows through a Controller node - when two instances need to communicate with each other using L3, traffic has to hit the Controller node. Even if the instances are scheduled on the same Compute node, traffic still has to leave the Compute node, flow through the Controller, and route back to the Compute node. This negatively impacts performance. Instances with floating IPs receive and send packets through the Controller node - the external network gateway interface is available only at the Controller node, so whether the traffic is originating from an instance, or destined to an instance from the external network, it has to flow through the Controller node. Consequently, in large environments the Controller node is subject to heavy traffic load. This would affect performance and scalability, and also requires careful planning to accommodate enough bandwidth in the external network gateway interface. The same requirement applies for SNAT traffic. To better scale the L3 agent, the Networking service can use the L3 HA feature, which distributes the virtual routers across multiple nodes. In the event that a Controller node is lost, the HA router will fail over to a standby on another node and there will be packet loss until the HA router failover completes. 13.2. DVR overview Distributed Virtual Routing (DVR) offers an alternative routing design. DVR isolates the failure domain of the Controller node and optimizes network traffic by deploying the L3 agent and scheduling routers on every Compute node. DVR has these characteristics: East-West traffic is routed directly on the Compute nodes in a distributed fashion. North-South traffic with floating IP is distributed and routed on the Compute nodes. This requires the external network to be connected to every Compute node. North-South traffic without floating IP is not distributed and still requires a dedicated Controller node. The L3 agent on the Controller node uses the dvr_snat mode so that the node serves only SNAT traffic. The neutron metadata agent is distributed and deployed on all Compute nodes. The metadata proxy service is hosted on all the distributed routers. 13.3. DVR known issues and caveats Support for DVR is limited to the ML2 core plug-in and the Open vSwitch (OVS) mechanism driver or ML2/OVN mechanism driver. Other back ends are not supported. On ML2/OVS DVR deployments, network traffic for the Red Hat OpenStack Platform Load-balancing service (octavia) goes through the Controller and network nodes, instead of the compute nodes. With an ML2/OVS mechanism driver network back end and DVR, it is possible to create VIPs. However, the IP address assigned to a bound port using allowed_address_pairs should match the virtual port IP address (/32). If you use a CIDR format IP address for the bound port allowed_address_pairs instead, port forwarding is not configured in the back end, and traffic fails for any IP in the CIDR expecting to reach the bound IP port. SNAT (source network address translation) traffic is not distributed, even when DVR is enabled. SNAT does work, but all ingress/egress traffic must traverse through the centralized Controller node. In ML2/OVS deployments, IPv6 traffic is not distributed, even when DVR is enabled. All ingress/egress traffic goes through the centralized Controller node. If you use IPv6 routing extensively with ML2/OVS, do not use DVR.
Note that in ML2/OVN deployments, all east/west traffic is always distributed, and north/south traffic is distributed when DVR is configured. In ML2/OVS deployments, DVR is not supported in conjunction with L3 HA. If you use DVR with Red Hat OpenStack Platform 17.0 director, L3 HA is disabled. This means that routers are still scheduled on the Network nodes (and load-shared between the L3 agents), but if one agent fails, all routers hosted by this agent fail as well. This affects only SNAT traffic. The allow_automatic_l3agent_failover feature is recommended in such cases, so that if one network node fails, the routers are rescheduled to a different node. DHCP servers, which are managed by the neutron DHCP agent, are not distributed and are still deployed on the Controller node. The DHCP agent is deployed in a highly available configuration on the Controller nodes, regardless of the routing design (centralized or DVR). Compute nodes require an interface on the external network attached to an external bridge. They use this interface to attach to a VLAN or flat network for an external router gateway, to host floating IPs, and to perform SNAT for VMs that use floating IPs. In ML2/OVS deployments, each Compute node requires one additional IP address. This is due to the implementation of the external gateway port and the floating IP network namespace. VLAN, GRE, and VXLAN are all supported for project data separation. When you use GRE or VXLAN, you must enable the L2 Population feature. The Red Hat OpenStack Platform director enforces L2 Population during installation. 13.4. Supported routing architectures Red Hat OpenStack Platform (RHOSP) supports both centralized, high-availability (HA) routing and distributed virtual routing (DVR) in the RHOSP versions listed: RHOSP centralized HA routing support began in RHOSP 8. RHOSP distributed routing support began in RHOSP 12. 13.5. Migrating centralized routers to distributed routing This section contains information about upgrading to distributed routing for Red Hat OpenStack Platform deployments that use L3 HA centralized routing. Procedure Upgrade your deployment and validate that it is working correctly. Run the director stack update to configure DVR. Confirm that routing functions correctly through the existing routers. You cannot transition an L3 HA router to distributed directly. Instead, for each router, disable the L3 HA option, and then enable the distributed option: Disable the router: Example Clear high availability: Example Configure the router to use DVR: Example Enable the router: Example Confirm that distributed routing functions correctly. Additional resources Deploying DVR with ML2 OVS 13.6. Deploying ML2/OVN OpenStack with distributed virtual routing (DVR) disabled New Red Hat OpenStack Platform (RHOSP) deployments default to the neutron Modular Layer 2 plug-in with the Open Virtual Network mechanism driver (ML2/OVN) and DVR. In a DVR topology, compute nodes with floating IP addresses route traffic between virtual machine instances and the network that provides the router with external connectivity (north-south traffic). Traffic between instances (east-west traffic) is also distributed. You can optionally deploy with DVR disabled. This disables north-south DVR, requiring north-south traffic to traverse a controller or networker node. East-west routing is always distributed in an ML2/OVN deployment, even when DVR is disabled. Prerequisites RHOSP 17.0 distribution ready for customization and deployment.
Procedure Create a custom environment file, and add the following configuration: To apply this configuration, deploy the overcloud, adding your custom environment file to the stack along with your other environment files. For example: 13.6.1. Additional resources Understanding distributed virtual routing (DVR) in the Networking Guide .
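After changing the routing configuration, one quick check is to inspect the relevant flags on a router; this sketch assumes a router named router1, as in the migration procedure above:
$ openstack router show router1 -c distributed -c ha
For a distributed router, the distributed field should report True and the ha field False.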
[ "openstack router set --disable router1", "openstack router set --no-ha router1", "openstack router set --distributed router1", "openstack router set --enable router1", "parameter_defaults: NeutronEnableDVR: false", "(undercloud) USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/<custom-environment-file>.yaml" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/networking_guide/config-dvr_rhosp-network
Chapter 4. AD administration rights
Chapter 4. AD administration rights When you want to establish a trust between AD (Active Directory) and IdM (Identity Management), you must use an AD administrator account with appropriate AD privileges. The AD administrator must belong to one of the following groups: Enterprise Admin group in the AD forest Domain Admins group in the forest root domain for your AD forest Additional resources Enterprise Admins Domain Admins How Domain and Forest Trusts Work
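For example, when you establish the trust from the IdM side, you typically supply the credentials of such an AD administrator account to the ipa trust-add command; the domain and account names below are placeholders:
# ipa trust-add --type=ad ad.example.com --admin Administrator --password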
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/installing_trust_between_idm_and_ad/ad-administration-rights_installing-trust-between-idm-and-ad
Chapter 3. Using system-wide cryptographic policies
Chapter 3. Using system-wide cryptographic policies System-wide cryptographic policies are a system component that configures the core cryptographic subsystems, covering the TLS, IPsec, SSH, DNSSEC, and Kerberos protocols. The component provides a small set of policies, which the administrator can select. 3.1. System-wide cryptographic policies When a system-wide policy is set up, applications in RHEL follow it and refuse to use algorithms and protocols that do not meet the policy, unless you explicitly request the application to do so. That is, the policy applies to the default behavior of applications when running with the system-provided configuration but you can override it if required. RHEL 9 contains the following predefined policies: DEFAULT The default system-wide cryptographic policy level offers secure settings for current threat models. It allows the TLS 1.2 and 1.3 protocols, as well as the IKEv2 and SSH2 protocols. The RSA keys and Diffie-Hellman parameters are accepted if they are at least 2048 bits long. LEGACY Ensures maximum compatibility with Red Hat Enterprise Linux 6 and earlier; it is less secure due to an increased attack surface. SHA-1 is allowed to be used as TLS hash, signature, and algorithm. CBC-mode ciphers are allowed to be used with SSH. Applications using GnuTLS allow certificates signed with SHA-1. It allows the TLS 1.2 and 1.3 protocols, as well as the IKEv2 and SSH2 protocols. The RSA keys and Diffie-Hellman parameters are accepted if they are at least 2048 bits long. FUTURE A stricter forward-looking security level intended for testing a possible future policy. This policy does not allow the use of SHA-1 in DNSSEC or as an HMAC. SHA2-224 and SHA3-224 hashes are rejected. 128-bit ciphers are disabled. CBC-mode ciphers are disabled except in Kerberos. It allows the TLS 1.2 and 1.3 protocols, as well as the IKEv2 and SSH2 protocols. The RSA keys and Diffie-Hellman parameters are accepted if they are at least 3072 bits long. If your system communicates on the public internet, you might face interoperability problems. Important Because a cryptographic key used by a certificate on the Customer Portal API does not meet the requirements of the FUTURE system-wide cryptographic policy, the redhat-support-tool utility does not work with this policy level at the moment. To work around this problem, use the DEFAULT cryptographic policy while connecting to the Customer Portal API. FIPS Conforms with the FIPS 140 requirements. The fips-mode-setup tool, which switches the RHEL system into FIPS mode, uses this policy internally. Switching to the FIPS policy does not guarantee compliance with the FIPS 140 standard. You also must re-generate all cryptographic keys after you set the system to FIPS mode. This is not possible in many scenarios. RHEL also provides the FIPS:OSPP system-wide subpolicy, which contains further restrictions for cryptographic algorithms required by the Common Criteria (CC) certification. The system becomes less interoperable after you set this subpolicy. For example, you cannot use RSA and DH keys shorter than 3072 bits, additional SSH algorithms, and several TLS groups. Setting FIPS:OSPP also prevents connecting to Red Hat Content Delivery Network (CDN) structure. Furthermore, you cannot integrate Active Directory (AD) into the IdM deployments that use FIPS:OSPP , communication between RHEL hosts using FIPS:OSPP and AD domains might not work, or some AD accounts might not be able to authenticate.
Note Your system is not CC-compliant after you set the FIPS:OSPP cryptographic subpolicy. The only correct way to make your RHEL system compliant with the CC standard is by following the guidance provided in the cc-config package. See the Common Criteria section on the Product compliance Red Hat Customer Portal page for a list of certified RHEL versions, validation reports, and links to CC guides. Red Hat continuously adjusts all policy levels so that all libraries provide secure defaults, except when using the LEGACY policy. Even though the LEGACY profile does not provide secure defaults, it does not include any algorithms that are easily exploitable. As such, the set of enabled algorithms or acceptable key sizes in any provided policy may change during the lifetime of Red Hat Enterprise Linux. Such changes reflect new security standards and new security research. If you must ensure interoperability with a specific system for the whole lifetime of Red Hat Enterprise Linux, you should opt out of the system-wide cryptographic policies for components that interact with that system or re-enable specific algorithms using custom cryptographic policies. The specific algorithms and ciphers described as allowed in the policy levels are available only if an application supports them:
Table 3.1. Cipher suites and protocols enabled in the cryptographic policies
                                                LEGACY         DEFAULT        FIPS           FUTURE
  IKEv1                                         no             no             no             no
  3DES                                          no             no             no             no
  RC4                                           no             no             no             no
  DH                                            min. 2048-bit  min. 2048-bit  min. 2048-bit  min. 3072-bit
  RSA                                           min. 2048-bit  min. 2048-bit  min. 2048-bit  min. 3072-bit
  DSA                                           no             no             no             no
  TLS v1.1 and older                            no             no             no             no
  TLS v1.2 and newer                            yes            yes            yes            yes
  SHA-1 in digital signatures and certificates  yes            no             no             no
  CBC mode ciphers                              yes            no [a]         no [b]         no [c]
  Symmetric ciphers with keys < 256 bits        yes            yes            yes            no
  [a] CBC ciphers are disabled for SSH
  [b] CBC ciphers are disabled for all protocols except Kerberos
  [c] CBC ciphers are disabled for all protocols except Kerberos
Additional resources crypto-policies(7) and update-crypto-policies(8) man pages on your system Product compliance (Red Hat Customer Portal) 3.2. Changing the system-wide cryptographic policy You can change the system-wide cryptographic policy on your system by using the update-crypto-policies tool and restarting your system. Prerequisites You have root privileges on the system. Procedure Optional: Display the current cryptographic policy: Set the new cryptographic policy: Replace <POLICY> with the policy or subpolicy you want to set, for example FUTURE , LEGACY or FIPS:OSPP . Restart the system: Verification Display the current cryptographic policy: Additional resources For more information on system-wide cryptographic policies, see System-wide cryptographic policies 3.3. Switching the system-wide cryptographic policy to mode compatible with earlier releases The default system-wide cryptographic policy in Red Hat Enterprise Linux 9 does not allow communication using older, insecure protocols. For environments that must remain compatible with Red Hat Enterprise Linux 6 and in some cases also with earlier releases, the less secure LEGACY policy level is available. Warning Switching to the LEGACY policy level results in a less secure system and applications. Procedure To switch the system-wide cryptographic policy to the LEGACY level, enter the following command as root : Additional resources For the list of available cryptographic policy levels, see the update-crypto-policies(8) man page on your system.
For defining custom cryptographic policies, see the Custom Policies section in the update-crypto-policies(8) man page and the Crypto Policy Definition Format section in the crypto-policies(7) man page on your system. 3.4. Re-enabling SHA-1 The use of the SHA-1 algorithm for creating and verifying signatures is restricted in the DEFAULT cryptographic policy. If your scenario requires the use of SHA-1 for verifying existing or third-party cryptographic signatures, you can enable it by applying the SHA1 subpolicy, which RHEL 9 provides by default. Note that it weakens the security of the system. Prerequisites The system uses the DEFAULT system-wide cryptographic policy. Procedure Apply the SHA1 subpolicy to the DEFAULT cryptographic policy: Restart the system: Verification Display the current cryptographic policy: Important Switching to the LEGACY cryptographic policy by using the update-crypto-policies --set LEGACY command also enables SHA-1 for signatures. However, the LEGACY cryptographic policy makes your system much more vulnerable by also enabling other weak cryptographic algorithms. Use this workaround only for scenarios that require the enablement of legacy cryptographic algorithms other than SHA-1 signatures. Additional resources SSH from RHEL 9 to RHEL 6 systems does not work (Red Hat Knowledgebase) Packages signed with SHA-1 cannot be installed or upgraded (Red Hat Knowledgebase) 3.5. Setting up system-wide cryptographic policies in the web console You can set one of the system-wide cryptographic policies and subpolicies directly in the RHEL web console interface. Besides the four predefined system-wide cryptographic policies, you can now also apply the following combinations of policies and subpolicies through the graphical interface: DEFAULT:SHA1 The DEFAULT policy with the SHA-1 algorithm enabled. LEGACY:AD-SUPPORT The LEGACY policy with less secure settings that improve interoperability for Active Directory services. FIPS:OSPP The FIPS policy with further restrictions required by the Common Criteria for Information Technology Security Evaluation standard. Warning Because the FIPS:OSPP system-wide subpolicy contains further restrictions for cryptographic algorithms required by the Common Criteria (CC) certification, the system is less interoperable after you set it. For example, you cannot use RSA and DH keys shorter than 3072 bits, additional SSH algorithms, and several TLS groups. Setting FIPS:OSPP also prevents connecting to Red Hat Content Delivery Network (CDN) structure. Furthermore, you cannot integrate Active Directory (AD) into the IdM deployments that use FIPS:OSPP , communication between RHEL hosts using FIPS:OSPP and AD domains might not work, or some AD accounts might not be able to authenticate. Note that your system is not CC-compliant after you set the FIPS:OSPP cryptographic subpolicy. The only correct way to make your RHEL system compliant with the CC standard is by following the guidance provided in the cc-config package. See the Common Criteria section on the Product compliance Red Hat Customer Portal page for a list of certified RHEL versions, validation reports, and links to CC guides hosted at the National Information Assurance Partnership (NIAP) website. Prerequisites You have installed the RHEL 9 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . You have root privileges or permissions to enter administrative commands with sudo .
Procedure Log in to the RHEL 9 web console. For details, see Logging in to the web console . In the Configuration card of the Overview page, click your current policy value next to Crypto policy . In the Change crypto policy dialog window, click on the policy you want to start using on your system. Click the Apply and reboot button. Verification After the restart, log back in to the web console, and check that the Crypto policy value corresponds to the one you selected. Alternatively, you can enter the update-crypto-policies --show command to display the current system-wide cryptographic policy in your terminal. 3.6. Excluding an application from following system-wide cryptographic policies Preferably, you can customize cryptographic settings used by your application by configuring supported cipher suites and protocols directly in the application. You can also remove a symlink related to your application from the /etc/crypto-policies/back-ends directory and replace it with your customized cryptographic settings. This configuration prevents the use of system-wide cryptographic policies for applications that use the excluded back end. Furthermore, this modification is not supported by Red Hat. 3.6.1. Examples of opting out of the system-wide cryptographic policies wget To customize cryptographic settings used by the wget network downloader, use --secure-protocol and --ciphers options. For example: See the HTTPS (SSL/TLS) Options section of the wget(1) man page for more information. curl To specify ciphers used by the curl tool, use the --ciphers option and provide a colon-separated list of ciphers as a value. For example: See the curl(1) man page for more information. Firefox Even though you cannot opt out of system-wide cryptographic policies in the Firefox web browser, you can further restrict supported ciphers and TLS versions in Firefox's Configuration Editor. Type about:config in the address bar and change the value of the security.tls.version.min option as required. Setting security.tls.version.min to 1 allows TLS 1.0 as the minimum required, setting security.tls.version.min to 2 enables TLS 1.1, and so on. OpenSSH To opt out of the system-wide cryptographic policies for your OpenSSH server, specify the cryptographic policy in a drop-in configuration file located in the /etc/ssh/sshd_config.d/ directory, with a two-digit number prefix smaller than 50, so that it lexicographically precedes the 50-redhat.conf file, and with a .conf suffix, for example, 49-crypto-policy-override.conf . See the sshd_config(5) man page for more information. To opt out of system-wide cryptographic policies for your OpenSSH client, perform one of the following tasks: For a given user, override the global ssh_config with a user-specific configuration in the ~/.ssh/config file. For the entire system, specify the cryptographic policy in a drop-in configuration file located in the /etc/ssh/ssh_config.d/ directory, with a two-digit number prefix smaller than 50, so that it lexicographically precedes the 50-redhat.conf file, and with a .conf suffix, for example, 49-crypto-policy-override.conf . See the ssh_config(5) man page for more information. Libreswan See the Configuring IPsec connections that opt out of the system-wide crypto policies in the Securing networks document for detailed information. Additional resources update-crypto-policies(8) man page on your system 3.7. Customizing system-wide cryptographic policies with subpolicies Use this procedure to adjust the set of enabled cryptographic algorithms or protocols.
You can either apply custom subpolicies on top of an existing system-wide cryptographic policy or define such a policy from scratch. The concept of scoped policies allows enabling different sets of algorithms for different back ends. You can limit each configuration directive to specific protocols, libraries, or services. Furthermore, directives can use asterisks for specifying multiple values using wildcards. The /etc/crypto-policies/state/CURRENT.pol file lists all settings in the currently applied system-wide cryptographic policy after wildcard expansion. To make your cryptographic policy more strict, consider using values listed in the /usr/share/crypto-policies/policies/FUTURE.pol file. You can find example subpolicies in the /usr/share/crypto-policies/policies/modules/ directory. The subpolicy files in this directory also contain descriptions in commented-out lines. Procedure Change to the /etc/crypto-policies/policies/modules/ directory: Create subpolicies for your adjustments, for example: Important Use upper-case letters in file names of policy modules. Open the policy modules in a text editor of your choice and insert options that modify the system-wide cryptographic policy, for example: Save the changes in the module files. Apply your policy adjustments to the DEFAULT system-wide cryptographic policy level: To make your cryptographic settings effective for already running services and applications, restart the system: Verification Check that the /etc/crypto-policies/state/CURRENT.pol file contains your changes, for example: Additional resources Custom Policies section in the update-crypto-policies(8) man page on your system Crypto Policy Definition Format section in the crypto-policies(7) man page on your system How to customize crypto policies in RHEL 8.2 Red Hat blog article 3.8. Creating and setting a custom system-wide cryptographic policy For specific scenarios, you can customize the system-wide cryptographic policy by creating and using a complete policy file. Procedure Create a policy file for your customizations: Alternatively, start by copying one of the four predefined policy levels: Edit the file with your custom cryptographic policy in a text editor of your choice to fit your requirements, for example: Switch the system-wide cryptographic policy to your custom level: To make your cryptographic settings effective for already running services and applications, restart the system: Additional resources Custom Policies section in the update-crypto-policies(8) man page and the Crypto Policy Definition Format section in the crypto-policies(7) man page on your system How to customize crypto policies in RHEL Red Hat blog article 3.9. Enhancing security with the FUTURE cryptographic policy using the crypto_policies RHEL system role You can use the crypto_policies RHEL system role to configure the FUTURE policy on your managed nodes. This policy helps you achieve, for example: Future-proofing against emerging threats: anticipates advancements in computational power. Enhanced security: stronger encryption standards require longer key lengths and more secure algorithms. Compliance with high-security standards: for example in healthcare, telco, and finance the data sensitivity is high, and availability of strong cryptography is critical. Typically, FUTURE is suitable for environments handling highly sensitive data, preparing for future regulations, or adopting long-term security strategies.
Warning Legacy systems or software might not support the more modern and stricter algorithms and protocols enforced by the FUTURE policy. For example, older systems might not support TLS 1.3 or larger key sizes. This could lead to compatibility problems. Also, using strong algorithms usually increases the computational workload, which could negatively affect your system performance. Prerequisites You have prepared the control node and the managed nodes. You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example ~/playbook.yml , with the following content:
---
- name: Configure cryptographic policies
  hosts: managed-node-01.example.com
  tasks:
    - name: Configure the FUTURE cryptographic security policy on the managed node
      ansible.builtin.include_role:
        name: rhel-system-roles.crypto_policies
      vars:
        - crypto_policies_policy: FUTURE
        - crypto_policies_reboot_ok: true
The settings specified in the example playbook include the following: crypto_policies_policy: FUTURE Configures the required cryptographic policy ( FUTURE ) on the managed node. It can be either the base policy or a base policy with some sub-policies. The specified base policy and sub-policies have to be available on the managed node. The default value is null . It means that the configuration is not changed and the crypto_policies RHEL system role will only collect the Ansible facts. crypto_policies_reboot_ok: true Causes the system to reboot after the cryptographic policy change to make sure all of the services and applications will read the new configuration files. The default value is false . For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.crypto_policies/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Warning Because the FIPS:OSPP system-wide subpolicy contains further restrictions for cryptographic algorithms required by the Common Criteria (CC) certification, the system is less interoperable after you set it. For example, you cannot use RSA and DH keys shorter than 3072 bits, additional SSH algorithms, and several TLS groups. Setting FIPS:OSPP also prevents connecting to Red Hat Content Delivery Network (CDN) structure. Furthermore, you cannot integrate Active Directory (AD) into the IdM deployments that use FIPS:OSPP , communication between RHEL hosts using FIPS:OSPP and AD domains might not work, or some AD accounts might not be able to authenticate. Note that your system is not CC-compliant after you set the FIPS:OSPP cryptographic subpolicy. The only correct way to make your RHEL system compliant with the CC standard is by following the guidance provided in the cc-config package. See the Common Criteria section on the Product compliance Red Hat Customer Portal page for a list of certified RHEL versions, validation reports, and links to CC guides hosted at the National Information Assurance Partnership (NIAP) website.
Verification On the control node, create another playbook named, for example, verify_playbook.yml :
---
- name: Verification
  hosts: managed-node-01.example.com
  tasks:
    - name: Verify active cryptographic policy
      ansible.builtin.include_role:
        name: rhel-system-roles.crypto_policies
    - name: Display the currently active cryptographic policy
      ansible.builtin.debug:
        var: crypto_policies_active
The settings specified in the example playbook include the following: crypto_policies_active An exported Ansible fact that contains the currently active policy name in the format as accepted by the crypto_policies_policy variable. Validate the playbook syntax: Run the playbook: The crypto_policies_active variable shows the active policy on the managed node. Additional resources /usr/share/ansible/roles/rhel-system-roles.crypto_policies/README.md file /usr/share/doc/rhel-system-roles/crypto_policies/ directory update-crypto-policies(8) and crypto-policies(7) manual pages
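As an alternative to the verification playbook, an ad hoc Ansible command can query the active policy directly; this sketch assumes the same inventory host name as above:
$ ansible managed-node-01.example.com -m ansible.builtin.command -a 'update-crypto-policies --show'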
[ "update-crypto-policies --show DEFAULT", "update-crypto-policies --set <POLICY> <POLICY>", "reboot", "update-crypto-policies --show <POLICY>", "update-crypto-policies --set LEGACY Setting system policy to LEGACY", "update-crypto-policies --set DEFAULT:SHA1 Setting system policy to DEFAULT:SHA1 Note: System-wide crypto policies are applied on application start-up. It is recommended to restart the system for the change of policies to fully take place.", "reboot", "update-crypto-policies --show DEFAULT:SHA1", "wget --secure-protocol= TLSv1_1 --ciphers=\" SECURE128 \" https://example.com", "curl https://example.com --ciphers '@SECLEVEL=0:DES-CBC3-SHA:RSA-DES-CBC3-SHA'", "cd /etc/crypto-policies/policies/modules/", "touch MYCRYPTO-1 .pmod touch SCOPES-AND-WILDCARDS .pmod", "vi MYCRYPTO-1 .pmod", "min_rsa_size = 3072 hash = SHA2-384 SHA2-512 SHA3-384 SHA3-512", "vi SCOPES-AND-WILDCARDS .pmod", "Disable the AES-128 cipher, all modes cipher = -AES-128-* Disable CHACHA20-POLY1305 for the TLS protocol (OpenSSL, GnuTLS, NSS, and OpenJDK) cipher@TLS = -CHACHA20-POLY1305 Allow using the FFDHE-1024 group with the SSH protocol (libssh and OpenSSH) group@SSH = FFDHE-1024+ Disable all CBC mode ciphers for the SSH protocol (libssh and OpenSSH) cipher@SSH = -*-CBC Allow the AES-256-CBC cipher in applications using libssh cipher@libssh = AES-256-CBC+", "update-crypto-policies --set DEFAULT: MYCRYPTO-1 : SCOPES-AND-WILDCARDS", "reboot", "cat /etc/crypto-policies/state/CURRENT.pol | grep rsa_size min_rsa_size = 3072", "cd /etc/crypto-policies/policies/ touch MYPOLICY .pol", "cp /usr/share/crypto-policies/policies/ DEFAULT .pol /etc/crypto-policies/policies/ MYPOLICY .pol", "vi /etc/crypto-policies/policies/ MYPOLICY .pol", "update-crypto-policies --set MYPOLICY", "reboot", "--- - name: Configure cryptographic policies hosts: managed-node-01.example.com tasks: - name: Configure the FUTURE cryptographic security policy on the managed node ansible.builtin.include_role: name: rhel-system-roles.crypto_policies vars: - crypto_policies_policy: FUTURE - crypto_policies_reboot_ok: true", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "--- - name: Verification hosts: managed-node-01.example.com tasks: - name: Verify active cryptographic policy ansible.builtin.include_role: name: rhel-system-roles.crypto_policies - name: Display the currently active cryptographic policy ansible.builtin.debug: var: crypto_policies_active", "ansible-playbook --syntax-check ~/verify_playbook.yml", "ansible-playbook ~/verify_playbook.yml TASK [debug] ************************** ok: [host] => { \"crypto_policies_active\": \"FUTURE\" }" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/security_hardening/using-the-system-wide-cryptographic-policies_security-hardening
Chapter 8. Deprecated functionality
Chapter 8. Deprecated functionality

This part provides an overview of functionality that has been deprecated in Red Hat Enterprise Linux 8.

Deprecated functionality is fully supported, which means that it is tested and maintained, and its support status remains unchanged within Red Hat Enterprise Linux 8. However, deprecated functionality will likely not be supported in the next major version release, and it is not recommended for new deployments on the current or future major versions of RHEL. For the most recent list of deprecated functionality within a particular major release, see the latest version of release documentation. For information about the length of support, see Red Hat Enterprise Linux Life Cycle and Red Hat Enterprise Linux Application Streams Life Cycle .

A package can be deprecated and not recommended for further use. Under certain circumstances, a package can be removed from the product. Product documentation then identifies more recent packages that offer functionality similar, identical, or more advanced to that of the deprecated package, and provides further recommendations.

For information regarding functionality that is present in RHEL 7 but has been removed in RHEL 8, see Considerations in adopting RHEL 8 . For information regarding functionality that is present in RHEL 8 but has been removed in RHEL 9, see Considerations in adopting RHEL 9 .

8.1. Installer and image creation

Several Kickstart commands and options have been deprecated
Using the following commands and options in RHEL 8 Kickstart files will print a warning in the logs:
auth or authconfig
device
deviceprobe
dmraid
install
lilo
lilocheck
mouse
multipath
bootloader --upgrade
ignoredisk --interactive
partition --active
reboot --kexec
Where only specific options are listed, the base command and its other options are still available and not deprecated. For more details and related changes in Kickstart, see the Kickstart changes section of the Considerations in adopting RHEL 8 document.
Bugzilla:1642765 [1]

The --interactive option of the ignoredisk Kickstart command has been deprecated
Using the --interactive option in future releases of Red Hat Enterprise Linux will result in a fatal installation error. It is recommended that you modify your Kickstart file to remove the option.
Bugzilla:1637872 [1]

The Kickstart autostep command has been deprecated
The autostep command has been deprecated. The related section about this command has been removed from the RHEL 8 documentation .
Bugzilla:1904251 [1]

8.2. Security

NSS SEED ciphers are deprecated
The Mozilla Network Security Services ( NSS ) library will not support TLS cipher suites that use a SEED cipher in a future release. To ensure a smooth transition for deployments that rely on SEED ciphers before NSS removes support, Red Hat recommends enabling support for other cipher suites. Note that SEED ciphers are already disabled by default in RHEL.
Bugzilla:1817533

TLS 1.0 and TLS 1.1 are deprecated
The TLS 1.0 and TLS 1.1 protocols are disabled in the DEFAULT system-wide cryptographic policy level. If your scenario, for example, a video conferencing application in the Firefox web browser, requires using the deprecated protocols, switch the system-wide cryptographic policy to the LEGACY level:
update-crypto-policies --set LEGACY
For more information, see the Strong crypto defaults in RHEL 8 and deprecation of weak crypto algorithms Knowledgebase article on the Red Hat Customer Portal and the update-crypto-policies(8) man page.
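For example, a minimal sketch of the full workflow with the update-crypto-policies tool: check the active policy, switch it, and reboot so that all running services pick up the change:

update-crypto-policies --show
DEFAULT
update-crypto-policies --set LEGACY
Setting system policy to LEGACY
reboot

After the reboot, update-crypto-policies --show reports LEGACY. The same workflow applies when switching back to DEFAULT.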
Bugzilla:1660839

DSA is deprecated in RHEL 8
The Digital Signature Algorithm (DSA) is considered deprecated in Red Hat Enterprise Linux 8. Authentication mechanisms that depend on DSA keys do not work in the default configuration. Note that OpenSSH clients do not accept DSA host keys even in the LEGACY system-wide cryptographic policy level.
Bugzilla:1646541 [1]

fapolicyd.rules is deprecated
The /etc/fapolicyd/rules.d/ directory for files containing allow and deny execution rules replaces the /etc/fapolicyd/fapolicyd.rules file. The fagenrules script now merges all component rule files in this directory to the /etc/fapolicyd/compiled.rules file. Rules in /etc/fapolicyd/fapolicyd.trust are still processed by the fapolicyd framework but only for ensuring backward compatibility.
Bugzilla:2054741

SSL2 Client Hello has been deprecated in NSS
The Transport Layer Security ( TLS ) protocol version 1.2 and earlier allow starting a negotiation with a Client Hello message formatted in a way that is backward compatible with the Secure Sockets Layer ( SSL ) protocol version 2. Support for this feature in the Network Security Services ( NSS ) library has been deprecated and it is disabled by default. Applications that require support for this feature need to use the new SSL_ENABLE_V2_COMPATIBLE_HELLO API to enable it. Support for this feature might be removed completely in future releases of Red Hat Enterprise Linux 8.
Bugzilla:1645153 [1]

Runtime disabling SELinux using /etc/selinux/config is now deprecated
Disabling SELinux at runtime using the SELINUX=disabled option in the /etc/selinux/config file has been deprecated. In RHEL 9, when you disable SELinux only through /etc/selinux/config , the system starts with SELinux enabled but with no policy loaded. If your scenario really requires completely disabling SELinux, Red Hat recommends disabling SELinux by adding the selinux=0 parameter to the kernel command line as described in the Changing SELinux modes at boot time section of the Using SELinux title.
Bugzilla:1932222

The ipa SELinux module removed from selinux-policy
The ipa SELinux module has been removed from the selinux-policy package because it is no longer maintained. The functionality is now included in the ipa-selinux subpackage. If your scenario requires the use of types or interfaces from the ipa module in a local SELinux policy, install the ipa-selinux package.
Bugzilla:1461914 [1]

TPM 1.2 is deprecated
The Trusted Platform Module (TPM) secure cryptoprocessor standard was updated to version 2.0 in 2016. TPM 2.0 provides many improvements over TPM 1.2, and it is not backward compatible with the previous version. TPM 1.2 is deprecated in RHEL 8, and it might be removed in the next major release.
Bugzilla:1657927 [1]

crypto-policies derived properties are now deprecated
With the introduction of scopes for crypto-policies directives in custom policies, the following derived properties have been deprecated: tls_cipher , ssh_cipher , ssh_group , ike_protocol , and sha1_in_dnssec . Additionally, the use of the protocol property without specifying a scope is now deprecated as well. See the crypto-policies(7) man page for recommended replacements.
Bugzilla:2011208
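As an illustration of the scoped syntax that replaces the deprecated derived properties, a custom policy module can restrict a directive to a single protocol scope. A minimal sketch, where MYMODULE is a hypothetical module name:

/etc/crypto-policies/policies/modules/MYMODULE.pmod:
cipher@TLS = -CHACHA20-POLY1305

update-crypto-policies --set DEFAULT:MYMODULE
reboot

The cipher@TLS directive affects only TLS back ends, which is what the derived tls_cipher property previously approximated.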
RHEL 8 and 9 OpenSSL certificate and signing containers are now deprecated
The OpenSSL portable certificate and signing containers available in the ubi8/openssl and ubi9/openssl repositories in the Red Hat Ecosystem Catalog are now deprecated due to low demand.
Jira:RHELDOCS-17974 [1]

8.3. Subscription management

The deprecated --token option of subscription-manager register will stop working at the end of November 2024
The deprecated --token=<TOKEN> option of the subscription-manager register command will no longer be a supported authentication method from the end of November 2024. The default entitlement server, subscription.rhsm.redhat.com , will no longer allow token-based authentication. As a consequence, if you use subscription-manager register --token=<TOKEN> , the registration will fail with an error message. To register your system, use other supported authorization methods, such as the paired options --username / --password or --org / --activationkey with the subscription-manager register command.
Bugzilla:2170082

8.4. Software management

rpmbuild --sign is deprecated
The rpmbuild --sign command is deprecated since RHEL 8.1. Using this command in future releases of Red Hat Enterprise Linux can result in an error. It is recommended that you use the rpmsign command instead.
Bugzilla:1688849

8.5. Shells and command-line tools

Setting the TMPDIR variable in the ReaR configuration file is deprecated
Setting the TMPDIR environment variable in the /etc/rear/local.conf or /etc/rear/site.conf ReaR configuration file, by using a statement such as export TMPDIR=... , is deprecated. To specify a custom directory for ReaR temporary files, export the variable in the shell environment before executing ReaR. For example, enter the export TMPDIR=... statement and then enter the rear command in the same shell session or script.
Jira:RHELDOCS-18049 [1]

The OpenEXR component has been deprecated
The OpenEXR component has been deprecated. Hence, the support for the EXR image format has been dropped from the imagecodecs module.
Bugzilla:1886310

The dump utility from the dump package has been deprecated
The dump utility used for backup of file systems has been deprecated and will not be available in RHEL 9. In RHEL 9, Red Hat recommends using the tar , dd , or bacula backup utility, based on the type of usage; these provide full and safe backups on ext2, ext3, and ext4 file systems. Note that the restore utility from the dump package remains available and supported in RHEL 9 and is available as the restore package.
Bugzilla:1997366 [1]

The hidepid=n mount option is not supported in RHEL 8 systemd
The mount option hidepid=n , which controls who can access information in /proc/[pid] directories, is not compatible with the systemd infrastructure provided in RHEL 8. In addition, using this option might cause certain services started by systemd to produce SELinux AVC denial messages and prevent other operations from completing. For more information, see the related Knowledgebase solution Is mounting /proc with "hidepid=2" recommended with RHEL7 and RHEL8? .
Bugzilla:2038929

The /usr/lib/udev/rename_device utility has been deprecated
The udev helper utility /usr/lib/udev/rename_device for renaming network interfaces has been deprecated.
Bugzilla:1875485

The ABRT tool has been deprecated
The Automatic Bug Reporting Tool (ABRT) for detecting and reporting application crashes has been deprecated in RHEL 8. As a replacement, use the systemd-coredump tool to log and store core dumps, which are automatically generated files after a program crashes.
Bugzilla:2055826 [1]
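As a sketch of the replacement workflow, systemd-coredump stores core dumps automatically, and the coredumpctl utility retrieves them; the PID shown is a placeholder:

coredumpctl list
coredumpctl info <PID>

The coredumpctl(1) man page describes further verbs, such as dumping the core file to disk for analysis in a debugger.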
The ReaR crontab has been deprecated
The /etc/cron.d/rear crontab from the rear package has been deprecated in RHEL 8 and will not be available in RHEL 9. The crontab checks every night whether the disk layout has changed, and runs the rear mkrescue command if a change happened. If you require this functionality, after an upgrade to RHEL 9, configure periodic runs of ReaR manually.
Bugzilla:2083301

The SQLite database backend in Bacula has been deprecated
The Bacula backup system supports multiple database backends: PostgreSQL, MySQL, and SQLite. The SQLite backend has been deprecated and will become unsupported in a later release of RHEL. As a replacement, migrate to one of the other backends (PostgreSQL or MySQL) and do not use the SQLite backend in new deployments.
Jira:RHEL-6859

The raw command has been deprecated
The raw ( /usr/bin/raw ) command has been deprecated. Using this command in future releases of Red Hat Enterprise Linux can result in an error.
Jira:RHELPLAN-133171 [1]

8.6. Infrastructure services

The geoipupdate package has been deprecated
The geoipupdate package requires a third-party subscription and it also downloads proprietary content. Therefore, the geoipupdate package has been deprecated, and will be removed in the next major RHEL version.
Bugzilla:1874892 [1]

8.7. Networking

Network scripts are deprecated in RHEL 8
Network scripts are deprecated in Red Hat Enterprise Linux 8 and they are no longer provided by default. The basic installation provides a new version of the ifup and ifdown scripts which call the NetworkManager service through the nmcli tool. In Red Hat Enterprise Linux 8, to run the ifup and the ifdown scripts, NetworkManager must be running. Note that custom commands in /sbin/ifup-local , ifdown-pre-local and ifdown-local scripts are not executed. If any of these scripts are required, the installation of the deprecated network scripts in the system is still possible with the following command:
yum install network-scripts
The ifup and ifdown scripts link to the installed legacy network scripts. Calling the legacy network scripts shows a warning about their deprecation.
Bugzilla:1647725 [1]

The dropwatch tool is deprecated
The dropwatch tool has been deprecated. The tool will not be supported in future releases, and is therefore not recommended for new deployments. As a replacement for this package, Red Hat recommends using the perf command line tool. For more information on using the perf command line tool, see the Getting started with Perf section on the Red Hat customer portal or the perf man page.
Bugzilla:1929173

The xinetd service has been deprecated
The xinetd service has been deprecated and will be removed in RHEL 9. As a replacement, use systemd . For further details, see How to convert xinetd service to systemd .
Bugzilla:2009113 [1]

The cgdcbxd package is deprecated
Control group data center bridging exchange daemon ( cgdcbxd ) is a service to monitor data center bridging (DCB) netlink events and manage the net_prio control group subsystem. Starting with RHEL 8.5, the cgdcbxd package is deprecated and will be removed in the next major RHEL release.
Bugzilla:2006665

The WEP Wi-Fi connection method is deprecated
The insecure wired equivalent privacy (WEP) Wi-Fi connection method is deprecated in RHEL 8 and will be removed in RHEL 9.0. For secure Wi-Fi connections, use the Wi-Fi Protected Access 3 (WPA3) or WPA2 connection methods.
Bugzilla:2029338
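For example, instead of creating a WEP profile, you can let NetworkManager set up a WPA2/WPA3-protected connection with nmcli ; a minimal sketch where the SSID and passphrase are placeholders:

nmcli device wifi connect <SSID> password <passphrase>

NetworkManager selects the appropriate WPA security settings based on what the access point advertises.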
The unsupported xt_u32 module is now deprecated
Using the unsupported xt_u32 module, users of iptables can match arbitrary 32 bits in the packet header or payload. Since RHEL 8.6, the xt_u32 module is deprecated and will be removed in RHEL 9. If you use xt_u32 , migrate to the nftables packet filtering framework. For example, first change your firewall to use iptables with native matches to incrementally replace individual rules, and later use the iptables-translate and accompanying utilities to migrate to nftables . If no native match exists in nftables , use the raw payload matching feature of nftables . For details, see the raw payload expression section in the nft(8) man page.
Bugzilla:2061288

8.8. Kernel

The rdma_rxe Soft-RoCE driver is deprecated
Software Remote Direct Memory Access over Converged Ethernet (Soft-RoCE), also known as RXE, is a feature that emulates Remote Direct Memory Access (RDMA). In RHEL 8, the Soft-RoCE feature is available as a Technology Preview. Furthermore, due to stability issues, this feature has been deprecated and will be removed in RHEL 9.
Bugzilla:1878207 [1]

The Linux firewire sub-system and its associated user-space components are deprecated in RHEL 8
The firewire sub-system provides interfaces to use and maintain any resources on the IEEE 1394 bus. In RHEL 9, firewire will no longer be supported in the kernel package. Note that firewire contains several user-space components provided by the libavc1394 , libdc1394 , and libraw1394 packages. These packages are subject to the deprecation as well.
Bugzilla:1871863 [1]

Installing RHEL for Real Time 8 using diskless boot is now deprecated
Diskless booting allows multiple systems to share a root file system through the network. While convenient, diskless boot is prone to introducing network latency in real-time workloads. With the 8.3 minor update of RHEL for Real Time 8, the diskless booting feature is no longer supported.
Bugzilla:1748980

Kernel live patching now covers all RHEL minor releases
Since RHEL 8.1, kernel live patches have been provided for selected minor release streams of RHEL covered under the Extended Update Support (EUS) policy to remediate Critical and Important Common Vulnerabilities and Exposures (CVEs). To accommodate the maximum number of concurrently covered kernels and use cases, the support window for each live patch has been decreased from 12 to 6 months for every minor, major, and zStream version of the kernel. This means that on the day a kernel live patch is released, it will cover every minor release and scheduled errata kernel delivered in the past 6 months. For more information about this feature, see Applying patches with kernel live patching . For details about available kernel live patches, see Kernel Live Patch life cycles .
Bugzilla:1958250
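For example, a minimal sketch of subscribing a system to live patches for its running kernel, based on the kpatch tooling described in the linked documentation: install the live patch package matching the running kernel version, then list the loaded and installed patch modules:

yum install "kpatch-patch = $(uname -r)"
kpatch list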
The crash-ptdump-command package is deprecated
The crash-ptdump-command package, which is a ptdump extension module for the crash utility, is deprecated and might not be available in future RHEL releases. The ptdump command fails to retrieve the log buffer when working in the Single Range Output mode and only works in the Table of Physical Addresses (ToPA) mode. crash-ptdump-command is currently not maintained upstream.
Bugzilla:1838927 [1]

8.9. Boot loader

The kernelopts environment variable has been deprecated
In RHEL 8, the kernel command-line parameters for systems using the GRUB boot loader were defined in the kernelopts environment variable. The variable was stored in the /boot/grub2/grubenv file for each kernel boot entry. However, storing the kernel command-line parameters using kernelopts was not robust. Therefore, with a future major update of RHEL, kernelopts will be removed and the kernel command-line parameters will be stored in the Boot Loader Specification (BLS) snippet instead.
Bugzilla:2060759

8.10. File systems and storage

Resilient Storage Add-On has been deprecated
The Red Hat Enterprise Linux (RHEL) Resilient Storage Add-On has been deprecated. It will no longer be supported in Red Hat Enterprise Linux 10 or any subsequent release. The Resilient Storage Add-On continues to be supported in earlier versions of RHEL (7, 8, and 9) throughout their respective maintenance support life cycles.
Jira:RHELDOCS-19027

The elevator kernel command line parameter is deprecated
The elevator kernel command line parameter was used in earlier RHEL releases to set the disk scheduler for all devices. In RHEL 8, the parameter is deprecated. The upstream Linux kernel has removed support for the elevator parameter, but it is still available in RHEL 8 for compatibility reasons. Note that the kernel selects a default disk scheduler based on the type of device. This is typically the optimal setting. If you require a different scheduler, Red Hat recommends that you use udev rules or the TuneD service to configure it. Match the selected devices and switch the scheduler only for those devices. For more information, see Setting the disk scheduler .
Bugzilla:1665295 [1]

NFSv3 over UDP has been disabled
The NFS server no longer opens or listens on a User Datagram Protocol (UDP) socket by default. This change affects only NFS version 3 because version 4 requires the Transmission Control Protocol (TCP). NFS over UDP is no longer supported in RHEL 8.
Bugzilla:1592011 [1]

peripety is deprecated
The peripety package is deprecated since RHEL 8.3. The Peripety storage event notification daemon parses system storage logs into structured storage events. It helps you investigate storage issues.
Bugzilla:1871953

VDO write modes other than async are deprecated
VDO supports several write modes in RHEL 8: sync , async , async-unsafe , and auto . Starting with RHEL 8.4, the following write modes are deprecated:
sync - Devices above the VDO layer cannot recognize if VDO is synchronous, and consequently, the devices cannot take advantage of the VDO sync mode.
async-unsafe - VDO added this write mode as a workaround for the reduced performance of async mode, which complies with Atomicity, Consistency, Isolation, and Durability (ACID). Red Hat does not recommend async-unsafe for most use cases and is not aware of any users who rely on it.
auto - This write mode only selects one of the other write modes. It is no longer necessary when VDO supports only a single write mode.
These write modes will be removed in a future major RHEL release. The recommended VDO write mode is now async . For more information on VDO write modes, see Selecting a VDO write mode .
Jira:RHELPLAN-70700 [1]

VDO manager has been deprecated
The python-based VDO management software has been deprecated and will be removed from RHEL 9. In RHEL 9, it will be replaced by the LVM-VDO integration. Therefore, it is recommended to create VDO volumes using the lvcreate command. The existing volumes created using the VDO management software can be converted using the /usr/sbin/lvm_import_vdo script, provided by the lvm2 package. For more information on the LVM-VDO implementation, see Deduplicating and compressing logical volumes on RHEL .
Bugzilla:1949163
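A minimal sketch of the recommended LVM-VDO workflow; the volume group vg00, the names, and the sizes are placeholders. The first command creates a VDO logical volume with 10 GiB of physical storage presenting a 50 GiB virtual size; the second converts a volume created with the deprecated VDO manager:

lvcreate --type vdo --name vdo0 --size 10G --virtualsize 50G vg00
/usr/sbin/lvm_import_vdo /dev/mapper/<existing_vdo_volume>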
cramfs has been deprecated
Due to lack of users, the cramfs kernel module is deprecated. squashfs is recommended as an alternative solution.
Bugzilla:1794513 [1]

8.11. High availability and clusters

pcs commands that support the clufter tool have been deprecated
The pcs commands that support the clufter tool for analyzing cluster configuration formats have been deprecated. These commands now print a warning that the command has been deprecated and sections related to these commands have been removed from the pcs help display and the pcs(8) man page. The following commands have been deprecated:
pcs config import-cman for importing CMAN / RHEL6 HA cluster configuration
pcs config export for exporting cluster configuration to a list of pcs commands which re-create the same cluster
Bugzilla:1851335 [1]

8.12. Dynamic programming languages, web and database servers

The mod_php module provided with PHP for use with the Apache HTTP Server has been deprecated
The mod_php module provided with PHP for use with the Apache HTTP Server in RHEL 8 is available but not enabled in the default configuration. The module is no longer available in RHEL 9. Since RHEL 8, PHP scripts are run using the FastCGI Process Manager ( php-fpm ) by default. For more information, see Using PHP with the Apache HTTP Server .
Bugzilla:2225332

8.13. Compilers and development tools

The gdb.i686 packages are deprecated
In RHEL 8.1, the 32-bit versions of the GNU Debugger (GDB), gdb.i686 , were shipped due to a dependency problem in another package. Because RHEL 8 does not support 32-bit hardware, the gdb.i686 packages are deprecated since RHEL 8.4. The 64-bit versions of GDB, gdb.x86_64 , are fully capable of debugging 32-bit applications. If you use gdb.i686 , note the following important issues:
The gdb.i686 packages will no longer be updated. Users must install gdb.x86_64 instead.
If you have gdb.i686 installed, installing gdb.x86_64 will cause yum to report package gdb-8.2-14.el8.x86_64 obsoletes gdb < 8.2-14.el8 provided by gdb-8.2-12.el8.i686 . This is expected. Either uninstall gdb.i686 or pass dnf the --allowerasing option to remove gdb.i686 and install gdb.x86_64 .
Users will no longer be able to install the gdb.i686 packages on 64-bit systems, that is, those with the libc.so.6()(64-bit) packages.
Bugzilla:1853140 [1]

libdwarf has been deprecated
The libdwarf library has been deprecated in RHEL 8. The library will likely not be supported in future major releases. Instead, use the elfutils and libdw libraries for applications that wish to process ELF/DWARF files. Alternatives for the libdwarf-tools dwarfdump program are the binutils readelf program or the elfutils eu-readelf program, both used by passing the --debug-dump flag.
Bugzilla:1920624

8.14. Identity Management

openssh-ldap has been deprecated
The openssh-ldap subpackage has been deprecated in Red Hat Enterprise Linux 8 and will be removed in RHEL 9. As the openssh-ldap subpackage is not maintained upstream, Red Hat recommends using SSSD and the sss_ssh_authorizedkeys helper, which integrate better with other IdM solutions and are more secure. By default, the SSSD ldap and ipa providers read the sshPublicKey LDAP attribute of the user object, if available. Note that you cannot use the default SSSD configuration for the ad provider or IdM trusted domains to retrieve SSH public keys from Active Directory (AD), since AD does not have a default LDAP attribute to store a public key. To allow the sss_ssh_authorizedkeys helper to get the key from SSSD, enable the ssh responder by adding ssh to the services option in the sssd.conf file. See the sssd.conf(5) man page for details. To allow sshd to use sss_ssh_authorizedkeys , add the AuthorizedKeysCommand /usr/bin/sss_ssh_authorizedkeys and AuthorizedKeysCommandUser nobody options to the /etc/ssh/sshd_config file as described by the sss_ssh_authorizedkeys(1) man page.
Bugzilla:1871025
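Putting the two configuration changes together, a minimal sketch of the SSSD-based replacement; these are fragments of the respective files, and the remaining options in sssd.conf are site-specific:

/etc/sssd/sssd.conf:
[sssd]
services = nss, pam, ssh

/etc/ssh/sshd_config:
AuthorizedKeysCommand /usr/bin/sss_ssh_authorizedkeys
AuthorizedKeysCommandUser nobody

After restarting the sssd and sshd services, the helper serves public keys from the sshPublicKey LDAP attribute to the SSH daemon.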
DES and 3DES encryption types have been removed
Due to security reasons, the Data Encryption Standard (DES) algorithm has been deprecated and disabled by default since RHEL 7. With the recent rebase of Kerberos packages, single-DES (DES) and triple-DES (3DES) encryption types have been removed from RHEL 8. If you have configured services or users to only use DES or 3DES encryption, you might experience service interruptions such as:
Kerberos authentication errors
unknown enctype encryption errors
Kerberos Key Distribution Centers (KDCs) with DES-encrypted Database Master Keys ( K/M ) fail to start
Perform the following actions to prepare for the upgrade:
Check if your KDC uses DES or 3DES encryption with the krb5check open source Python scripts. See krb5check on GitHub.
If you are using DES or 3DES encryption with any Kerberos principals, re-key them with a supported encryption type, such as Advanced Encryption Standard (AES). For instructions on re-keying, see Retiring DES from MIT Kerberos Documentation.
Test independence from DES and 3DES by temporarily setting the following Kerberos options before upgrading (a configuration sketch follows this note):
In /var/kerberos/krb5kdc/kdc.conf on the KDC, set supported_enctypes and do not include des or des3 .
For every host, in /etc/krb5.conf and any files in /etc/krb5.conf.d , set allow_weak_crypto to false . It is false by default.
For every host, in /etc/krb5.conf and any files in /etc/krb5.conf.d , set permitted_enctypes , default_tgs_enctypes , and default_tkt_enctypes , and do not include des or des3 .
If you do not experience any service interruptions with the test Kerberos settings from the previous step, remove them and upgrade. You do not need those settings after upgrading to the latest Kerberos packages.
Bugzilla:1877991
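A configuration sketch of those test settings; EXAMPLE.COM and the chosen AES encryption types are placeholders to adapt to your realm:

/var/kerberos/krb5kdc/kdc.conf (on the KDC):
[realms]
EXAMPLE.COM = {
    supported_enctypes = aes256-cts-hmac-sha1-96:normal aes128-cts-hmac-sha1-96:normal
}

/etc/krb5.conf (on every host):
[libdefaults]
allow_weak_crypto = false
permitted_enctypes = aes256-cts-hmac-sha1-96 aes128-cts-hmac-sha1-96
default_tgs_enctypes = aes256-cts-hmac-sha1-96 aes128-cts-hmac-sha1-96
default_tkt_enctypes = aes256-cts-hmac-sha1-96 aes128-cts-hmac-sha1-96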
The SSSD version of libwbclient has been removed
The SSSD implementation of the libwbclient package was deprecated in RHEL 8.4. As it cannot be used with recent versions of Samba, the SSSD implementation of libwbclient has now been removed.
Bugzilla:1947671

Standalone use of the ctdb service has been deprecated
Since RHEL 8.4, customers are advised to use the ctdb clustered Samba service only when both of the following conditions apply:
The ctdb service is managed as a pacemaker resource with the resource-agent ctdb .
The ctdb service uses storage volumes that contain either a GlusterFS file system provided by the Red Hat Gluster Storage product or a GFS2 file system.
The stand-alone use case of the ctdb service has been deprecated and will not be included in the next major release of Red Hat Enterprise Linux. For further information on support policies for Samba, see the Knowledgebase article Support Policies for RHEL Resilient Storage - ctdb General Policies .
Bugzilla:1916296 [1]

Limited support for FreeRADIUS
In RHEL 8, the following external authentication modules are deprecated as part of the FreeRADIUS offering:
The MySQL, PostgreSQL, SQLite, and unixODBC database connectors
The Perl language module
The REST API module
Note: The PAM authentication module and other authentication modules that are provided as part of the base package are not affected.
You can find replacements for the deprecated modules in community-supported packages, for example in the Fedora project. In addition, the scope of support for the freeradius package will be limited to the following use cases in future RHEL releases:
Using FreeRADIUS as an authentication provider with Identity Management (IdM) as the backend source of authentication. The authentication occurs through the krb5 and LDAP authentication packages or as PAM authentication in the main FreeRADIUS package.
Using FreeRADIUS to provide a source-of-truth for authentication in IdM, through the Python 3 authentication package.
In contrast to these deprecations, Red Hat will strengthen the support of the following external authentication modules with FreeRADIUS:
Authentication based on krb5 and LDAP
Python 3 authentication
The focus on these integration options is in close alignment with the strategic direction of Red Hat IdM.
Jira:RHELDOCS-17573 [1]

Indirect AD integration with IdM via WinSync has been deprecated
WinSync is no longer actively developed in RHEL 8 due to several functional limitations:
WinSync supports only one Active Directory (AD) domain.
Password synchronization requires installing additional software on AD Domain Controllers.
For a more robust solution with better resource and security separation, Red Hat recommends using a cross-forest trust for indirect integration with Active Directory. See the Indirect integration documentation.
Jira:RHELPLAN-100400 [1]

Running Samba as a PDC or BDC is deprecated
The classic domain controller mode that enabled administrators to run Samba as an NT4-like primary domain controller (PDC) and backup domain controller (BDC) is deprecated. The code and settings to configure these modes will be removed in a future Samba release. As long as the Samba version in RHEL 8 provides the PDC and BDC modes, Red Hat supports these modes only in existing installations with Windows versions which support NT4 domains. Red Hat recommends not setting up a new Samba NT4 domain, because Microsoft operating systems later than Windows 7 and Windows Server 2008 R2 do not support NT4 domains. If you use the PDC to authenticate only Linux users, Red Hat suggests migrating to Red Hat Identity Management (IdM) that is included in RHEL subscriptions. However, you cannot join Windows systems to an IdM domain. Note that Red Hat continues supporting the PDC functionality IdM uses in the background. Red Hat does not support running Samba as an AD domain controller (DC).
Bugzilla:1926114

The SMB1 protocol is deprecated in Samba
Starting with Samba 4.11, the insecure Server Message Block version 1 (SMB1) protocol is deprecated and will be removed in a future release. To improve the security, by default, SMB1 is disabled in the Samba server and client utilities.
Jira:RHELDOCS-16612 [1]
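If you need to guarantee that SMB1 stays off even on hosts with older configurations, you can pin the protocol floor in /etc/samba/smb.conf ; a minimal sketch using the standard Samba parameters:

[global]
server min protocol = SMB2
client min protocol = SMB2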
8.15. Desktop

The libgnome-keyring library has been deprecated
The libgnome-keyring library has been deprecated in favor of the libsecret library, as libgnome-keyring is not maintained upstream, and does not follow the necessary cryptographic policies for RHEL. The new libsecret library is the replacement that follows the necessary security standards.
Bugzilla:1607766 [1]

LibreOffice is deprecated
The LibreOffice RPM packages are now deprecated and will be removed in a future major RHEL release. LibreOffice continues to be fully supported through the entire life cycle of RHEL 7, 8, and 9. As a replacement for the RPM packages, Red Hat recommends that you install LibreOffice from either of the following sources provided by The Document Foundation:
The official Flatpak package in the Flathub repository: https://flathub.org/apps/org.libreoffice.LibreOffice .
The official RPM packages: https://www.libreoffice.org/download/download-libreoffice/ .
Jira:RHELDOCS-16300 [1]

Several bitmap fonts have been deprecated
The following bitmap font packages have been deprecated:
bitmap-console-fonts
bitmap-fixed-fonts
bitmap-fonts-compat
bitmap-lucida-typewriter-fonts
Bitmap fonts have a limited pixel size. When you try to set a font size that is unavailable, the text might display in a different size or a different font, possibly a scalable one. This also decreases the rendering quality of bitmap fonts and disrupts the user experience. Additionally, the fontconfig system ignores the Portable Compiled Format (PCF), one of the major bitmap font formats, because it contains no metadata to estimate the language coverage. Note that the bitmap-fangsongti-fonts bitmap font package continues to be supported as a dependency of the Lorax tool.
Jira:RHELDOCS-17623 [1]

8.16. Graphics infrastructures

AGP graphics cards are no longer supported
Graphics cards using the Accelerated Graphics Port (AGP) bus are not supported in Red Hat Enterprise Linux 8. Use graphics cards with the PCI-Express bus as the recommended replacement.
Bugzilla:1569610 [1]

Motif has been deprecated
The Motif widget toolkit has been deprecated in RHEL, because development in the upstream Motif community is inactive. The following Motif packages have been deprecated, including their development and debugging variants:
motif
openmotif
openmotif21
openmotif22
Additionally, the motif-static package has been removed. Red Hat recommends using the GTK toolkit as a replacement. GTK is more maintainable and provides new features compared to Motif.
Jira:RHELPLAN-98983 [1]

8.17. The web console

The web console no longer supports incomplete translations
The RHEL web console no longer provides translations for languages that have translations available for less than 50% of the Console's translatable strings. If the browser requests translation to such a language, the user interface will be in English instead.
Bugzilla:1666722

The remotectl command is deprecated
The remotectl command has been deprecated and will not be available in future releases of RHEL. You can use the cockpit-certificate-ensure command as a replacement. However, note that cockpit-certificate-ensure does not have feature parity with remotectl . It does not support bundled certificates and keychain files and requires them to be split out.
Jira:RHELPLAN-147538 [1]

8.18. Red Hat Enterprise Linux System Roles

The network System Role displays a deprecation warning when configuring teams on RHEL 9 nodes
The network teaming capabilities have been deprecated in RHEL 9. As a result, using the network RHEL System Role on an RHEL 8 control node to configure a network team on RHEL 9 nodes shows a warning about the deprecation.
Bugzilla:2021685
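Where a playbook previously configured a team, a bond typically serves as the replacement. A minimal sketch using the network role; the host, interface, and connection names are placeholders:

---
- name: Replace a deprecated team with a bond
  hosts: managed-node-01.example.com
  tasks:
    - name: Configure bond0 by using the network role
      ansible.builtin.include_role:
        name: rhel-system-roles.network
      vars:
        network_connections:
          - name: bond0
            type: bond
            interface_name: bond0
            ip:
              dhcp4: true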
Ansible Engine has been deprecated
Previous versions of RHEL 8 provided access to an Ansible Engine repository, with a limited scope of support, to enable supported RHEL Automation use cases, such as RHEL System Roles and Insights remediations. Ansible Engine has been deprecated, and Ansible Engine 2.9 will have no support after September 29, 2023. For more details on the supported use cases, see Scope of support for the Ansible Core package included in the RHEL 9 AppStream . Users must manually migrate their systems from Ansible Engine to Ansible Core. To do so, follow these steps:
Procedure
Check if the system is running RHEL 8.7 or a later release: cat /etc/redhat-release
Uninstall Ansible Engine 2.9: yum remove ansible
Disable the ansible-2-for-rhel-8-x86_64-rpms repository: subscription-manager repos --disable ansible-2-for-rhel-8-x86_64-rpms
Install the Ansible Core package from the RHEL 8 AppStream repository: yum install ansible-core
For more details, see: Using Ansible in RHEL 8.6 and later .
Bugzilla:2006081

The mssql_ha_cluster_run_role has been deprecated
The mssql_ha_cluster_run_role variable has been deprecated. Instead, use the mssql_manage_ha_cluster variable.
Jira:RHEL-19203

8.19. Virtualization

virsh iface-* commands have become deprecated
The virsh iface-* commands, such as virsh iface-start and virsh iface-destroy , are now deprecated, and will be removed in a future major version of RHEL. In addition, these commands frequently fail due to configuration dependencies. Therefore, it is recommended not to use virsh iface-* commands for configuring and managing host network connections. Instead, use the NetworkManager program and its related management applications, such as nmcli .
Bugzilla:1664592 [1]

virt-manager has been deprecated
The Virtual Machine Manager application, also known as virt-manager , has been deprecated. The RHEL web console, also known as Cockpit , is intended to become its replacement in a subsequent release. It is, therefore, recommended that you use the web console for managing virtualization in a GUI. Note, however, that some features available in virt-manager might not yet be available in the RHEL web console.
Jira:RHELPLAN-10304 [1]
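A minimal sketch of enabling the web console replacement for virt-manager on a virtualization host:

yum install cockpit cockpit-machines
systemctl enable --now cockpit.socket

The Virtual Machines page then becomes available in the web console, by default at https://<host>:9090 .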
Limited support for virtual machine snapshots
Creating snapshots of virtual machines (VMs) is currently only supported for VMs not using the UEFI firmware. In addition, during the snapshot operation, the QEMU monitor might become blocked, which negatively impacts the hypervisor performance for certain workloads. Also note that the current mechanism of creating VM snapshots has been deprecated, and Red Hat does not recommend using VM snapshots in a production environment.
Bugzilla:1686057

The Cirrus VGA virtual GPU type has been deprecated
With a future major update of Red Hat Enterprise Linux, the Cirrus VGA GPU device will no longer be supported in KVM virtual machines. Therefore, Red Hat recommends using the stdvga or virtio-vga devices instead of Cirrus VGA .
Bugzilla:1651994 [1]

SPICE has been deprecated
The SPICE remote display protocol has become deprecated. Note that SPICE will remain supported in RHEL 8, but Red Hat recommends using alternate solutions for remote display streaming:
For remote console access, use the VNC protocol.
For advanced remote display functions, use third-party tools such as RDP, HP RGS, or Mechdyne TGX.
Bugzilla:1849563 [1]

KVM on IBM POWER has been deprecated
Using KVM virtualization on IBM POWER hardware has become deprecated. As a result, KVM on IBM POWER is still supported in RHEL 8, but will become unsupported in a future major release of RHEL.
Jira:RHELPLAN-71200 [1]

SecureBoot image verification using SHA1-based signatures is deprecated
Performing SecureBoot image verification using SHA1-based signatures on UEFI (PE/COFF) executables has become deprecated. Instead, Red Hat recommends using signatures based on the SHA-2 algorithm, or later.
Bugzilla:1935497 [1]

Using SPICE to attach smart card readers to virtual machines has been deprecated
The SPICE remote display protocol has been deprecated in RHEL 8. Since the only recommended way to attach smart card readers to virtual machines (VMs) depends on the SPICE protocol, the usage of smart cards in VMs has also become deprecated in RHEL 8. In a future major version of RHEL, the functionality of attaching smart card readers to VMs will only be supported by third-party remote visualization solutions.
Bugzilla:2059626

RDMA-based live migration is deprecated
With this update, migrating running virtual machines using Remote Direct Memory Access (RDMA) has become deprecated. As a result, it is still possible to use the rdma:// migration URI to request migration over RDMA, but this feature will become unsupported in a future major release of RHEL.
Jira:RHELPLAN-153267 [1]

8.20. Containers

GIMP flatpak and GIMP modules are deprecated
GIMP flatpak and GIMP modules, both representing GNU Image Manipulation Program for raster graphics, are now marked as deprecated because of the End of Life (EOL) of Python 2 and will be removed in the next major RHEL release. As a replacement, you can use upstream flatpak versions based on Python 3.
Jira:RHEL-18958

The Podman varlink-based API v1.0 has been removed
The Podman varlink-based API v1.0 was deprecated in a previous release of RHEL 8. Podman v2.0 introduced a new Podman v2.0 RESTful API. With the release of Podman v3.0, the varlink-based API v1.0 has been completely removed.
Jira:RHELPLAN-45858 [1]

container-tools:1.0 has been deprecated
The container-tools:1.0 module has been deprecated and will no longer receive security updates. It is recommended to use a newer supported stable module stream, such as container-tools:2.0 or container-tools:3.0 .
Jira:RHELPLAN-59825 [1]

The container-tools:2.0 module has been deprecated
The container-tools:2.0 module has been deprecated and will no longer receive security updates. It is recommended to use a newer supported stable module stream, such as container-tools:3.0 .
Jira:RHELPLAN-85066 [1]

Flatpak images except GIMP have been deprecated
The rhel8/firefox-flatpak , rhel8/thunderbird-flatpak , rhel8/inkscape-flatpak , and rhel8/libreoffice-flatpak RHEL 8 Flatpak Applications have been deprecated and replaced by the RHEL 9 versions. The rhel8/gimp-flatpak Flatpak Application is not deprecated because there is no replacement yet in RHEL 9.
Bugzilla:2142499

The CNI network stack has been deprecated
The Container Network Interface (CNI) network stack is deprecated and will be removed from Podman in a future minor release of RHEL. Previously, containers were able to use DNS only when connected to the single Container Network Interface (CNI) plugin. Podman v4.0 introduced a new Netavark network stack. You can use the Netavark network stack with Podman and other Open Container Initiative (OCI) container management applications. The Netavark network stack for Podman is also compatible with advanced Docker functionalities. Containers in multiple networks can access containers on any of those networks. For more information, see Switching the network stack from CNI to Netavark .
Jira:RHELDOCS-16755 [1]
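A sketch of checking which network backend Podman uses and switching to Netavark, based on the switching procedure referenced above; note that podman system reset removes all existing containers, images, and networks, so it is only appropriate on hosts you can rebuild:

podman info --format {{.Host.NetworkBackend}}
cni

/etc/containers/containers.conf:
[network]
network_backend = "netavark"

podman system reset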
container-tools:3.0 has been deprecated
The container-tools:3.0 module has been deprecated and will no longer receive security updates. To continue to build and run Linux Containers on RHEL, use a newer, stable, and supported module stream, such as container-tools:4.0 . For instructions on switching to a later stream, see Switching to a later stream .
Jira:RHELPLAN-146398 [1]

The rhel8/openssl container image has been deprecated
The rhel8/openssl container image has been deprecated.
Jira:RHELDOCS-18107 [1]

The Inkscape and LibreOffice Flatpak images are deprecated
The rhel9/inkscape-flatpak and rhel9/libreoffice-flatpak Flatpak images, which are available as Technology Previews, have been deprecated. Red Hat recommends the following alternatives to these images:
To replace rhel9/inkscape-flatpak , use the inkscape RPM package.
To replace rhel9/libreoffice-flatpak , see the LibreOffice deprecation release note .
Jira:RHELDOCS-17102 [1]

pasta as a network name has been deprecated
The support for pasta as a network name value is deprecated and will not be accepted in the next major release of Podman, version 5.0. You can use the pasta network name value to create a unique network mode within Podman by employing the podman run --network and podman create --network commands.
Jira:RHELDOCS-17038 [1]

The BoltDB database backend has been deprecated
The BoltDB database backend is deprecated as of RHEL 8.10. In a future version of RHEL, the BoltDB database backend will be removed and will no longer be available to Podman. For Podman, use the SQLite database backend, which is now the default as of RHEL 8.10.
Jira:RHELDOCS-17461 [1]

The CNI network stack has been deprecated
The Container Network Interface (CNI) network stack is deprecated and will be removed in a future release. Use the Netavark network stack instead. For more information, see Switching the network stack from CNI to Netavark .
Jira:RHELDOCS-17518 [1]

container-tools:4.0 has been deprecated
The container-tools:4.0 module has been deprecated and will no longer receive security updates. To continue to build and run Linux Containers on RHEL, use the newer, stable, and supported module stream container-tools:rhel8 . For instructions on switching to a later stream, see Switching to a later stream .
Jira:RHELPLAN-168223 [1]

8.21. Deprecated packages

This section lists packages that have been deprecated and will probably not be included in a future major release of Red Hat Enterprise Linux. For changes to packages between RHEL 7 and RHEL 8, see Changes to packages in the Considerations in adopting RHEL 8 document.
Important: The support status of deprecated packages remains unchanged within RHEL 8. For more information about the length of support, see Red Hat Enterprise Linux Life Cycle and Red Hat Enterprise Linux Application Streams Life Cycle .
The following packages have been deprecated in RHEL 8: 389-ds-base-legacy-tools abrt abrt-addon-ccpp abrt-addon-kerneloops abrt-addon-pstoreoops abrt-addon-vmcore abrt-addon-xorg abrt-cli abrt-console-notification abrt-dbus abrt-desktop abrt-gui abrt-gui-libs abrt-libs abrt-tui adobe-source-sans-pro-fonts adwaita-qt alsa-plugins-pulseaudio amanda amanda-client amanda-libs amanda-server ant-contrib antlr3 antlr32 aopalliance apache-commons-collections apache-commons-compress apache-commons-exec apache-commons-jxpath apache-commons-parent apache-ivy apache-parent apache-resource-bundles apache-sshd apiguardian arpwatch aspnetcore-runtime-3.0 aspnetcore-runtime-3.1 aspnetcore-runtime-5.0 aspnetcore-targeting-pack-3.0 aspnetcore-targeting-pack-3.1 aspnetcore-targeting-pack-5.0 assertj-core authd auto autoconf213 autogen autogen-libopts awscli base64coder bash-doc batik batik-css batik-util bea-stax bea-stax-api bind-export-devel bind-export-libs bind-libs-lite bind-pkcs11 bind-pkcs11-devel bind-pkcs11-libs bind-pkcs11-utils bind-sdb bind-sdb bind-sdb-chroot bitmap-console-fonts bitmap-fixed-fonts bitmap-fonts-compat bitmap-lucida-typewriter-fonts bluez-hid2hci boost-jam boost-signals bouncycastle bpg-algeti-fonts bpg-chveulebrivi-fonts bpg-classic-fonts bpg-courier-fonts bpg-courier-s-fonts bpg-dedaena-block-fonts bpg-dejavu-sans-fonts bpg-elite-fonts bpg-excelsior-caps-fonts bpg-excelsior-condenced-fonts bpg-excelsior-fonts bpg-fonts-common bpg-glaho-fonts bpg-gorda-fonts bpg-ingiri-fonts bpg-irubaqidze-fonts bpg-mikhail-stephan-fonts bpg-mrgvlovani-caps-fonts bpg-mrgvlovani-fonts bpg-nateli-caps-fonts bpg-nateli-condenced-fonts bpg-nateli-fonts bpg-nino-medium-cond-fonts bpg-nino-medium-fonts bpg-sans-fonts bpg-sans-medium-fonts bpg-sans-modern-fonts bpg-sans-regular-fonts bpg-serif-fonts bpg-serif-modern-fonts bpg-ucnobi-fonts brlapi-java bsh buildnumber-maven-plugin byaccj cal10n cbi-plugins cdparanoia cdparanoia-devel cdparanoia-libs cdrdao cmirror codehaus-parent codemodel compat-exiv2-026 compat-guile18 compat-hwloc1 compat-libpthread-nonshared compat-libtiff3 compat-openssl10 compat-sap-c++-11 compat-sap-c++-10 compat-sap-c++-9 createrepo_c-devel ctags ctags-etags culmus-keteryg-fonts culmus-shofar-fonts custodia cyrus-imapd-vzic dbus-c++ dbus-c++-devel dbus-c++-glib dbxtool dejavu-fonts-common dhcp-libs directory-maven-plugin directory-maven-plugin-javadoc dirsplit dleyna-connector-dbus dleyna-core dleyna-renderer dleyna-server dnssec-trigger dnssec-trigger-panel dotnet dotnet-apphost-pack-3.0 dotnet-apphost-pack-3.1 dotnet-apphost-pack-5.0 dotnet-host-fxr-2.1 dotnet-host-fxr-2.1 dotnet-hostfxr-3.0 dotnet-hostfxr-3.1 dotnet-hostfxr-5.0 dotnet-runtime-2.1 dotnet-runtime-3.0 dotnet-runtime-3.1 dotnet-runtime-5.0 dotnet-sdk-2.1 dotnet-sdk-2.1.5xx dotnet-sdk-3.0 dotnet-sdk-3.1 dotnet-sdk-5.0 dotnet-targeting-pack-3.0 dotnet-targeting-pack-3.1 dotnet-targeting-pack-5.0 dotnet-templates-3.0 dotnet-templates-3.1 dotnet-templates-5.0 dotnet5.0-build-reference-packages dptfxtract drpm drpm-devel dump dvd+rw-tools dyninst-static eclipse-ecf eclipse-ecf-core eclipse-ecf-runtime eclipse-emf eclipse-emf-core eclipse-emf-runtime eclipse-emf-xsd eclipse-equinox-osgi eclipse-jdt eclipse-license eclipse-p2-discovery eclipse-pde eclipse-platform eclipse-swt ed25519-java ee4j-parent elfutils-devel-static elfutils-libelf-devel-static emacs-terminal emoji-picker enca enca-devel environment-modules-compat evince-browser-plugin exec-maven-plugin farstream02 felix-gogo-command felix-gogo-runtime 
felix-gogo-shell felix-scr felix-osgi-compendium felix-osgi-core felix-osgi-foundation felix-parent file-roller fipscheck fipscheck-devel fipscheck-lib firewire fonts-tweak-tool forge-parent freeradius-mysql freeradius-perl freeradius-postgresql freeradius-rest freeradius-sqlite freeradius-unixODBC fuse-sshfs fusesource-pom future gamin gamin-devel gavl gcc-toolset-9 gcc-toolset-9-annobin gcc-toolset-9-build gcc-toolset-9-perftools gcc-toolset-9-runtime gcc-toolset-9-toolchain gcc-toolset-10 gcc-toolset-10-annobin gcc-toolset-10-binutils gcc-toolset-10-binutils-devel gcc-toolset-10-build gcc-toolset-10-dwz gcc-toolset-10-dyninst gcc-toolset-10-dyninst-devel gcc-toolset-10-elfutils gcc-toolset-10-elfutils-debuginfod-client gcc-toolset-10-elfutils-debuginfod-client-devel gcc-toolset-10-elfutils-devel gcc-toolset-10-elfutils-libelf gcc-toolset-10-elfutils-libelf-devel gcc-toolset-10-elfutils-libs gcc-toolset-10-gcc gcc-toolset-10-gcc-c++ gcc-toolset-10-gcc-gdb-plugin gcc-toolset-10-gcc-gfortran gcc-toolset-10-gdb gcc-toolset-10-gdb-doc gcc-toolset-10-gdb-gdbserver gcc-toolset-10-libasan-devel gcc-toolset-10-libatomic-devel gcc-toolset-10-libitm-devel gcc-toolset-10-liblsan-devel gcc-toolset-10-libquadmath-devel gcc-toolset-10-libstdc++-devel gcc-toolset-10-libstdc++-docs gcc-toolset-10-libtsan-devel gcc-toolset-10-libubsan-devel gcc-toolset-10-ltrace gcc-toolset-10-make gcc-toolset-10-make-devel gcc-toolset-10-perftools gcc-toolset-10-runtime gcc-toolset-10-strace gcc-toolset-10-systemtap gcc-toolset-10-systemtap-client gcc-toolset-10-systemtap-devel gcc-toolset-10-systemtap-initscript gcc-toolset-10-systemtap-runtime gcc-toolset-10-systemtap-sdt-devel gcc-toolset-10-systemtap-server gcc-toolset-10-toolchain gcc-toolset-10-valgrind gcc-toolset-10-valgrind-devel gcc-toolset-11-make-devel gcc-toolset-12-annobin-annocheck gcc-toolset-12-annobin-docs gcc-toolset-12-annobin-plugin-gcc gcc-toolset-12-binutils gcc-toolset-12-binutils-devel gcc-toolset-12-binutils-gold GConf2 GConf2-devel gegl genisoimage genwqe-tools genwqe-vpd genwqe-zlib genwqe-zlib-devel geoipupdate geronimo-annotation geronimo-jms geronimo-jpa geronimo-parent-poms gfbgraph gflags gflags-devel glassfish-annotation-api glassfish-el glassfish-fastinfoset glassfish-jaxb-core glassfish-jaxb-txw2 glassfish-jsp glassfish-jsp-api glassfish-legal glassfish-master-pom glassfish-servlet-api glew-devel glib2-fam glog glog-devel gmock gmock-devel gnome-abrt gnome-boxes gnome-menus-devel gnome-online-miners gnome-shell-extension-disable-screenshield gnome-shell-extension-horizontal-workspaces gnome-shell-extension-no-hot-corner gnome-shell-extension-window-grouper gnome-themes-standard gnu-free-fonts-common gnu-free-mono-fonts gnu-free-sans-fonts gnu-free-serif-fonts gnupg2-smime gnuplot gnuplot-common gobject-introspection-devel google-droid-kufi-fonts google-gson google-noto-kufi-arabic-fonts google-noto-naskh-arabic-fonts google-noto-naskh-arabic-ui-fonts google-noto-nastaliq-urdu-fonts google-noto-sans-balinese-fonts google-noto-sans-bamum-fonts google-noto-sans-batak-fonts google-noto-sans-buginese-fonts google-noto-sans-buhid-fonts google-noto-sans-canadian-aboriginal-fonts google-noto-sans-cham-fonts google-noto-sans-cuneiform-fonts google-noto-sans-cypriot-fonts google-noto-sans-gothic-fonts google-noto-sans-gurmukhi-ui-fonts google-noto-sans-hanunoo-fonts google-noto-sans-inscriptional-pahlavi-fonts google-noto-sans-inscriptional-parthian-fonts google-noto-sans-javanese-fonts google-noto-sans-lepcha-fonts google-noto-sans-limbu-fonts 
google-noto-sans-linear-b-fonts google-noto-sans-lisu-fonts google-noto-sans-mandaic-fonts google-noto-sans-meetei-mayek-fonts google-noto-sans-mongolian-fonts google-noto-sans-myanmar-fonts google-noto-sans-myanmar-ui-fonts google-noto-sans-new-tai-lue-fonts google-noto-sans-ogham-fonts google-noto-sans-ol-chiki-fonts google-noto-sans-old-italic-fonts google-noto-sans-old-persian-fonts google-noto-sans-oriya-fonts google-noto-sans-oriya-ui-fonts google-noto-sans-phags-pa-fonts google-noto-sans-rejang-fonts google-noto-sans-runic-fonts google-noto-sans-samaritan-fonts google-noto-sans-saurashtra-fonts google-noto-sans-sundanese-fonts google-noto-sans-syloti-nagri-fonts google-noto-sans-syriac-eastern-fonts google-noto-sans-syriac-estrangela-fonts google-noto-sans-syriac-western-fonts google-noto-sans-tagalog-fonts google-noto-sans-tagbanwa-fonts google-noto-sans-tai-le-fonts google-noto-sans-tai-tham-fonts google-noto-sans-tai-viet-fonts google-noto-sans-tibetan-fonts google-noto-sans-tifinagh-fonts google-noto-sans-ui-fonts google-noto-sans-yi-fonts google-noto-serif-bengali-fonts google-noto-serif-devanagari-fonts google-noto-serif-gujarati-fonts google-noto-serif-kannada-fonts google-noto-serif-malayalam-fonts google-noto-serif-tamil-fonts google-noto-serif-telugu-fonts gphoto2 graphviz-ruby gsl-devel gssntlmssp gtest gtest-devel gtkmm24 gtkmm24-devel gtkmm24-docs gtksourceview3 gtksourceview3-devel gtkspell gtkspell-devel gtkspell3 guile gutenprint-gimp gutenprint-libs-ui gvfs-afc gvfs-afp gvfs-archive hamcrest-core hawtjni hawtjni hawtjni-runtime HdrHistogram HdrHistogram-javadoc highlight-gui hivex-devel hostname hplip-gui hspell httpcomponents-project hwloc-plugins hyphen-fo hyphen-grc hyphen-hsb hyphen-ia hyphen-is hyphen-ku hyphen-mi hyphen-mn hyphen-sa hyphen-tk ibus-sayura icedax icu4j idm-console-framework inkscape inkscape-docs inkscape-view iptables ipython isl isl-devel isorelax istack-commons-runtime istack-commons-tools iwl3945-firmware iwl4965-firmware iwl6000-firmware jacoco jaf jaf-javadoc jakarta-oro janino jansi-native jarjar java-1.8.0-ibm java-1.8.0-ibm-demo java-1.8.0-ibm-devel java-1.8.0-ibm-headless java-1.8.0-ibm-jdbc java-1.8.0-ibm-plugin java-1.8.0-ibm-src java-1.8.0-ibm-webstart java-1.8.0-openjdk-accessibility java-1.8.0-openjdk-accessibility-slowdebug java_cup java-atk-wrapper javacc javacc-maven-plugin javaewah javaparser javapoet javassist javassist-javadoc jaxen jboss-annotations-1.2-api jboss-interceptors-1.2-api jboss-logmanager jboss-parent jctools jdepend jdependency jdom jdom2 jetty jetty-continuation jetty-http jetty-io jetty-security jetty-server jetty-servlet jetty-util jffi jflex jgit jline jmc jnr-netdb jolokia-jvm-agent js-uglify jsch json_simple jss-javadoc jtidy junit5 jvnet-parent jzlib kernel-cross-headers khmeros-fonts-common ksc kurdit-unikurd-web-fonts kyotocabinet-libs langtable-data ldapjdk-javadoc lensfun lensfun-devel lftp-scripts libaec libaec-devel libappindicator-gtk3 libappindicator-gtk3-devel libatomic-static libavc1394 libblocksruntime libcacard libcacard-devel libcgroup libcgroup-pam libcgroup-tools libchamplain libchamplain-devel libchamplain-gtk libcroco libcroco-devel libcxl libcxl-devel libdap libdap-devel libdazzle-devel libdbusmenu libdbusmenu-devel libdbusmenu-doc libdbusmenu-gtk3 libdbusmenu-gtk3-devel libdc1394 libdnet libdnet-devel libdv libdwarf libdwarf-devel libdwarf-static libdwarf-tools libeasyfc libeasyfc-gobject libepubgen-devel libertas-sd8686-firmware libertas-usb8388-firmware libertas-usb8388-olpc-firmware 
libgdither libGLEW libgovirt libguestfs-benchmarking libguestfs-devel libguestfs-gfs2 libguestfs-gobject libguestfs-gobject-devel libguestfs-java libguestfs-java-devel libguestfs-javadoc libguestfs-man-pages-ja libguestfs-man-pages-uk libguestfs-tools libguestfs-tools-c libhugetlbfs libhugetlbfs-devel libhugetlbfs-utils libicu-doc libIDL libIDL-devel libidn libiec61883 libindicator-gtk3 libindicator-gtk3-devel libiscsi-devel libjose-devel libkkc libkkc-common libkkc-data libldb-devel liblogging libluksmeta-devel libmalaga libmcpp libmemcached libmemcached-libs libmetalink libmodulemd1 libmongocrypt libmtp-devel libmusicbrainz5 libmusicbrainz5-devel libnbd-devel libnice libnice-gstreamer1 liboauth liboauth-devel libpfm-static libpng12 libpsm2-compat libpurple libpurple-devel libraw1394 libreport-plugin-mailx libreport-plugin-rhtsupport libreport-plugin-ureport libreport-rhel libreport-rhel-bugzilla librpmem librpmem-debug librpmem-devel libsass libsass-devel libselinux-python libsqlite3x libtalloc-devel libtar libtdb-devel libtevent-devel libtpms-devel libunwind libusal libvarlink libverto-libevent libvirt-admin libvirt-bash-completion libvirt-daemon-driver-storage-gluster libvirt-daemon-driver-storage-iscsi-direct libvirt-devel libvirt-docs libvirt-gconfig libvirt-gobject libvirt-lock-sanlock libvirt-wireshark libvmem libvmem-debug libvmem-devel libvmmalloc libvmmalloc-debug libvmmalloc-devel libvncserver libwinpr-devel libwmf libwmf-devel libwmf-lite libXNVCtrl libyami log4j12 log4j12-javadoc lohit-malayalam-fonts lohit-nepali-fonts lorax-composer lua-guestfs lucene lucene-analysis lucene-analyzers-smartcn lucene-queries lucene-queryparser lucene-sandbox lz4-java lz4-java-javadoc mailman mailx make-devel malaga malaga-suomi-voikko marisa maven-antrun-plugin maven-assembly-plugin maven-clean-plugin maven-dependency-analyzer maven-dependency-plugin maven-doxia maven-doxia-sitetools maven-install-plugin maven-invoker maven-invoker-plugin maven-parent maven-plugins-pom maven-reporting-api maven-reporting-impl maven-resolver-api maven-resolver-connector-basic maven-resolver-impl maven-resolver-spi maven-resolver-transport-wagon maven-resolver-util maven-scm maven-script-interpreter maven-shade-plugin maven-shared maven-verifier maven-wagon-file maven-wagon-http maven-wagon-http-shared maven-wagon-provider-api maven2 meanwhile mercurial mercurial-hgk metis metis-devel mingw32-bzip2 mingw32-bzip2-static mingw32-cairo mingw32-expat mingw32-fontconfig mingw32-freetype mingw32-freetype-static mingw32-gstreamer1 mingw32-harfbuzz mingw32-harfbuzz-static mingw32-icu mingw32-libjpeg-turbo mingw32-libjpeg-turbo-static mingw32-libpng mingw32-libpng-static mingw32-libtiff mingw32-libtiff-static mingw32-openssl mingw32-readline mingw32-sqlite mingw32-sqlite-static mingw64-adwaita-icon-theme mingw64-bzip2 mingw64-bzip2-static mingw64-cairo mingw64-expat mingw64-fontconfig mingw64-freetype mingw64-freetype-static mingw64-gstreamer1 mingw64-harfbuzz mingw64-harfbuzz-static mingw64-icu mingw64-libjpeg-turbo mingw64-libjpeg-turbo-static mingw64-libpng mingw64-libpng-static mingw64-libtiff mingw64-libtiff-static mingw64-nettle mingw64-openssl mingw64-readline mingw64-sqlite mingw64-sqlite-static modello mojo-parent mongo-c-driver mousetweaks mozjs52 mozjs52-devel mozjs60 mozjs60-devel mozvoikko msv-javadoc msv-manual munge-maven-plugin mythes-lb mythes-mi mythes-ne nafees-web-naskh-fonts nbd nbdkit-devel nbdkit-example-plugins nbdkit-gzip-plugin nbdkit-plugin-python-common nbdkit-plugin-vddk ncompress 
ncurses-compat-libs net-tools netcf netcf-devel netcf-libs network-scripts network-scripts-ppp nkf nodejs-devel nodejs-packaging nss_nis nss-pam-ldapd objectweb-asm objectweb-asm-javadoc objectweb-pom ocaml-bisect-ppx ocaml-camlp4 ocaml-camlp4-devel ocaml-lwt ocaml-mmap ocaml-ocplib-endian ocaml-ounit ocaml-result ocaml-seq opencryptoki-tpmtok opencv-contrib opencv-core opencv-devel openhpi openhpi-libs OpenIPMI-perl openssh-cavs openssh-ldap openssl-ibmpkcs11 opentest4j os-maven-plugin overpass-mono-fonts pakchois pandoc paps-libs paranamer paratype-pt-sans-caption-fonts parfait parfait-examples parfait-javadoc pcp-parfait-agent pcp-pmda-rpm pcp-pmda-vmware pcsc-lite-doc peripety perl-B-Debug perl-B-Lint perl-Class-Factory-Util perl-Class-ISA perl-DateTime-Format-HTTP perl-DateTime-Format-Mail perl-File-CheckTree perl-homedir perl-libxml-perl perl-Locale-Codes perl-Mozilla-LDAP perl-NKF perl-Object-HashBase-tools perl-Package-DeprecationManager perl-Pod-LaTeX perl-Pod-Plainer perl-prefork perl-String-CRC32 perl-SUPER perl-Sys-Virt perl-tests perl-YAML-Syck phodav php-recode php-xmlrpc pidgin pidgin-devel pidgin-sipe pinentry-emacs pinentry-gtk pipewire0.2-devel pipewire0.2-libs platform-python-coverage plexus-ant-factory plexus-bsh-factory plexus-cli plexus-component-api plexus-component-factories-pom plexus-components-pom plexus-i18n plexus-interactivity plexus-pom plexus-velocity plymouth-plugin-throbgress pmreorder postgresql-test-rpm-macros powermock prometheus-jmx-exporter prometheus-jmx-exporter-openjdk11 ptscotch-mpich ptscotch-mpich-devel ptscotch-mpich-devel-parmetis ptscotch-openmpi ptscotch-openmpi-devel purple-sipe pygobject2-doc pygtk2 pygtk2-codegen pygtk2-devel pygtk2-doc python-nose-docs python-nss-doc python-podman-api python-psycopg2-doc python-pymongo-doc python-redis python-schedutils python-slip python-sqlalchemy-doc python-varlink python-virtualenv-doc python2-backports python2-backports-ssl_match_hostname python2-bson python2-coverage python2-docs python2-docs-info python2-funcsigs python2-ipaddress python2-mock python2-nose python2-numpy-doc python2-psycopg2-debug python2-psycopg2-tests python2-pymongo python2-pymongo-gridfs python2-pytest-mock python2-sqlalchemy python2-tools python2-virtualenv python3-bson python3-click python3-coverage python3-cpio python3-custodia python3-docs python3-flask python3-gevent python3-gobject-base python3-hivex python3-html5lib python3-hypothesis python3-ipatests python3-itsdangerous python3-jwt python3-libguestfs python3-mock python3-networkx-core python3-nose python3-nss python3-openipmi python3-pillow python3-ptyprocess python3-pydbus python3-pymongo python3-pymongo-gridfs python3-pyOpenSSL python3-pytoml python3-reportlab python3-schedutils python3-scons python3-semantic_version python3-slip python3-slip-dbus python3-sqlalchemy python3-syspurpose python3-virtualenv python3-webencodings python3-werkzeug python38-asn1crypto python38-numpy-doc python38-psycopg2-doc python38-psycopg2-tests python39-numpy-doc python39-psycopg2-doc python39-psycopg2-tests qemu-kvm-block-gluster qemu-kvm-block-iscsi qemu-kvm-block-ssh qemu-kvm-hw-usbredir qemu-kvm-device-display-virtio-gpu-gl qemu-kvm-device-display-virtio-gpu-pci-gl qemu-kvm-device-display-virtio-vga-gl qemu-kvm-tests qpdf qpdf-doc qperf qpid-proton qrencode qrencode-devel qrencode-libs qt5-qtcanvas3d qt5-qtcanvas3d-examples rarian rarian-compat re2c recode redhat-lsb redhat-lsb-core redhat-lsb-cxx redhat-lsb-desktop redhat-lsb-languages redhat-lsb-printing 
redhat-lsb-submod-multimedia redhat-lsb-submod-security redhat-lsb-supplemental redhat-lsb-trialuse redhat-menus redhat-support-lib-python redhat-support-tool reflections regexp relaxngDatatype resteasy-javadoc rhsm-gtk rpm-plugin-prioreset rpmemd rsyslog-udpspoof ruby-hivex ruby-libguestfs rubygem-abrt rubygem-abrt-doc rubygem-bson rubygem-bson-doc rubygem-bundler-doc rubygem-mongo rubygem-mongo-doc rubygem-net-telnet rubygem-xmlrpc s390utils-cmsfs samba-pidl samba-test samba-test-libs samyak-devanagari-fonts samyak-fonts-common samyak-gujarati-fonts samyak-malayalam-fonts samyak-odia-fonts samyak-tamil-fonts sane-frontends sanlk-reset sat4j scala scotch scotch-devel SDL_sound selinux-policy-minimum sendmail sgabios sgabios-bin shim-ia32 shrinkwrap sil-padauk-book-fonts sisu-inject sisu-mojos sisu-plexus skkdic SLOF smc-anjalioldlipi-fonts smc-dyuthi-fonts smc-fonts-common smc-kalyani-fonts smc-raghumalayalam-fonts smc-suruma-fonts softhsm-devel sonatype-oss-parent sonatype-plugins-parent sos-collector sparsehash-devel spax spec-version-maven-plugin spice spice-client-win-x64 spice-client-win-x86 spice-glib spice-glib-devel spice-gtk spice-gtk-tools spice-gtk3 spice-gtk3-devel spice-gtk3-vala spice-parent spice-protocol spice-qxl-wddm-dod spice-server spice-server-devel spice-qxl-xddm spice-streaming-agent spice-vdagent-win-x64 spice-vdagent-win-x86 sssd-libwbclient star stax-ex stax2-api stringtemplate stringtemplate4 subscription-manager-initial-setup-addon subscription-manager-migration subscription-manager-migration-data subversion-javahl SuperLU SuperLU-devel supermin-devel swig swig-doc swig-gdb swtpm-devel swtpm-tools-pkcs11 system-storage-manager systemd-tests tcl-brlapi testng thai-scalable-laksaman-fonts tibetan-machine-uni-fonts timedatex torque-libs tpm-quote-tools tpm-tools tpm-tools-pkcs11 treelayout trousers trousers-lib tuned-profiles-compat tuned-profiles-nfv-host-bin tuned-utils-systemtap tycho uglify-js unbound-devel univocity-output-tester univocity-parsers usbguard-notifier usbredir-devel utf8cpp uthash velocity vinagre vino virt-dib virt-p2v-maker vm-dump-metrics-devel voikko-tools vorbis-tools weld-parent wodim woodstox-core wqy-microhei-fonts wqy-unibit-fonts xdelta xmlgraphics-commons xmlstreambuffer xinetd xorg-x11-apps xorg-x11-drv-qxl xorg-x11-server-Xspice xpp3 xsane-gimp xsom xz-java xz-java-javadoc yajl-devel yp-tools ypbind ypserv zsh-html 8.22. Deprecated and unmaintained devices This section lists devices (drivers, adapters) that continue to be supported until the end of life of RHEL 8 but will likely not be supported in future major releases of this product and are not recommended for new deployments; these are deprecated devices. It also lists devices that are available but are no longer being tested or updated on a routine basis in RHEL 8; these are unmaintained devices. Red Hat might fix serious bugs, including security bugs, in unmaintained devices at its discretion, but these devices should no longer be used in production, and it is likely they will be disabled in the next major release. Support for devices other than those listed remains unchanged. PCI device IDs are in the format of vendor:device:subvendor:subdevice . If no device ID is listed, all devices associated with the corresponding driver have been deprecated. To check the PCI IDs of the hardware on your system, run the lspci -nn command, as shown in the example after the tables. Table 8.1.
Deprecated devices Device ID Driver Device name hns_roce ebtables arp_tables ip_tables ip6_tables ip6_set ip_set nft_compat usnic_verbs vmw_pvrdma hfi1 bnx2 QLogic BCM5706/5708/5709/5716 Driver hpsa Hewlett-Packard Company: Smart Array Controllers 0x10df:0x0724 lpfc Emulex Corporation: OneConnect FCoE Initiator (Skyhawk) 0x10df:0xe200 lpfc Emulex Corporation: LPe15000/LPe16000 Series 8Gb/16Gb Fibre Channel Adapter 0x10df:0xf011 lpfc Emulex Corporation: Saturn: LightPulse Fibre Channel Host Adapter 0x10df:0xf015 lpfc Emulex Corporation: Saturn: LightPulse Fibre Channel Host Adapter 0x10df:0xf100 lpfc Emulex Corporation: LPe12000 Series 8Gb Fibre Channel Adapter 0x10df:0xfc40 lpfc Emulex Corporation: Saturn-X: LightPulse Fibre Channel Host Adapter 0x10df:0xe220 be2net Emulex Corporation: OneConnect NIC (Lancer) 0x1000:0x005b megaraid_sas Broadcom / LSI: MegaRAID SAS 2208 [Thunderbolt] 0x1000:0x006E mpt3sas Broadcom / LSI: SAS2308 PCI-Express Fusion-MPT SAS-2 0x1000:0x0080 mpt3sas Broadcom / LSI: SAS2208 PCI-Express Fusion-MPT SAS-2 0x1000:0x0081 mpt3sas Broadcom / LSI: SAS2208 PCI-Express Fusion-MPT SAS-2 0x1000:0x0082 mpt3sas Broadcom / LSI: SAS2208 PCI-Express Fusion-MPT SAS-2 0x1000:0x0083 mpt3sas Broadcom / LSI: SAS2208 PCI-Express Fusion-MPT SAS-2 0x1000:0x0084 mpt3sas Broadcom / LSI: SAS2208 PCI-Express Fusion-MPT SAS-2 0x1000:0x0085 mpt3sas Broadcom / LSI: SAS2208 PCI-Express Fusion-MPT SAS-2 0x1000:0x0086 mpt3sas Broadcom / LSI: SAS2308 PCI-Express Fusion-MPT SAS-2 0x1000:0x0087 mpt3sas Broadcom / LSI: SAS2308 PCI-Express Fusion-MPT SAS-2 myri10ge Myricom 10G driver (10GbE) netxen_nic QLogic/NetXen (1/10) GbE Intelligent Ethernet Driver 0x1077:0x2031 qla2xxx QLogic Corp.: ISP8324-based 16Gb Fibre Channel to PCI Express Adapter 0x1077:0x2532 qla2xxx QLogic Corp.: ISP2532-based 8Gb Fibre Channel to PCI Express HBA 0x1077:0x8031 qla2xxx QLogic Corp.: 8300 Series 10GbE Converged Network Adapter (FCoE) qla3xxx QLogic ISP3XXX Network Driver v2.03.00-k5 0x1924:0x0803 sfc Solarflare Communications: SFC9020 10G Ethernet Controller 0x1924:0x0813 sfc Solarflare Communications: SFL9021 10GBASE-T Ethernet Controller Soft-RoCE (rdma_rxe) HNS-RoCE HNS GE/10GE/25GE/50GE/100GE RDMA Network Controller liquidio Cavium LiquidIO Intelligent Server Adapter Driver liquidio_vf Cavium LiquidIO Intelligent Server Adapter Virtual Function Driver Table 8.2. Unmaintained devices Device ID Driver Device name dl2k dlci dnet hdlc_fr rdma_rxe nicvf nicpf siw e1000 Intel(R) PRO/1000 Network Driver mptbase Fusion MPT SAS Host driver mptsas Fusion MPT SAS Host driver mptscsih Fusion MPT SCSI Host driver mptspi Fusion MPT SAS Host driver 0x1000:0x0071 [a] megaraid_sas Broadcom / LSI: MR SAS HBA 2004 0x1000:0x0073 [a] megaraid_sas Broadcom / LSI: MegaRAID SAS 2008 [Falcon] 0x1000:0x0079 [a] megaraid_sas Broadcom / LSI: MegaRAID SAS 2108 [Liberator] nvmet_tcp NVMe/TCP target driver nvmet-fc NVMe/Fabrics FC target driver [a] Disabled in RHEL 8.0, re-enabled in RHEL 8.4 due to customer requests.
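For example, to check whether a machine contains one of the adapters listed above, you can filter the lspci output for the PCI ID in question. This is a minimal sketch; the ID 10df:f100 (the deprecated Emulex LPe12000-series adapter) is used purely as an illustration, so substitute the vendor:device pair you are interested in:
$ lspci -nn | grep -i '10df:f100'
If the command prints a matching line, the host contains a deprecated adapter and you should plan a migration before the next major release.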
[ "update-crypto-policies --set LEGACY", "Token authentication not supported by the entitlement server", "yum install network-scripts", "cat /etc/redhat-release", "yum remove ansible", "subscription-manager repos --disable ansible-2-for-rhel-8-x86_64-rpms", "yum install ansible-core" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.10_release_notes/deprecated-functionality
Chapter 2. The Go compiler
Chapter 2. The Go compiler The Go compiler is a build tool and dependency manager for the Go programming language. It offers error checking and optimization of your code. 2.1. Prerequisites Go Toolset is installed. For more information, see Installing Go Toolset . 2.2. Setting up a Go workspace To compile a Go program, you need to set up a Go workspace. Procedure Create a workspace directory as a subdirectory of $GOPATH/src . A common choice is $HOME/go . Place your source files into your workspace directory. Set the location of your workspace directory as an environment variable in the $HOME/.bashrc file by running: Replace < workspace_dir > with the name of your workspace directory. Additional resources The official Go workspaces documentation . 2.3. Compiling a Go program You can compile your Go program using the Go compiler. The Go compiler creates an executable binary file as a result of compiling. Prerequisites A Go workspace set up with configured modules. For information on how to set up a workspace, see Setting up a Go workspace . Procedure In your project directory, run: On Red Hat Enterprise Linux 8: Replace < output_file > with the desired name of your output file and < go_main_package > with the name of your main package. On Red Hat Enterprise Linux 9: Replace < output_file > with the desired name of your output file and < go_main_package > with the name of your main package. 2.4. Running a Go program The Go compiler creates an executable binary file as a result of compiling. Complete the following steps to execute this file and run your program. Prerequisites Your program is compiled. For more information on how to compile your program, see Compiling a Go program . Procedure To run your program, run in the directory containing the executable file: Replace < file_name > with the name of your executable file. 2.5. Installing compiled Go projects You can install already compiled Go projects to use their executable files and libraries in further Go projects. After installation, the executable files and libraries of the project are copied to the corresponding directories in the Go workspace. Its dependencies are installed as well. Prerequisites A Go workspace with configured modules. For more information, see Setting up a Go workspace . Procedure To install a Go project, run: On Red Hat Enterprise Linux 8: Replace < go_project > with the name of the Go project you want to install. On Red Hat Enterprise Linux 9: Replace < go_project > with the name of the Go project you want to install. 2.6. Downloading and installing Go projects You can download and install third-party Go projects from online resources to use their executable files and libraries in further Go projects. After installation, the executable files and libraries of the project are copied to the corresponding directories in the Go workspace. Its dependencies are installed as well. Prerequisites A Go workspace. For more information, see Setting up a Go workspace . Procedure To download and install a Go project, run: On Red Hat Enterprise Linux 8: Replace < third_party_go_project > with the name of the project you want to download. On Red Hat Enterprise Linux 9: Replace < third_party_go_project > with the name of the project you want to download. For information on possible values of third-party projects, run: On Red Hat Enterprise Linux 8: On Red Hat Enterprise Linux 9: 2.7. Additional resources For more information on the Go compiler, see the official Go documentation .
To display the help index included in Go Toolset, run: On Red Hat Enterprise Linux 8: On Red Hat Enterprise Linux 9: To display documentation for specific Go packages, run: On Red Hat Enterprise Linux 8: On Red Hat Enterprise Linux 9: See Go packages for an overview of Go packages.
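As a minimal end-to-end sketch of the workflow above, the following shell session creates a hypothetical hello module, compiles it with an explicit output name, and runs the resulting binary. The module name, directory, and program text are illustrative assumptions, not part of Go Toolset itself:
$ mkdir -p $HOME/go/src/hello && cd $HOME/go/src/hello
$ go mod init hello        # creates a go.mod file for the module
$ cat > main.go <<'EOF'
package main

import "fmt"

func main() {
    fmt.Println("hello from Go Toolset")
}
EOF
$ go build -o hello .      # compiles the main package into ./hello
$ ./hello
hello from Go Toolset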
[ "echo 'export GOPATH=< workspace_dir >' >> USDHOME/.bashrc source USDHOME/.bashrc", "go build < output_file > < go_main_package >", "go build < output_file > < go_main_package >", "./< file_name >", "go install < go_project >", "go install < go_project >", "go install < third_party_go_project >", "go install < third_party_go_project >", "go help importpath", "go help importpath", "go help", "go help", "go doc < package_name >", "go doc < package_name >" ]
https://docs.redhat.com/en/documentation/red_hat_developer_tools/1/html/using_go_1.21.0_toolset/assembly_the-go-compiler_using-go-toolset
Chapter 12. Accessing the dashboard
Chapter 12. Accessing the dashboard After you have installed OpenShift AI and added users, you can access the URL for your OpenShift AI console and share the URL with the users to let them log in and work on their models. Prerequisites You have installed OpenShift AI on your OpenShift Dedicated or Red Hat OpenShift Service on Amazon Web Services (ROSA) cluster. You have added at least one user to the user group for OpenShift AI as described in Adding users . Procedure Log in to the OpenShift web console. Click the application launcher. Right-click on Red Hat OpenShift AI and copy the URL for your OpenShift AI instance. Provide this instance URL to your data scientists to let them log in to OpenShift AI. Verification Confirm that you and your users can log in to OpenShift AI by using the instance URL. Additional resources Logging in to OpenShift AI Adding users
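If you prefer the command line, the dashboard URL can usually also be read from the route that the OpenShift AI operator creates. The route and namespace names below ( rhods-dashboard in redhat-ods-applications ) are assumptions based on the product defaults, so verify them in your cluster before relying on them:
$ oc get route rhods-dashboard -n redhat-ods-applications -o jsonpath='{.spec.host}{"\n"}'   # prints the dashboard host name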
null
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/installing_the_openshift_ai_cloud_service/accessing-the-dashboard_install
Chapter 3. Upgrading RHCS 5 to RHCS 7 involving RHEL 8 to RHEL 9 upgrades with stretch mode enabled
Chapter 3. Upgrading RHCS 5 to RHCS 7 involving RHEL 8 to RHEL 9 upgrades with stretch mode enabled You can perform an upgrade from Red Hat Ceph Storage 5 to Red Hat Ceph Storage 7 involving Red Hat Enterprise Linux 8 to Red Hat Enterprise Linux 9 with the stretch mode enabled. Important Upgrade to the latest version of Red Hat Ceph Storage 5 prior to upgrading to the latest version of Red Hat Ceph Storage 7. Prerequisites Red Hat Ceph Storage 5 on Red Hat Enterprise Linux 8 with necessary hosts and daemons running with stretch mode enabled. Backup of Ceph binary ( /usr/sbin/cephadm ), ceph.pub ( /etc/ceph ), and the Ceph cluster's public SSH keys from the admin node. Procedure Log into the Cephadm shell: Example Label a second node as the admin in the cluster to manage the cluster when the admin node is re-provisioned. Syntax Example Set the noout flag. Example Drain all the daemons from the host: Syntax Example The _no_schedule label is automatically applied to the host, which blocks deployment. Check if all the daemons are removed from the storage cluster: Syntax Example Zap the devices so that if the hosts being drained have OSDs present, they can be used to re-deploy OSDs when the host is added back. Syntax Example Check the status of OSD removal: Example When no placement groups (PG) are left on the OSD, the OSD is decommissioned and removed from the storage cluster. Remove the host from the cluster: Syntax Example Re-provision the respective hosts from RHEL 8 to RHEL 9 as described in Upgrading from RHEL 8 to RHEL 9 . Run the preflight playbook with the --limit option: Syntax Example The preflight playbook installs podman , lvm2 , chronyd , and cephadm on the new host. After installation is complete, cephadm resides in the /usr/sbin/ directory. Extract the cluster's public SSH keys to a folder: Syntax Example Copy the Ceph cluster's public SSH keys to the re-provisioned node: Syntax Example Optional: If the removed host has a monitor daemon, then, before adding the host to the cluster, add the --unmanaged flag to the monitor deployment. Syntax Add the host again to the cluster and add the labels present earlier: Syntax Optional: If the removed host had a monitor daemon deployed originally, the monitor daemon needs to be added back manually with the location attributes as described in Replacing the tiebreaker with a new monitor . Syntax Example Syntax Example Verify that the daemons on the re-provisioned host are running successfully with the same ceph version: Syntax Set the monitor daemon placement back to managed . Syntax Repeat the above steps for all hosts. The arbiter monitor cannot be drained or removed from the host. Hence, the arbiter mon needs to be re-provisioned to another tie-breaker node, and then drained or removed from the host as described in Replacing the tiebreaker with a new monitor . Follow the same approach to re-provision admin nodes and use a second admin node to manage clusters. Add the backup files again to the node. Add admin nodes again to the cluster using the second admin node. Set the mon deployment to unmanaged . Follow Replacing the tiebreaker with a new monitor to add back the old arbiter mon and remove the temporary mon created earlier. Unset the noout flag. Syntax Verify the Ceph version and the cluster status to ensure that all daemons are working as expected after the Red Hat Enterprise Linux upgrade. Follow Upgrade a Red Hat Ceph Storage cluster using cephadm to perform the Red Hat Ceph Storage 5 to Red Hat Ceph Storage 7 upgrade.
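As an illustrative, condensed sketch of one drain/re-provision/re-add cycle for a single OSD host, the sequence below uses the hypothetical host host02 with the IP address and label from the examples in this chapter; adapt the names to your cluster:
$ ceph osd set noout
$ ceph orch host drain host02 --force
$ ceph orch osd rm status                  # repeat until no PGs remain on the host's OSDs
$ ceph orch host rm host02 --force
# re-provision host02 from RHEL 8 to RHEL 9, then run the preflight playbook
$ ssh-copy-id -f -i ~/ceph.pub root@host02
$ ceph orch host add host02 10.0.211.62 --labels=osd
$ ceph orch ps host02                      # verify the daemons return with the same ceph version
$ ceph osd unset noout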
[ "cephadm shell", "ceph orch host label add HOSTNAME _admin", "ceph orch host label add host02_admin", "ceph osd set noout", "ceph orch host drain HOSTNAME --force", "ceph orch host drain host02 --force", "ceph orch ps HOSTNAME", "ceph orch ps host02", "ceph orch device zap HOSTNAME DISK --force", "ceph orch device zap ceph-host02 /dev/vdb --force zap successful for /dev/vdb on ceph-host02", "ceph orch osd rm status", "ceph orch host rm HOSTNAME --force", "ceph orch host rm host02 --force", "ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --limit NEWHOST_NAME", "ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin={storage-product}\" --limit host02", "ceph cephadm get-pub-key ~/ PATH", "ceph cephadm get-pub-key ~/ceph.pub", "ssh-copy-id -f -i ~/ PATH root@ HOST_NAME_2", "ssh-copy-id -f -i ~/ceph.pub root@host02", "ceph orch apply mon PLACEMENT --unmanaged", "ceph orch host add HOSTNAME IP_ADDRESS --labels= LABELS", "ceph mon add HOSTNAME IP LOCATION", "ceph mon add ceph-host02 10.0.211.62 datacenter=DC2", "ceph orch daemon add mon HOSTNAME", "ceph orch daemon add mon ceph-host02", "ceph orch ps", "ceph orch apply mon PLACEMENT", "ceph osd unset noout" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/upgrade_guide/upgrading-rhcs5-to-rhcs7-involving-rhel8-to-rhel9-upgrades-with-stretch-mode-enabled_upgrade
4.14. Hewlett-Packard BladeSystem
4.14. Hewlett-Packard BladeSystem Table 4.15, "HP BladeSystem (Red Hat Enterprise Linux 6.4 and later)" lists the fence device parameters used by fence_hpblade , the fence agent for HP BladeSystem. Table 4.15. HP BladeSystem (Red Hat Enterprise Linux 6.4 and later) luci Field cluster.conf Attribute Description Name name The name assigned to the HP BladeSystem device connected to the cluster. IP Address or Hostname ipaddr The IP address or host name assigned to the HP BladeSystem device. IP Port (optional) ipport The TCP port to use to connect to the device. Login login The login name used to access the HP BladeSystem device. This parameter is required. Password passwd The password used to authenticate the connection to the fence device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. Force Command Prompt cmd_prompt The command prompt to use. The default value is '\$'. Missing port returns OFF instead of failure missing_as_off Missing port returns OFF instead of failure. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Use SSH secure Indicates that the system will use SSH to access the device. When using SSH, you must specify either a password, a password script, or an identity file. SSH Options ssh_options SSH options to use. The default value is -1 -c blowfish . Path to SSH Identity File identity_file The identity file for SSH. Figure 4.11, "HP BladeSystem" shows the configuration screen for adding an HP BladeSystem fence device. Figure 4.11. HP BladeSystem The following command creates a fence device instance for a BladeSystem device: The following is the cluster.conf entry for the fence_hpblade device:
[ "ccs -f cluster.conf --addfencedev hpbladetest1 agent=fence_hpblade cmd_prompt=c7000oa> ipaddr=192.168.0.1 login=root passwd=password123 missing_as_off=on power_wait=60", "<fencedevices> <fencedevice agent=\"fence_hpblade\" cmd_prompt=\"c7000oa>\" ipaddr=\"hpbladeaddr\" ipport=\"13456\" login=\"root\" missing_as_off=\"on\" name=\"hpbladetest1\" passwd=\"password123\" passwd_script=\"hpbladepwscr\" power_wait=\"60\"/> </fencedevices>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/fence_configuration_guide/s1-software-fence-hpblade-CA
Chapter 27. Adding and removing tracepoints from a running perf collector without stopping or restarting perf
Chapter 27. Adding and removing tracepoints from a running perf collector without stopping or restarting perf By using the control pipe interface to enable and disable different tracepoints in a running perf collector, you can dynamically adjust what data you are collecting without having to stop or restart perf . This ensures you do not lose performance data that would have otherwise been recorded during the stopping or restarting process. 27.1. Adding tracepoints to a running perf collector without stopping or restarting perf Add tracepoints to a running perf collector using the control pipe interface to adjust the data you are recording without having to stop perf and lose performance data. Prerequisites You have the perf user space tool installed as described in Installing perf . Procedure Configure the control pipe interface: Run perf record with the control file setup and events you are interested in enabling: In this example, declaring 'sched:*' after the -e option starts perf record with scheduler events. In a second terminal, start the read side of the control pipe: Starting the read side of the control pipe triggers the following message in the first terminal: In a third terminal, enable a tracepoint using the control file: This command triggers perf to scan the current event list in the control file for the declared event. If the event is present, the tracepoint is enabled and the following message appears in the first terminal: Once the tracepoint is enabled, the second terminal displays the output from perf detecting the tracepoint: 27.2. Removing tracepoints from a running perf collector without stopping or restarting perf Remove tracepoints from a running perf collector using the control pipe interface to reduce the scope of data you are collecting without having to stop perf and lose performance data. Prerequisites You have the perf user space tool installed as described in Installing perf . You have added tracepoints to a running perf collector via the control pipe interface. For more information, see Adding tracepoints to a running perf collector without stopping or restarting perf . Procedure Remove the tracepoint: Note This example assumes you have previously loaded scheduler events into the control file and enabled the tracepoint sched:sched_process_fork . This command triggers perf to scan the current event list in the control file for the declared event. If the event is present, the tracepoint is disabled and the following message appears in the terminal used to configure the control pipe:
[ "mkfifo control ack perf.pipe", "perf record --control=fifo:control,ack -D -1 --no-buffering -e ' sched:* ' -o - > perf.pipe", "cat perf.pipe | perf --no-pager script -i -", "Events disabled", "echo 'enable sched:sched_process_fork ' > control", "event sched:sched_process_fork enabled", "bash 33349 [034] 149587.674295: sched:sched_process_fork: comm=bash pid=33349 child_comm=bash child_pid=34056", "echo 'disable sched:sched_process_fork ' > control", "event sched:sched_process_fork disabled" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/monitoring_and_managing_system_status_and_performance/turning-tracepoints-on-and-off-without-stopping-or-restarting-perf_monitoring-and-managing-system-status-and-performance
Chapter 5. Installing a cluster on OpenStack on your own infrastructure
Chapter 5. Installing a cluster on OpenStack on your own infrastructure In OpenShift Container Platform version 4.12, you can install a cluster on Red Hat OpenStack Platform (RHOSP) that runs on user-provisioned infrastructure. Using your own infrastructure allows you to integrate your cluster with existing infrastructure and modifications. The process requires more labor on your part than installer-provisioned installations, because you must create all RHOSP resources, like Nova servers, Neutron ports, and security groups. However, Red Hat provides Ansible playbooks to help you in the deployment process. 5.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You verified that OpenShift Container Platform 4.12 is compatible with your RHOSP version by using the Supported platforms for OpenShift clusters section. You can also compare platform support across different versions by viewing the OpenShift Container Platform on RHOSP support matrix . You have an RHOSP account where you want to install OpenShift Container Platform. You understand performance and scalability practices for cluster scaling, control plane sizing, and etcd. For more information, see Recommended practices for scaling the cluster . On the machine from which you run the installation program, you have: A single directory in which you can keep the files you create during the installation process Python 3 5.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.12, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 5.3. Resource guidelines for installing OpenShift Container Platform on RHOSP To support an OpenShift Container Platform installation, your Red Hat OpenStack Platform (RHOSP) quota must meet the following requirements: Table 5.1. Recommended resources for a default OpenShift Container Platform cluster on RHOSP Resource Value Floating IP addresses 3 Ports 15 Routers 1 Subnets 1 RAM 88 GB vCPUs 22 Volume storage 275 GB Instances 7 Security groups 3 Security group rules 60 Server groups 2 - plus 1 for each additional availability zone in each machine pool A cluster might function with fewer than recommended resources, but its performance is not guaranteed. Important If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. In this case, the volume storage requirement is 175 GB. 
Swift space requirements vary depending on the size of the image registry. Note By default, your security group and security group rule quotas might be low. If you encounter problems, run openstack quota set --secgroups 3 --secgroup-rules 60 <project> as an administrator to increase them. An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine. 5.3.1. Control plane machines By default, the OpenShift Container Platform installation process creates three control plane machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 5.3.2. Compute machines By default, the OpenShift Container Platform installation process creates three compute machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 8 GB memory and 2 vCPUs At least 100 GB storage space from the RHOSP quota Tip Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can. 5.3.3. Bootstrap machine During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After the production control plane is ready, the bootstrap machine is deprovisioned. The bootstrap machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 5.4. Downloading playbook dependencies The Ansible playbooks that simplify the installation process on user-provisioned infrastructure require several Python modules. On the machine where you will run the installer, add the modules' repositories and then download them. Note These instructions assume that you are using Red Hat Enterprise Linux (RHEL) 8. Prerequisites Python 3 is installed on your machine. Procedure On a command line, add the repositories: Register with Red Hat Subscription Manager: $ sudo subscription-manager register # If not done already Pull the latest subscription data: $ sudo subscription-manager attach --pool=$YOUR_POOLID # If not done already Disable the current repositories: $ sudo subscription-manager repos --disable=* # If not done already Add the required repositories: $ sudo subscription-manager repos \ --enable=rhel-8-for-x86_64-baseos-rpms \ --enable=openstack-16-tools-for-rhel-8-x86_64-rpms \ --enable=ansible-2.9-for-rhel-8-x86_64-rpms \ --enable=rhel-8-for-x86_64-appstream-rpms Install the modules: $ sudo yum install python3-openstackclient ansible python3-openstacksdk python3-netaddr ansible-collections-openstack Ensure that the python command points to python3 : $ sudo alternatives --set python /usr/bin/python3 5.5. Downloading the installation playbooks Download Ansible playbooks that you can use to install OpenShift Container Platform on your own Red Hat OpenStack Platform (RHOSP) infrastructure. Prerequisites The curl command-line tool is available on your machine.
Procedure To download the playbooks to your working directory, run the following script from a command line: $ xargs -n 1 curl -O <<< ' https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/openstack/bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/openstack/common.yaml https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/openstack/compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/openstack/control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/openstack/inventory.yaml https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/openstack/network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/openstack/security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/openstack/down-bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/openstack/down-compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/openstack/down-control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/openstack/down-load-balancers.yaml https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/openstack/down-network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/openstack/down-security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/openstack/down-containers.yaml' The playbooks are downloaded to your machine. Important During the installation process, you can modify the playbooks to configure your deployment. Retain all playbooks for the life of your cluster. You must have the playbooks to remove your OpenShift Container Platform cluster from RHOSP. Important You must match any edits you make in the bootstrap.yaml , compute-nodes.yaml , control-plane.yaml , network.yaml , and security-groups.yaml files to the corresponding playbooks that are prefixed with down- . For example, edits to the bootstrap.yaml file must be reflected in the down-bootstrap.yaml file, too. If you do not edit both files, the supported cluster removal process will fail. 5.6. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program.
For example, on a computer that uses a Linux operating system, run the following command: $ tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 5.7. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: $ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> where <path>/<file_name> specifies the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: $ cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: $ cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: $ eval "$(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
Add your SSH private key to the ssh-agent : $ ssh-add <path>/<file_name> where <path>/<file_name> specifies the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 5.8. Creating the Red Hat Enterprise Linux CoreOS (RHCOS) image The OpenShift Container Platform installation program requires that a Red Hat Enterprise Linux CoreOS (RHCOS) image be present in the Red Hat OpenStack Platform (RHOSP) cluster. Retrieve the latest RHCOS image, then upload it using the RHOSP CLI. Prerequisites The RHOSP CLI is installed. Procedure Log in to the Red Hat Customer Portal's Product Downloads page . Under Version , select the most recent release of OpenShift Container Platform 4.12 for Red Hat Enterprise Linux (RHEL) 8. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Download the Red Hat Enterprise Linux CoreOS (RHCOS) - OpenStack Image (QCOW) . Decompress the image. Note You must decompress the RHOSP image before the cluster can use it. The name of the downloaded file might not contain a compression extension, like .gz or .tgz . To find out if or how the file is compressed, in a command line, enter: $ file <name_of_downloaded_file> From the image that you downloaded, create an image that is named rhcos in your cluster by using the RHOSP CLI: $ openstack image create --container-format=bare --disk-format=qcow2 --file rhcos-${RHCOS_VERSION}-openstack.qcow2 rhcos Important Depending on your RHOSP environment, you might be able to upload the image in either .raw or .qcow2 formats . If you use Ceph, you must use the .raw format. Warning If the installation program finds multiple images with the same name, it chooses one of them at random. To avoid this behavior, create unique names for resources in RHOSP. After you upload the image to RHOSP, it is usable in the installation process. 5.9. Verifying external network access The OpenShift Container Platform installation process requires external network access. You must provide an external network value to it, or deployment fails. Before you begin the process, verify that a network with the external router type exists in Red Hat OpenStack Platform (RHOSP). Prerequisites Configure OpenStack's networking service to have DHCP agents forward instances' DNS queries Procedure Using the RHOSP CLI, verify the name and ID of the 'External' network: $ openstack network list --long -c ID -c Name -c "Router Type" Example output +--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+ A network with an external router type appears in the network list. If at least one does not, see Creating a default floating IP network and Creating a default provider network . Note If the Neutron trunk service plugin is enabled, a trunk port is created by default. For more information, see Neutron trunk port . 5.10.
Enabling access to the environment At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments. You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally. 5.10.1. Enabling access with floating IP addresses Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API, cluster applications, and the bootstrap process. Procedure Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP: $ openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network> Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP: $ openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network> By using the Red Hat OpenStack Platform (RHOSP) CLI, create the bootstrap FIP: $ openstack floating ip create --description "bootstrap machine" <external_network> Add records that follow these patterns to your DNS server for the API and Ingress FIPs: api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP> Note If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file: <api_floating_ip> api.<cluster_name>.<base_domain> <application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain> <application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain> <application_floating_ip> integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain> The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also use the kubectl or oc clients. You can access the user applications by using the additional entries pointing to the <application_floating_ip>. This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing. Add the FIPs to the inventory.yaml file as the values of the following variables: os_api_fip os_bootstrap_fip os_ingress_fip If you use these values, you must also enter an external network as the value of the os_external_network variable in the inventory.yaml file. Tip You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration. 5.10.2. Completing installation without floating IP addresses You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses. In the inventory.yaml file, do not define the following variables: os_api_fip os_bootstrap_fip os_ingress_fip If you cannot provide an external network, you can also leave os_external_network blank. If you do not provide a value for os_external_network , a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance.
Later in the installation process, when you create network resources, you must configure external connectivity on your own. If you run the installer with the wait-for command from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines. Note You can enable name resolution by creating DNS records for the API and Ingress ports. For example: api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP> If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing. 5.11. Defining parameters for the installation program The OpenShift Container Platform installation program relies on a file that is called clouds.yaml . The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, login information, and authorization service URLs. Procedure Create the clouds.yaml file: If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it. Important Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml . If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml , see Config files in the RHOSP documentation. clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0' If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication: Copy the certificate authority file to your machine. Add the cacerts key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate: clouds: shiftstack: ... cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem" Tip After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a command line, run: $ oc edit configmap -n openshift-config cloud-provider-config Place the clouds.yaml file in one of the following locations: The value of the OS_CLIENT_CONFIG_FILE environment variable The current directory A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml The installation program searches for clouds.yaml in that order. 5.12. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP). Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file.
Change to the directory that contains the installation program and run the following command: $ ./openshift-install create install-config --dir <installation_directory> where, for <installation_directory> , you specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals; therefore, you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select openstack as the platform to target. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for installing the cluster. Specify the floating IP address to use for external access to the OpenShift API. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of this base and will also include the cluster name. Enter a name for your cluster. The name must be 14 or fewer characters long. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. You now have the file install-config.yaml in the directory that you specified. 5.13. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 5.13.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 5.2. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components.
The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The string must be 14 characters or fewer. platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 5.13.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 5.3. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The Red Hat OpenShift Networking network plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. An array with an IP address block in CIDR format.
For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 5.13.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 5.4. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . featureSet Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. 
Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 , ppc64le , and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . sshKey The SSH key to authenticate access to your cluster machines. 
Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . 5.13.4. Additional Red Hat OpenStack Platform (RHOSP) configuration parameters Additional RHOSP configuration parameters are described in the following table: Table 5.5. Additional RHOSP parameters Parameter Description Values compute.platform.openstack.rootVolume.size For compute machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Integer, for example 30 . compute.platform.openstack.rootVolume.type For compute machines, the root volume's type. String, for example performance . controlPlane.platform.openstack.rootVolume.size For control plane machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Integer, for example 30 . controlPlane.platform.openstack.rootVolume.type For control plane machines, the root volume's type. String, for example performance . platform.openstack.cloud The name of the RHOSP cloud to use from the list of clouds in the clouds.yaml file. String, for example MyCloud . platform.openstack.externalNetwork The RHOSP external network name to be used for installation. String, for example external . platform.openstack.computeFlavor The RHOSP flavor to use for control plane and compute machines. This property is deprecated. To use a flavor as the default for all machine pools, add it as the value of the type key in the platform.openstack.defaultMachinePlatform property. You can also set a flavor value for each machine pool individually. String, for example m1.xlarge . 5.13.5. Optional RHOSP configuration parameters Optional RHOSP configuration parameters are described in the following table: Table 5.6. Optional RHOSP parameters Parameter Description Values compute.platform.openstack.additionalNetworkIDs Additional networks that are associated with compute machines. Allowed address pairs are not created for additional networks. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . compute.platform.openstack.additionalSecurityGroupIDs Additional security groups that are associated with compute machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . compute.platform.openstack.zones RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installation program relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"] . compute.platform.openstack.rootVolume.zones For compute machines, the availability zone to install root volumes on. If you do not set a value for this parameter, the installation program selects the default availability zone. A list of strings, for example ["zone-1", "zone-2"] . compute.platform.openstack.serverGroupPolicy Server group policy to apply to the group that will contain the compute machines in the pool. You cannot change server group policies or affiliations after creation. 
Supported options include anti-affinity , soft-affinity , and soft-anti-affinity . The default value is soft-anti-affinity . An affinity policy prevents migrations and therefore affects RHOSP upgrades. The affinity policy is not supported. If you use a strict anti-affinity policy, an additional RHOSP host is required during instance migration. A server group policy to apply to the machine pool. For example, soft-affinity . controlPlane.platform.openstack.additionalNetworkIDs Additional networks that are associated with control plane machines. Allowed address pairs are not created for additional networks. Additional networks that are attached to a control plane machine are also attached to the bootstrap node. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . controlPlane.platform.openstack.additionalSecurityGroupIDs Additional security groups that are associated with control plane machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . controlPlane.platform.openstack.zones RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installation program relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"] . controlPlane.platform.openstack.rootVolume.zones For control plane machines, the availability zone to install root volumes on. If you do not set this value, the installation program selects the default availability zone. A list of strings, for example ["zone-1", "zone-2"] . controlPlane.platform.openstack.serverGroupPolicy Server group policy to apply to the group that will contain the control plane machines in the pool. You cannot change server group policies or affiliations after creation. Supported options include anti-affinity , soft-affinity , and soft-anti-affinity . The default value is soft-anti-affinity . An affinity policy prevents migrations, and therefore affects RHOSP upgrades. The affinity policy is not supported. If you use a strict anti-affinity policy, an additional RHOSP host is required during instance migration. A server group policy to apply to the machine pool. For example, soft-affinity . platform.openstack.clusterOSImage The location from which the installation program downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network. An HTTP or HTTPS URL, optionally with an SHA-256 checksum. For example, http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d . The value can also be the name of an existing Glance image, for example my-rhcos . platform.openstack.clusterOSImageProperties Properties to add to the installer-uploaded ClusterOSImage in Glance. This property is ignored if platform.openstack.clusterOSImage is set to an existing Glance image. You can use this property to exceed the default persistent volume (PV) limit for RHOSP of 26 PVs per node. To exceed the limit, set the hw_scsi_model property value to virtio-scsi and the hw_disk_bus value to scsi . 
You can also use this property to enable the QEMU guest agent by including the hw_qemu_guest_agent property with a value of yes . A list of key-value string pairs. For example, ["hw_scsi_model": "virtio-scsi", "hw_disk_bus": "scsi"] . platform.openstack.defaultMachinePlatform The default machine pool platform configuration. { "type": "ml.large", "rootVolume": { "size": 30, "type": "performance" } } platform.openstack.ingressFloatingIP An existing floating IP address to associate with the Ingress port. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . platform.openstack.apiFloatingIP An existing floating IP address to associate with the API load balancer. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . platform.openstack.externalDNS IP addresses for external DNS servers that cluster instances use for DNS resolution. A list of IP addresses as strings. For example, ["8.8.8.8", "192.168.1.12"] . platform.openstack.machinesSubnet The UUID of a RHOSP subnet that the cluster's nodes use. Nodes and virtual IP (VIP) ports are created on this subnet. The first item in networking.machineNetwork must match the value of machinesSubnet . If you deploy to a custom subnet, you cannot specify an external DNS server to the OpenShift Container Platform installer. Instead, add DNS to the subnet in RHOSP . A UUID as a string. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . 5.13.6. Custom subnets in RHOSP deployments Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice. The subnet's UUID is passed as the value of platform.openstack.machinesSubnet in the install-config.yaml file. This subnet is used as the cluster's primary subnet. By default, nodes and ports are created on it. You can create nodes and ports on a different RHOSP subnet by setting the value of the platform.openstack.machinesSubnet property to the subnet's UUID. Before you run the OpenShift Container Platform installer with a custom subnet, verify that your configuration meets the following requirements: The subnet that is used by platform.openstack.machinesSubnet has DHCP enabled. The CIDR of platform.openstack.machinesSubnet matches the CIDR of networking.machineNetwork . The installation program user has permission to create ports on this network, including ports with fixed IP addresses. Clusters that use custom subnets have the following limitations: If you plan to install a cluster that uses floating IP addresses, the platform.openstack.machinesSubnet subnet must be attached to a router that is connected to the externalNetwork network. If the platform.openstack.machinesSubnet value is set in the install-config.yaml file, the installation program does not create a private network or subnet for your RHOSP machines. You cannot use the platform.openstack.externalDNS property at the same time as a custom subnet. To add DNS to a cluster that uses a custom subnet, configure DNS on the RHOSP network. Note By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network's CIDR block. For example, if the machine network CIDR is 192.0.2.0/24 , the default API VIP is 192.0.2.5 and the default Ingress VIP is 192.0.2.7 . To override these default values, set values for platform.openstack.apiVIPs and platform.openstack.ingressVIPs that are outside of the DHCP allocation pool. Important The CIDR ranges for networks are not adjustable after cluster installation.
Red Hat does not provide direct guidance on determining the range during cluster installation because it requires careful consideration of the number of created pods per namespace. 5.13.7. Sample customized install-config.yaml file for RHOSP This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform (RHOSP) customization options. Important This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program. apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OVNKubernetes platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... 5.13.8. Setting a custom subnet for machines The IP range that the installation program uses by default might not match the Neutron subnet that you create when you install OpenShift Container Platform. If necessary, update the CIDR value for new machines by editing the installation configuration file. Prerequisites You have the install-config.yaml file that was generated by the OpenShift Container Platform installation program. Procedure On a command line, browse to the directory that contains install-config.yaml . From that directory, either run a script to edit the install-config.yaml file or update the file manually: To set the value by using a script, run: USD python -c ' import yaml; path = "install-config.yaml"; data = yaml.safe_load(open(path)); data["networking"]["machineNetwork"] = [{"cidr": "192.168.0.0/18"}]; 1 open(path, "w").write(yaml.dump(data, default_flow_style=False))' 1 Insert a value that matches your intended Neutron subnet, for example, 192.0.2.0/24 . To set the value manually, open the file and set the value of networking.machineNetwork.cidr to something that matches your intended Neutron subnet. 5.13.9. Emptying compute machine pools To proceed with an installation that uses your own infrastructure, set the number of compute machines in the installation configuration file to zero. Later, you create these machines manually. Prerequisites You have the install-config.yaml file that was generated by the OpenShift Container Platform installation program. Procedure On a command line, browse to the directory that contains install-config.yaml . From that directory, either run a script to edit the install-config.yaml file or update the file manually: To set the value by using a script, run: USD python -c ' import yaml; path = "install-config.yaml"; data = yaml.safe_load(open(path)); data["compute"][0]["replicas"] = 0; open(path, "w").write(yaml.dump(data, default_flow_style=False))' To set the value manually, open the file and set the value of compute.<first entry>.replicas to 0 . 5.13.10. Cluster deployment on RHOSP provider networks You can deploy your OpenShift Container Platform clusters on Red Hat OpenStack Platform (RHOSP) with a primary network interface on a provider network. Provider networks are commonly used to give projects direct access to a public network that can be used to reach the internet. You can also share provider networks among projects as part of the network creation process.
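If you are not sure which networks in your project are provider networks, you can list candidates with the openstacksdk package that was installed with the playbook dependencies. The following is a quick sketch, not part of the documented procedure; the cloud name shiftstack is an assumption taken from the sample clouds.yaml file, and provider attributes are typically visible only to users with administrative rights:

import openstack

# "shiftstack" is an assumed clouds.yaml entry; substitute your cloud name.
conn = openstack.connect(cloud="shiftstack")

for network in conn.network.networks():
    # provider_network_type is None unless your user can read provider
    # attributes, which usually requires administrative rights.
    print(network.name, network.provider_network_type, network.is_shared)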
RHOSP provider networks map directly to an existing physical network in the data center. A RHOSP administrator must create them. For example, OpenShift Container Platform workloads can be connected to a data center by using a provider network. OpenShift Container Platform clusters that are installed on provider networks do not require tenant networks or floating IP addresses. The installer does not create these resources during installation. Example provider network types include flat (untagged) and VLAN (802.1Q tagged). Note A cluster can support as many provider network connections as the network type allows. For example, VLAN networks typically support up to 4096 connections. You can learn more about provider and tenant networks in the RHOSP documentation . 5.13.10.1. RHOSP provider network requirements for cluster installation Before you install an OpenShift Container Platform cluster, your Red Hat OpenStack Platform (RHOSP) deployment and provider network must meet a number of conditions: The RHOSP networking service (Neutron) is enabled and accessible through the RHOSP networking API. The RHOSP networking service has the port security and allowed address pairs extensions enabled . The provider network can be shared with other tenants. Tip Use the openstack network create command with the --share flag to create a network that can be shared. The RHOSP project that you use to install the cluster must own the provider network, as well as an appropriate subnet. Tip To create a network for a project that is named "openshift," enter the following command: USD openstack network create --project openshift To create a subnet for a project that is named "openshift," enter the following command: USD openstack subnet create --project openshift To learn more about creating networks on RHOSP, read the provider networks documentation . If the cluster is owned by the admin user, you must run the installer as that user to create ports on the network. Important Provider networks must be owned by the RHOSP project that is used to create the cluster. If they are not, the RHOSP Compute service (Nova) cannot request a port from that network. Verify that the provider network can reach the RHOSP metadata service IP address, which is 169.254.169.254 by default. Depending on your RHOSP SDN and networking service configuration, you might need to provide the route when you create the subnet. For example: USD openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2 ... Optional: To secure the network, create role-based access control (RBAC) rules that limit network access to a single project. 5.13.10.2. Deploying a cluster that has a primary interface on a provider network You can deploy an OpenShift Container Platform cluster that has its primary network interface on a Red Hat OpenStack Platform (RHOSP) provider network. Prerequisites Your Red Hat OpenStack Platform (RHOSP) deployment is configured as described by "RHOSP provider network requirements for cluster installation". Procedure In a text editor, open the install-config.yaml file. Set the value of the platform.openstack.apiVIPs property to the IP address for the API VIP. Set the value of the platform.openstack.ingressVIPs property to the IP address for the Ingress VIP. Set the value of the platform.openstack.machinesSubnet property to the UUID of the provider network subnet. Set the value of the networking.machineNetwork.cidr property to the CIDR block of the provider network subnet.
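Before you save the file, you can confirm that the virtual IP addresses you chose fall inside the machine network CIDR, as the note below requires. This is a minimal sketch that uses only the Python standard library; the addresses are the example values from the configuration snippet that follows:

import ipaddress

# Example values that match the sample configuration below.
machine_network = ipaddress.ip_network("192.0.2.0/24")
vips = {"API": "192.0.2.13", "Ingress": "192.0.2.23"}

for role, vip in vips.items():
    if ipaddress.ip_address(vip) not in machine_network:
        raise SystemExit(f"{role} VIP {vip} is outside {machine_network}")
    print(f"{role} VIP {vip} is inside {machine_network}")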
Important The platform.openstack.apiVIPs and platform.openstack.ingressVIPs properties must both be unassigned IP addresses from the networking.machineNetwork.cidr block. Section of an installation configuration file for a cluster that relies on a RHOSP provider network ... platform: openstack: apiVIPs: 1 - 192.0.2.13 ingressVIPs: 2 - 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # ... networking: machineNetwork: - cidr: 192.0.2.0/24 1 2 In OpenShift Container Platform 4.12 and later, the apiVIP and ingressVIP configuration settings are deprecated. Instead, use a list format to enter values in the apiVIPs and ingressVIPs configuration settings. Warning You cannot set the platform.openstack.externalNetwork or platform.openstack.externalDNS parameters while using a provider network for the primary network interface. When you deploy the cluster, the installer uses the install-config.yaml file to deploy the cluster on the provider network. Tip You can add additional networks, including provider networks, to the platform.openstack.additionalNetworkIDs list. After you deploy your cluster, you can attach pods to additional networks. For more information, see Understanding multiple networks . 5.14. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines and compute machine sets: USD rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage these resources yourself, you do not have to initialize them. 
You can preserve the compute machine set files to create compute machines by using the machine API, but you must update references to them to match your environment. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory:
.
├── auth
│   ├── kubeadmin-password
│   └── kubeconfig
├── bootstrap.ign
├── master.ign
├── metadata.json
└── worker.ign
Export the metadata file's infraID key as an environment variable: USD export INFRA_ID=USD(jq -r .infraID metadata.json) Tip Extract the infraID key from metadata.json and use it as a prefix for all of the RHOSP resources that you create. By doing so, you avoid name conflicts when making multiple deployments in the same project. 5.15. Preparing the bootstrap Ignition files The OpenShift Container Platform installation process relies on bootstrap machines that are created from a bootstrap Ignition configuration file. Edit the file and upload it. Then, create a secondary bootstrap Ignition configuration file that Red Hat OpenStack Platform (RHOSP) uses to download the primary file. Prerequisites You have the bootstrap Ignition file that the installer program generates, bootstrap.ign . The infrastructure ID from the installer's metadata file is set as an environment variable ( USDINFRA_ID ). If the variable is not set, see Creating the Kubernetes manifest and Ignition config files . You have an HTTP(S)-accessible way to store the bootstrap Ignition file. The documented procedure uses the RHOSP image service (Glance), but you can also use the RHOSP storage service (Swift), Amazon S3, an internal HTTP server, or an ad hoc Nova server. Procedure Run the following Python script.
The script modifies the bootstrap Ignition file to set the hostname and, if available, CA certificate file when it runs: import base64 import json import os with open('bootstrap.ign', 'r') as f: ignition = json.load(f) files = ignition['storage'].get('files', []) infra_id = os.environ.get('INFRA_ID', 'openshift').encode() hostname_b64 = base64.standard_b64encode(infra_id + b'-bootstrap\n').decode().strip() files.append( { 'path': '/etc/hostname', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + hostname_b64 } }) ca_cert_path = os.environ.get('OS_CACERT', '') if ca_cert_path: with open(ca_cert_path, 'r') as f: ca_cert = f.read().encode() ca_cert_b64 = base64.standard_b64encode(ca_cert).decode().strip() files.append( { 'path': '/opt/openshift/tls/cloud-ca-cert.pem', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + ca_cert_b64 } }) ignition['storage']['files'] = files; with open('bootstrap.ign', 'w') as f: json.dump(ignition, f) Using the RHOSP CLI, create an image that uses the bootstrap Ignition file: USD openstack image create --disk-format=raw --container-format=bare --file bootstrap.ign <image_name> Get the image's details: USD openstack image show <image_name> Make a note of the file value; it follows the pattern v2/images/<image_ID>/file . Note Verify that the image you created is active. Retrieve the image service's public address: USD openstack catalog show image Combine the public address with the image file value and save the result as the storage location. The location follows the pattern <image_service_public_URL>/v2/images/<image_ID>/file . Generate an auth token and save the token ID: USD openstack token issue -c id -f value Insert the following content into a file called USDINFRA_ID-bootstrap-ignition.json and edit the placeholders to match your own values: { "ignition": { "config": { "merge": [{ "source": "<storage_url>", 1 "httpHeaders": [{ "name": "X-Auth-Token", 2 "value": "<token_ID>" 3 }] }] }, "security": { "tls": { "certificateAuthorities": [{ "source": "data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>" 4 }] } }, "version": "3.2.0" } } 1 Replace the value of ignition.config.merge.source with the bootstrap Ignition file storage URL. 2 Set name in httpHeaders to "X-Auth-Token" . 3 Set value in httpHeaders to your token's ID. 4 If the bootstrap Ignition file server uses a self-signed certificate, include the base64-encoded certificate. Save the secondary Ignition config file. The bootstrap Ignition data will be passed to RHOSP during installation. Warning The bootstrap Ignition file contains sensitive information, like clouds.yaml credentials. Ensure that you store it in a secure place, and delete it after you complete the installation process. 5.16. Creating control plane Ignition config files on RHOSP Installing OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) on your own infrastructure requires control plane Ignition config files. You must create multiple config files. Note As with the bootstrap Ignition configuration, you must explicitly define a hostname for each control plane machine. Prerequisites The infrastructure ID from the installation program's metadata file is set as an environment variable ( USDINFRA_ID ). If the variable is not set, see "Creating the Kubernetes manifest and Ignition config files". 
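The procedure below injects an /etc/hostname entry into each control plane Ignition file as a base64-encoded data URL. As a minimal illustration of the encoding that the script performs, the following sketch reproduces the value for the first control plane machine; the infrastructure ID shown is an assumed example, so substitute your own USDINFRA_ID value:

import base64

infra_id = "openshift-vw9j6"  # assumed example infrastructure ID
hostname = f"{infra_id}-master-0\n".encode()
encoded = base64.standard_b64encode(hostname).decode().strip()

# This is the value that the script stores in the /etc/hostname file entry:
print("data:text/plain;charset=utf-8;base64," + encoded)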
Procedure On a command line, run the following Python script: USD for index in USD(seq 0 2); do MASTER_HOSTNAME="USDINFRA_ID-master-USDindex\n" python -c "import base64, json, sys; ignition = json.load(sys.stdin); storage = ignition.get('storage', {}); files = storage.get('files', []); files.append({'path': '/etc/hostname', 'mode': 420, 'contents': {'source': 'data:text/plain;charset=utf-8;base64,' + base64.standard_b64encode(b'USDMASTER_HOSTNAME').decode().strip(), 'verification': {}}, 'filesystem': 'root'}); storage['files'] = files; ignition['storage'] = storage json.dump(ignition, sys.stdout)" <master.ign >"USDINFRA_ID-master-USDindex-ignition.json" done You now have three control plane Ignition files: <INFRA_ID>-master-0-ignition.json , <INFRA_ID>-master-1-ignition.json , and <INFRA_ID>-master-2-ignition.json . 5.17. Creating network resources on RHOSP Create the network resources that an OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) installation on your own infrastructure requires. To save time, run supplied Ansible playbooks that generate security groups, networks, subnets, routers, and ports. Prerequisites Python 3 is installed on your machine. You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". Procedure Optional: Add an external network value to the inventory.yaml playbook: Example external network value in the inventory.yaml Ansible playbook ... # The public network providing connectivity to the cluster. If not # provided, the cluster external connectivity must be provided in another # way. # Required for os_api_fip, os_ingress_fip, os_bootstrap_fip. os_external_network: 'external' ... Important If you did not provide a value for os_external_network in the inventory.yaml file, you must ensure that VMs can access Glance and an external connection yourself. Optional: Add external network and floating IP (FIP) address values to the inventory.yaml playbook: Example FIP values in the inventory.yaml Ansible playbook ... # OpenShift API floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the Control Plane to # serve the OpenShift API. os_api_fip: '203.0.113.23' # OpenShift Ingress floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the worker nodes to serve # the applications. os_ingress_fip: '203.0.113.19' # If this value is non-empty, the corresponding floating IP will be # attached to the bootstrap machine. This is needed for collecting logs # in case of install failure. os_bootstrap_fip: '203.0.113.20' Important If you do not define values for os_api_fip and os_ingress_fip , you must perform postinstallation network configuration. If you do not define a value for os_bootstrap_fip , the installer cannot download debugging information from failed installations. See "Enabling access to the environment" for more information. 
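Before you run the playbooks, you can sanity-check the addresses that you added to inventory.yaml . This is a sketch that uses the Python standard library and PyYAML; it assumes that the user-provided values live under all.hosts.localhost , as they do in the sample inventory shown later in this section:

import ipaddress
import yaml

with open("inventory.yaml") as f:
    inventory = yaml.safe_load(f)

# In the sample inventory, user-provided values live under all.hosts.localhost.
host_vars = inventory["all"]["hosts"]["localhost"]

for key in ("os_api_fip", "os_ingress_fip", "os_bootstrap_fip"):
    value = host_vars.get(key)
    if value:
        ipaddress.ip_address(value)  # raises ValueError for a malformed address
        print(f"{key} = {value}")
    else:
        print(f"{key} is not set")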
On a command line, create security groups by running the security-groups.yaml playbook: USD ansible-playbook -i inventory.yaml security-groups.yaml On a command line, create a network, subnet, and router by running the network.yaml playbook: USD ansible-playbook -i inventory.yaml network.yaml Optional: If you want to control the default resolvers that Nova servers use, run the RHOSP CLI command: USD openstack subnet set --dns-nameserver <server_1> --dns-nameserver <server_2> "USDINFRA_ID-nodes" Optionally, you can use the inventory.yaml file that you created to customize your installation. For example, you can deploy a cluster that uses bare metal machines. 5.17.1. Deploying a cluster with bare metal machines If you want your cluster to use bare metal machines, modify the inventory.yaml file. Your cluster can have both control plane and compute machines running on bare metal, or just compute machines. Bare-metal compute machines are not supported on clusters that use Kuryr. Note Be sure that your install-config.yaml file reflects whether the RHOSP network that you use for bare metal workers supports floating IP addresses or not. Prerequisites The RHOSP Bare Metal service (Ironic) is enabled and accessible via the RHOSP Compute API. Bare metal is available as a RHOSP flavor . If your cluster runs on an RHOSP version that is more than 16.1.6 and less than 16.2.4, bare metal workers do not function due to a known issue that causes the metadata service to be unavailable for services on OpenShift Container Platform nodes. The RHOSP network supports both VM and bare metal server attachment. If you want to deploy the machines on a pre-existing network, a RHOSP subnet is provisioned. If you want to deploy the machines on an installer-provisioned network, the RHOSP Bare Metal service (Ironic) is able to listen for and interact with Preboot eXecution Environment (PXE) boot machines that run on tenant networks. You created an inventory.yaml file as part of the OpenShift Container Platform installation process. Procedure In the inventory.yaml file, edit the flavors for machines: If you want to use bare-metal control plane machines, change the value of os_flavor_master to a bare metal flavor. Change the value of os_flavor_worker to a bare metal flavor. An example bare metal inventory.yaml file all: hosts: localhost: ansible_connection: local ansible_python_interpreter: "{{ansible_playbook_python}}" # User-provided values os_subnet_range: '10.0.0.0/16' os_flavor_master: 'my-bare-metal-flavor' 1 os_flavor_worker: 'my-bare-metal-flavor' 2 os_image_rhcos: 'rhcos' os_external_network: 'external' ... 1 If you want to have bare-metal control plane machines, change this value to a bare metal flavor. 2 Change this value to a bare metal flavor to use for compute machines. Use the updated inventory.yaml file to complete the installation process. Machines that are created during deployment use the flavor that you added to the file. Note The installer may time out while waiting for bare metal machines to boot. If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug 5.18. Creating the bootstrap machine on RHOSP Create a bootstrap machine and give it the network access it needs to run on Red Hat OpenStack Platform (RHOSP). Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". 
You downloaded the playbooks in "Downloading the installation playbooks". The inventory.yaml , common.yaml , and bootstrap.yaml Ansible playbooks are in a common directory. The metadata.json file that the installation program created is in the same directory as the Ansible playbooks. Procedure On a command line, change the working directory to the location of the playbooks. On a command line, run the bootstrap.yaml playbook: USD ansible-playbook -i inventory.yaml bootstrap.yaml After the bootstrap server is active, view the logs to verify that the Ignition files were received: USD openstack console log show "USDINFRA_ID-bootstrap" 5.19. Creating the control plane machines on RHOSP Create three control plane machines by using the Ignition config files that you generated. Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The infrastructure ID from the installation program's metadata file is set as an environment variable ( USDINFRA_ID ). The inventory.yaml , common.yaml , and control-plane.yaml Ansible playbooks are in a common directory. You have the three Ignition files that were created in "Creating control plane Ignition config files". Procedure On a command line, change the working directory to the location of the playbooks. If the control plane Ignition config files aren't already in your working directory, copy them into it. On a command line, run the control-plane.yaml playbook: USD ansible-playbook -i inventory.yaml control-plane.yaml Run the following command to monitor the bootstrapping process: USD openshift-install wait-for bootstrap-complete You will see messages that confirm that the control plane machines are running and have joined the cluster: INFO API v1.25.0 up INFO Waiting up to 30m0s for bootstrapping to complete... ... INFO It is now safe to remove the bootstrap resources 5.20. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 5.21. Deleting bootstrap resources from RHOSP Delete the bootstrap resources that you no longer need. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The inventory.yaml , common.yaml , and down-bootstrap.yaml Ansible playbooks are in a common directory. The control plane machines are running. If you do not know the status of the machines, see "Verifying cluster status". Procedure On a command line, change the working directory to the location of the playbooks. 
On a command line, run the down-bootstrap.yaml playbook: USD ansible-playbook -i inventory.yaml down-bootstrap.yaml The bootstrap port, server, and floating IP address are deleted. Warning If you did not disable the bootstrap Ignition file URL earlier, do so now. 5.22. Creating compute machines on RHOSP After standing up the control plane, create compute machines. Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The inventory.yaml , common.yaml , and compute-nodes.yaml Ansible playbooks are in a common directory. The metadata.json file that the installation program created is in the same directory as the Ansible playbooks. The control plane is active. Procedure On a command line, change the working directory to the location of the playbooks. On a command line, run the playbook: USD ansible-playbook -i inventory.yaml compute-nodes.yaml Next steps Approve the certificate signing requests for the machines. 5.23. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.25.0 master-1 Ready master 63m v1.25.0 master-2 Ready master 64m v1.25.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs).
If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.25.0 master-1 Ready master 73m v1.25.0 master-2 Ready master 74m v1.25.0 worker-0 Ready worker 11m v1.25.0 worker-1 Ready worker 11m v1.25.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 5.24. Verifying a successful installation Verify that the OpenShift Container Platform installation is complete. Prerequisites You have the installation program ( openshift-install ) Procedure On a command line, enter: USD openshift-install --log-level debug wait-for install-complete The program outputs the console URL, as well as the administrator's login information. 5.25. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.12, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 5.26. 
Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . If you need to enable external access to node ports, configure ingress cluster traffic by using a node port . If you did not configure RHOSP to accept application traffic over floating IP addresses, configure RHOSP access with floating IP addresses .
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_openstack/installing-openstack-user
Getting Started with Camel K
Getting Started with Camel K Red Hat build of Apache Camel K 1.10.5 Develop and run your first Camel K application
null
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.5/html/getting_started_with_camel_k/index
Chapter 47. New Drivers
Chapter 47. New Drivers Storage Drivers cxgbit libnvdimm mpt2sas nd_blk nd_btt nd_e820 nd_pmem nvme Network Drivers ath10k_core (BZ#1298484) ath10k_pci (BZ#1298484) bnxt_en (BZ#1184635) brcmfmac brcmsmac brcmutil btbcm btcoexist btintel btrtl c_can c_can_pci c_can_platform can-dev cc770 cc770_platform ems_pci ems_usb esd_usb2 fjes geneve hfi1 i40iw iwl3945 iwl4965 iwldvm iwlegacy iwlmvm iwlwifi (BZ#1298113) kvaser_pci kvaser_usb macsec mwifiex mwifiex_pcie mwifiex_sdio mwifiex_usb mwl8k peak_pci peak_usb plx_pci qed qede rdmavt rt2800lib rt2800mmio rt2800pci rt2800usb rt2x00lib rt2x00mmio rt2x00pci rt2x00usb rt61pci rt73usb rtl_pci rtl_usb rtl8187 rtl8188ee rtl8192c-common rtl8192ce rtl8192cu rtl8192de rtl8192ee rtl8192se rtl8723-common rtl8723ae rtl8723be rtl8821ae rtlwifi sja1000 sja1000_platform slcan softing uas usb_8dev vcan Graphics Drivers and Miscellaneous Drivers amdgpu amdkfd gp2ap002a00f gpio-ich gpio-viperboard idma64 int3400_thermal leds-lt3593 ledtrig-gpio nfit pci-hyperv pwm-lpss qat_c3xxx qat_c3xxxvf qat_c62x qat_c62xvf qat_dh895xccvf regmap-spi rotary_encoder rtc-rx4581 rtsx_usb rtsx_usb_sdmmc sht15 target_core_user tpm_st33zp24 tpm_st33zp24_i2c virt-dma virtio-gpu zram
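If you want to confirm that one of the modules listed above is available on an installed system, you can query it with the standard module tools; this is a quick check added for illustration, and the module name is only an example taken from the list: # Show the metadata that ships with the new NVMe driver
modinfo nvme
# Check whether the module is currently loaded
lsmod | grep nvme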
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.3_release_notes/new_drivers
probe::nfs.fop.aio_write
probe::nfs.fop.aio_write Name probe::nfs.fop.aio_write - NFS client aio_write file operation Synopsis nfs.fop.aio_write Values count: read bytes parent_name: parent dir name ino: inode number file_name: file name buf: the address of buf in user space dev: device identifier pos: offset of the file
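For illustration only (this example is not part of the tapset reference), a SystemTap one-liner that prints these values each time the probe fires could look like the following; it assumes the matching kernel debuginfo packages are installed and an NFS client is active: # Trace NFS client aio_write operations and print file, size, and offset
stap -e 'probe nfs.fop.aio_write { printf("%s/%s: %d bytes at pos %d\n", parent_name, file_name, count, pos) }'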
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-nfs-fop-aio-write
Chapter 2. Eclipse Temurin features
Chapter 2. Eclipse Temurin features Eclipse Temurin does not contain structural changes from the upstream distribution of OpenJDK. For the list of changes and security fixes that the latest OpenJDK 17 release of Eclipse Temurin includes, see OpenJDK 17.0.7 Released . OpenJDK enhancements OpenJDK 17 provides enhancements to features originally created in previous releases of OpenJDK. Certigna (Dhimyotis) root certificate authority (CA) certificate added In release OpenJDK 17.0.7, the cacerts truststore includes the Certigna (Dhimyotis) root certificate: Name: Certigna (Dhimyotis) Alias name: certignarootca Distinguished name: CN=Certigna, O=Dhimyotis, C=FR See JDK-8245654 (JDK Bug System) . New Java Flight Recorder (JFR) event jdk.InitialSecurityProperty With OpenJDK 17.0.7, the initial security properties that the java.security.Security class loads are now accessible in the new JFR event, jdk.InitialSecurityProperty . The jdk.InitialSecurityProperty event contains the following two fields: Key: The security property key. Value: The corresponding security property value. By using this new event and the existing jdk.SecurityPropertyModification event, you can now monitor security properties throughout their lifecycle. In this release, you can also print initial security properties to the standard error output stream when the -Djava.security.debug=properties property is passed to the Java virtual machine. See JDK-8292177 (JDK Bug System) . Error thrown if java.security file fails to load In previous releases, if OpenJDK could not load the java.security file, a hard-coded set of security properties was used. This set of properties was not fully maintained and it was unclear to the user when they were being used. Now, with OpenJDK 17.0.7, if OpenJDK cannot load the java.security file, OpenJDK displays an InternalError error message. See JDK-8155246 (JDK Bug System) . listRoots method returns all available drives on Windows In previous releases, the java.io.File.listRoots() method on Windows systems filtered out any disk drives that were not accessible or did not have media loaded. However, this filtering led to observable performance issues. With release OpenJDK 17.0.7, the listRoots method returns all available disk drives unfiltered. See JDK-8208077 (JDK Bug System) . Enhanced Swing platform support In earlier releases of OpenJDK, HTML object tags were rendered embedded in Swing HTML components. With release OpenJDK 17.0.7, rendering only occurs if you set the new system property swing.html.object to true. By default, the swing.html.object property is set to false. JDK bug system reference ID: JDK-8296832. Revised on 2024-05-03 15:37:24 UTC
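As a brief, hedged illustration of working with the new event using standard JDK tools (the file name and recording options are placeholders, not values from the release notes): # Print initial security properties to standard error while capturing a short recording
java -Djava.security.debug=properties -XX:StartFlightRecording=duration=10s,filename=security.jfr -version
# Inspect the recorded jdk.InitialSecurityProperty events
jfr print --events jdk.InitialSecurityProperty security.jfr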
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_eclipse_temurin_17.0.7/openjdk-temurin-features-17-0-7_openjdk
8.18. The Configuration Menu and Progress Screen
8.18. The Configuration Menu and Progress Screen Once you click Begin Installation at the Installation Summary screen, the progress screen appears. Red Hat Enterprise Linux reports the installation progress on the screen as it writes the selected packages to your system. Figure 8.39. Installing Packages For your reference, a complete log of your installation can be found in the /var/log/anaconda/anaconda.packaging.log file, once you reboot your system. If you chose to encrypt one or more partitions during partitioning setup, a dialog window with a progress bar will be displayed during the early stage of the installation process. This window informs you that the installer is attempting to gather enough entropy (random data) to ensure that the encryption is secure. This window will disappear after 256 bits of entropy are gathered, or after 10 minutes. You can speed up the gathering process by moving your mouse or randomly typing on the keyboard. After the window disappears, the installation process will continue. Figure 8.40. Gathering Entropy for Encryption While the packages are being installed, more configuration is required. Above the installation progress bar are the Root Password and User Creation menu items. The Root Password screen is used to configure the system's root account. This account can be used to perform critical system management and administration tasks. The same tasks can also be performed with a user account that has wheel group membership; if such a user account is created during installation, setting up a root password is not mandatory. Creating a user account is optional and can be done after installation, but it is recommended to do it on this screen. A user account is used for normal work and to access the system. Best practice suggests that you always access the system through a user account, not the root account. It is possible to disable access to the Root Password or Create User screens. To do so, use a Kickstart file which includes the rootpw --lock or user --lock commands. See Section 27.3.1, "Kickstart Commands and Options" for more information about these commands. 8.18.1. Set the Root Password Setting up a root account and password is an important step during your installation. The root account (also known as the superuser) is used to install packages, upgrade RPM packages, and perform most system maintenance. The root account gives you complete control over your system. For this reason, the root account is best used only to perform system maintenance or administration. See the Red Hat Enterprise Linux 7 System Administrator's Guide for more information about becoming root. Figure 8.41. Root Password Screen Note You must always set up at least one way to gain root privileges to the installed system: either using a root account, or by creating a user account with administrative privileges (member of the wheel group), or both. Click the Root Password menu item and enter your new password into the Root Password field. Red Hat Enterprise Linux displays the characters as asterisks for security. Type the same password into the Confirm field to ensure it is set correctly. After you set the root password, click Done to return to the User Settings screen.
The following are the requirements and recommendations for creating a strong root password: must be at least eight characters long may contain numbers, letters (upper and lower case) and symbols is case-sensitive and should contain a mix of cases something you can remember but that is not easily guessed should not be a word, abbreviation, or number associated with you, your organization, or found in a dictionary (including foreign languages) should not be written down; if you must write it down, keep it secure Note To change your root password after you have completed the installation, run the passwd command as root . If you forget the root password, see Section 32.1.3, "Resetting the Root Password" for instructions on how to use the rescue mode to set a new one. 8.18.2. Create a User Account To create a regular (non-root) user account during the installation, click User Settings on the progress screen. The Create User screen appears, allowing you to set up the regular user account and configure its parameters. Though recommended to do during installation, this step is optional and can be performed after the installation is complete. Note You must always set up at least one way to gain root privileges to the installed system: either using a root account, or by creating a user account with administrative privileges (member of the wheel group), or both. To leave the user creation screen after you have entered it, without creating a user, leave all the fields empty and click Done . Figure 8.42. User Account Configuration Screen Enter the full name and the user name in their respective fields. Note that the system user name must be shorter than 32 characters and cannot contain spaces. It is highly recommended to set up a password for the new account. When setting up a strong password even for a non-root user, follow the guidelines described in Section 8.18.1, "Set the Root Password" . Click the Advanced button to open a new dialog with additional settings. Figure 8.43. Advanced User Account Configuration By default, each user gets a home directory corresponding to their user name. In most scenarios, there is no need to change this setting. You can also manually define a system identification number for the new user and their default group by selecting the check boxes. The range for regular user IDs starts at the number 1000 . At the bottom of the dialog, you can enter a comma-separated list of additional groups to which the new user will belong. The new groups will be created in the system. To customize group IDs, specify the numbers in parentheses. Note Consider setting IDs of regular users and their default groups in a range starting at 5000 instead of 1000 . That is because the range reserved for system users and groups, 0 - 999 , might increase in the future and thus overlap with IDs of regular users. For creating users with custom IDs using kickstart, see user (optional) . For changing the minimum UID and GID limits after the installation, which ensures that your chosen UID and GID ranges are applied automatically on user creation, see the Users and Groups chapter of the System Administrator's Guide . Once you have customized the user account, click Save Changes to return to the User Settings screen.
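If you prefer to create an equivalent account from the command line after installation, a minimal sketch following the recommendations above might look like this (the user name and IDs are examples only): # Create a default group and a user with IDs in the recommended range starting at 5000
groupadd -g 5000 jsmith
useradd -u 5000 -g 5000 -G wheel -c "John Smith" jsmith
# Set the new user's password; run passwd with no argument to change the root password instead
passwd jsmith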
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/sect-configuration-progress-menu-x86
Chapter 10. Metal3RemediationTemplate [infrastructure.cluster.x-k8s.io/v1beta1]
Chapter 10. Metal3RemediationTemplate [infrastructure.cluster.x-k8s.io/v1beta1] Description Metal3RemediationTemplate is the Schema for the metal3remediationtemplates API. Type object 10.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Metal3RemediationTemplateSpec defines the desired state of Metal3RemediationTemplate. status object Metal3RemediationTemplateStatus defines the observed state of Metal3RemediationTemplate. 10.1.1. .spec Description Metal3RemediationTemplateSpec defines the desired state of Metal3RemediationTemplate. Type object Required template Property Type Description template object Metal3RemediationTemplateResource describes the data needed to create a Metal3Remediation from a template. 10.1.2. .spec.template Description Metal3RemediationTemplateResource describes the data needed to create a Metal3Remediation from a template. Type object Required spec Property Type Description spec object Spec is the specification of the desired behavior of the Metal3Remediation. 10.1.3. .spec.template.spec Description Spec is the specification of the desired behavior of the Metal3Remediation. Type object Property Type Description strategy object Strategy field defines remediation strategy. 10.1.4. .spec.template.spec.strategy Description Strategy field defines remediation strategy. Type object Property Type Description retryLimit integer Sets maximum number of remediation retries. timeout string Sets the timeout between remediation retries. type string Type of remediation. 10.1.5. .status Description Metal3RemediationTemplateStatus defines the observed state of Metal3RemediationTemplate. Type object Required status Property Type Description status object Metal3RemediationStatus defines the observed state of Metal3Remediation 10.1.6. .status.status Description Metal3RemediationStatus defines the observed state of Metal3Remediation Type object Property Type Description lastRemediated string LastRemediated identifies when the host was last remediated phase string Phase represents the current phase of machine remediation. E.g. Pending, Running, Done etc. retryCount integer RetryCount can be used as a counter during the remediation. Field can hold number of reboots etc. 10.2. 
API endpoints The following API endpoints are available: /apis/infrastructure.cluster.x-k8s.io/v1beta1/metal3remediationtemplates GET : list objects of kind Metal3RemediationTemplate /apis/infrastructure.cluster.x-k8s.io/v1beta1/namespaces/{namespace}/metal3remediationtemplates DELETE : delete collection of Metal3RemediationTemplate GET : list objects of kind Metal3RemediationTemplate POST : create a Metal3RemediationTemplate /apis/infrastructure.cluster.x-k8s.io/v1beta1/namespaces/{namespace}/metal3remediationtemplates/{name} DELETE : delete a Metal3RemediationTemplate GET : read the specified Metal3RemediationTemplate PATCH : partially update the specified Metal3RemediationTemplate PUT : replace the specified Metal3RemediationTemplate /apis/infrastructure.cluster.x-k8s.io/v1beta1/namespaces/{namespace}/metal3remediationtemplates/{name}/status GET : read status of the specified Metal3RemediationTemplate PATCH : partially update status of the specified Metal3RemediationTemplate PUT : replace status of the specified Metal3RemediationTemplate 10.2.1. /apis/infrastructure.cluster.x-k8s.io/v1beta1/metal3remediationtemplates HTTP method GET Description list objects of kind Metal3RemediationTemplate Table 10.1. HTTP responses HTTP code Response body 200 - OK Metal3RemediationTemplateList schema 401 - Unauthorized Empty 10.2.2. /apis/infrastructure.cluster.x-k8s.io/v1beta1/namespaces/{namespace}/metal3remediationtemplates HTTP method DELETE Description delete collection of Metal3RemediationTemplate Table 10.2. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Metal3RemediationTemplate Table 10.3. HTTP responses HTTP code Response body 200 - OK Metal3RemediationTemplateList schema 401 - Unauthorized Empty HTTP method POST Description create a Metal3RemediationTemplate Table 10.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.5. Body parameters Parameter Type Description body Metal3RemediationTemplate schema Table 10.6. HTTP responses HTTP code Response body 200 - OK Metal3RemediationTemplate schema 201 - Created Metal3RemediationTemplate schema 202 - Accepted Metal3RemediationTemplate schema 401 - Unauthorized Empty 10.2.3.
/apis/infrastructure.cluster.x-k8s.io/v1beta1/namespaces/{namespace}/metal3remediationtemplates/{name} Table 10.7. Global path parameters Parameter Type Description name string name of the Metal3RemediationTemplate HTTP method DELETE Description delete a Metal3RemediationTemplate Table 10.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 10.9. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Metal3RemediationTemplate Table 10.10. HTTP responses HTTP code Response body 200 - OK Metal3RemediationTemplate schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Metal3RemediationTemplate Table 10.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.12. HTTP responses HTTP code Response body 200 - OK Metal3RemediationTemplate schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Metal3RemediationTemplate Table 10.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields.
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.14. Body parameters Parameter Type Description body Metal3RemediationTemplate schema Table 10.15. HTTP responses HTTP code Response body 200 - OK Metal3RemediationTemplate schema 201 - Created Metal3RemediationTemplate schema 401 - Unauthorized Empty 10.2.4. /apis/infrastructure.cluster.x-k8s.io/v1beta1/namespaces/{namespace}/metal3remediationtemplates/{name}/status Table 10.16. Global path parameters Parameter Type Description name string name of the Metal3RemediationTemplate HTTP method GET Description read status of the specified Metal3RemediationTemplate Table 10.17. HTTP responses HTTP code Response body 200 - OK Metal3RemediationTemplate schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Metal3RemediationTemplate Table 10.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered.
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.19. HTTP responses HTTP code Response body 200 - OK Metal3RemediationTemplate schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Metal3RemediationTemplate Table 10.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.21. Body parameters Parameter Type Description body Metal3RemediationTemplate schema Table 10.22. HTTP responses HTTP code Response body 200 - OK Metal3RemediationTemplate schema 201 - Created Metal3RemediationTemplate schema 401 - Unauthorized Empty
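For illustration only (the resource name, namespace, and strategy values below are assumptions, not part of this schema reference), a Metal3RemediationTemplate could be created and listed with the oc client as follows: # Create a template whose spec matches the schema described above
oc apply -f - <<'EOF'
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: Metal3RemediationTemplate
metadata:
  name: example-remediation-template
  namespace: openshift-machine-api
spec:
  template:
    spec:
      strategy:
        type: Reboot      # assumed remediation type
        retryLimit: 3     # maximum number of remediation retries
        timeout: 300s     # timeout between remediation retries
EOF
# Confirm the resource exists
oc get metal3remediationtemplates -n openshift-machine-api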
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/provisioning_apis/metal3remediationtemplate-infrastructure-cluster-x-k8s-io-v1beta1
Chapter 44. Message Interface
Chapter 44. Message Interface Abstract This chapter describes how to implement the Message interface, which is an optional step in the implementation of an Apache Camel component. 44.1. The Message Interface Overview An instance of the org.apache.camel.Message type can represent any kind of message ( In or Out ). Figure 44.1, "Message Inheritance Hierarchy" shows the inheritance hierarchy for the message type. You do not always need to implement a custom message type for a component. In many cases, the default implementation, DefaultMessage , is adequate. Figure 44.1. Message Inheritance Hierarchy The Message interface Example 44.1, "Message Interface" shows the definition of the org.apache.camel.Message interface. Example 44.1. Message Interface Message methods The Message interface defines the following methods: setMessageId() , getMessageId() - Getter and setter methods for the message ID. Whether or not you need to use a message ID in your custom component is an implementation detail. getExchange() - Returns a reference to the parent exchange object. isFault() , setFault() - Getter and setter methods for the fault flag, which indicates whether or not this message is a fault message. getHeader() , getHeaders() , setHeader() , setHeaders() , removeHeader() , hasHeaders() - Getter and setter methods for the message headers. In general, these message headers can be used either to store actual header data, or to store miscellaneous metadata. getBody() , getMandatoryBody() , setBody() - Getter and setter methods for the message body. The getMandatoryBody() accessor guarantees that the returned body is non-null, otherwise the InvalidPayloadException exception is thrown. getAttachment() , getAttachments() , getAttachmentNames() , removeAttachment() , addAttachment() , setAttachments() , hasAttachments() - Methods to get, set, add, and remove attachments. copy() - Creates a new, identical (including the message ID) copy of the current custom message object. copyFrom() - Copies the complete contents (including the message ID) of the specified generic message object, message , into the current message instance. Because this method must be able to copy from any message type, it copies the generic message properties, but not the custom properties. createExchangeId() - Returns the unique ID for this exchange, if the message implementation is capable of providing an ID; otherwise, returns null . 44.2. Implementing the Message Interface How to implement a custom message Example 44.2, "Custom Message Implementation" outlines how to implement a message by extending the DefaultMessage class. Example 44.2. Custom Message Implementation 1 Implements a custom message class, CustomMessage , by extending the org.apache.camel.impl.DefaultMessage class. 2 Typically, you need a default constructor that creates a message with default properties. 3 Override the toString() method to customize message stringification. 4 The newInstance() method is called from inside the MessageSupport.copy() method. Customization of the newInstance() method should focus on copying all of the custom properties of the current message instance into the new message instance. The MessageSupport.copy() method copies the generic message properties by calling copyFrom() . 5 The createBody() method works in conjunction with the MessageSupport.getBody() method to implement lazy access to the message body. By default, the message body is null .
It is only when the application code tries to access the body (by calling getBody() ) that the body should be created. The MessageSupport.getBody() method automatically calls createBody() when the message body is accessed for the first time. 6 The populateInitialHeaders() method works in conjunction with the header getter and setter methods to implement lazy access to the message headers. This method parses the message to extract any message headers and inserts them into the hash map, map . The populateInitialHeaders() method is automatically called when a user attempts to access a header (or headers) for the first time (by calling getHeader() , getHeaders() , setHeader() , or setHeaders() ). 7 The populateInitialAttachments() method works in conjunction with the attachment getter and setter methods to implement lazy access to the attachments. This method extracts the message attachments and inserts them into the hash map, map . The populateInitialAttachments() method is automatically called when a user attempts to access an attachment (or attachments) for the first time by calling getAttachment() , getAttachments() , getAttachmentNames() , or addAttachment() .
[ "package org.apache.camel; import java.util.Map; import java.util.Set; import javax.activation.DataHandler; public interface Message { String getMessageId(); void setMessageId(String messageId); Exchange getExchange(); boolean isFault(); void setFault(boolean fault); Object getHeader(String name); Object getHeader(String name, Object defaultValue); <T> T getHeader(String name, Class<T> type); <T> T getHeader(String name, Object defaultValue, Class<T> type); Map<String, Object> getHeaders(); void setHeader(String name, Object value); void setHeaders(Map<String, Object> headers); Object removeHeader(String name); boolean removeHeaders(String pattern); boolean hasHeaders(); Object getBody(); Object getMandatoryBody() throws InvalidPayloadException; <T> T getBody(Class<T> type); <T> T getMandatoryBody(Class<T> type) throws InvalidPayloadException; void setBody(Object body); <T> void setBody(Object body, Class<T> type); DataHandler getAttachment(String id); Map<String, DataHandler> getAttachments(); Set<String> getAttachmentNames(); void removeAttachment(String id); void addAttachment(String id, DataHandler content); void setAttachments(Map<String, DataHandler> attachments); boolean hasAttachments(); Message copy(); void copyFrom(Message message); String createExchangeId(); }", "import org.apache.camel.Exchange; import org.apache.camel.impl.DefaultMessage; public class CustomMessage extends DefaultMessage { 1 public CustomMessage () { 2 // Create message with default properties } @Override public String toString() { 3 // Return a stringified message } @Override public CustomMessage newInstance() { 4 return new CustomMessage ( ... ); } @Override protected Object createBody() { 5 // Return message body (lazy creation). } @Override protected void populateInitialHeaders(Map&lt;String, Object&gt; map) { 6 // Initialize headers from underlying message (lazy creation). } @Override protected void populateInitialAttachments(Map&lt;String, DataHandler&gt; map) { 7 // Initialize attachments from underlying message (lazy creation). } }" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_development_guide/MessageIntf
Chapter 5. Decision Model and Notation (DMN)
Chapter 5. Decision Model and Notation (DMN) Decision Model and Notation (DMN) is a standard established by the Object Management Group (OMG) for describing and modeling operational decisions. DMN defines an XML schema that enables DMN models to be shared between DMN-compliant platforms and across organizations so that business analysts and business rules developers can collaborate in designing and implementing DMN decision services. The DMN standard is similar to and can be used together with the Business Process Model and Notation (BPMN) standard for designing and modeling business processes. For more information about the background and applications of DMN, see the OMG Decision Model and Notation specification . 5.1. Creating the traffic violations DMN decision requirements diagram (DRD) A decision requirements diagram (DRD) is a visual representation of your DMN model. Use the DMN designer in Business Central to design the DRD for the traffic violations project and to define the decision logic of the DRD components. Figure 5.1. DRD for the Traffic Violations example Prerequisites You have created the traffic violations project in Business Central. Procedure On the traffic-violation project's home page, click Add Asset . On the Add Asset page, click DMN . The Create new DMN window is opened. In the Create new DMN window, enter Traffic Violation in the DMN name field. From the Package list, select com.myspace.traffic_violation . Click Ok . The DMN asset in the DMN designer is opened. In the DMN designer canvas, drag two DMN Input Data input nodes onto the canvas. Figure 5.2. DMN Input Data nodes In the upper-right corner, click the icon. Double-click the input nodes and rename one to Driver and the other to Violation . Drag a DMN Decision decision node onto the canvas. Double-click the decision node and rename it to Fine . Click the Violation input node, select the Create DMN Information Requirement icon and click the Fine decision node to link the two nodes. Figure 5.3. Create DMN Information Requirement icon Drag a DMN Decision decision node onto the canvas. Double-click the decision node and rename it to Should the driver be suspended? . Click the Driver input node, select the Create DMN Information Requirement icon and click the Should the driver be suspended? decision node to link the two nodes. Click the Fine decision node, select the Create DMN Information Requirement icon, and select the Should the driver be suspended? decision node. Click Save . Note As you periodically save a DRD, the DMN designer performs a static validation of the DMN model and might produce error messages until the model is defined completely. After you finish defining the DMN model completely, if any errors remain, troubleshoot the specified problems accordingly. 5.2. Creating the traffic violations DMN custom data types DMN data types determine the structure of the data that you use within a table, column, or field in a DMN boxed expression for defining decision logic. You can use default DMN data types (such as string, number, or boolean) or you can create custom data types to specify additional fields and constraints that you want to implement for the boxed expression values. Use the DMN designer's Data Types tab in Business Central to define the custom data types for the traffic violations project. Figure 5.4. The custom data types tab The following tables list the tDriver , tViolation , and tFine custom data types that you will create for this project. Table 5.1. 
tDriver custom data type Name Type tDriver Structure Name string Age number State string City string Points number Table 5.2. tViolation custom data type Name Type tViolation Structure Code string Date date Type string Speed Limit number Actual Speed number Table 5.3. tFine custom data type Name Type tFine Structure Amount number Points number Prerequisites You created the traffic violations DMN decision requirements diagram (DRD) in Business Central. Procedure To create the tDriver custom data type, click Add a custom Data Type on the Data Types tab, enter tDriver in the Name field, and select Structure from the Type list. Click the check mark to the right of the new data type to save your changes. Figure 5.5. The tDriver custom data type Add each of the following nested data types to the tDriver structured data type by clicking the plus sign next to tDriver for each new nested data type. Click the check mark to the right of each new data type to save your changes. Name (string) Age (number) State (string) City (string) Points (number) To create the tViolation custom data type, click New Data Type , enter tViolation in the Name field, and select Structure from the Type list. Click the check mark to the right of the new data type to save your changes. Figure 5.6. The tViolation custom data type Add each of the following nested data types to the tViolation structured data type by clicking the plus sign next to tViolation for each new nested data type. Click the check mark to the right of each new data type to save your changes. Code (string) Date (date) Type (string) Speed Limit (number) Actual Speed (number) To add the following constraints to the Type nested data type, click the edit icon, click Add Constraints , and select Enumeration from the Select constraint type drop-down menu. speed parking driving under the influence Click OK , then click the check mark to the right of the Type data type to save your changes. To create the tFine custom data type, click New Data Type , enter tFine in the Name field, select Structure from the Type list, and click Save . Figure 5.7. The tFine custom data type Add each of the following nested data types to the tFine structured data type by clicking the plus sign next to tFine for each new nested data type. Click the check mark to the right of each new data type to save your changes. Amount (number) Points (number) Click Save . 5.3. Assigning custom data types to the DRD input and decision nodes After you create the DMN custom data types, assign them to the appropriate DMN Input Data and DMN Decision nodes in the traffic violations DRD. Prerequisites You have created the traffic violations DMN custom data types in Business Central. Procedure Click the Model tab on the DMN designer and click the Properties icon in the upper-right corner of the DMN designer to expose the DRD properties. In the DRD, select the Driver input data node and in the Properties panel, select tDriver from the Data type drop-down menu. Select the Violation input data node and select tViolation from the Data type drop-down menu. Select the Fine decision node and select tFine from the Data type drop-down menu. Select the Should the driver be suspended? decision node and set the following properties: Data type : string Question : Should the driver be suspended due to points on his driver license? Allowed Answers : Yes,No Click Save . You have assigned the custom data types to your DRD's input and decision nodes. 5.4.
Defining the traffic violations DMN decision logic To calculate the fine and to decide whether the driver is to be suspended or not, you can define the traffic violations DMN decision logic using a DMN decision table and context boxed expression. Figure 5.8. Fine expression Figure 5.9. Should the driver be suspended expression Prerequisites You have assigned the DMN custom data types to the appropriate decision and input nodes in the traffic violations DRD in Business Central. Procedure To calculate the fine, in the DMN designer canvas, select the Fine decision node and click the Edit icon to open the DMN boxed expression designer. Figure 5.10. Decision node edit icon Click Select expression Decision Table . Figure 5.11. Select Decision Table logic type For the Violation.Date , Violation.Code , and Violation.Speed Limit columns, right-click and select Delete for each field. Click the Violation.Actual Speed column header and enter the expression Violation.Actual Speed - Violation.Speed Limit in the Expression field. Enter the following values in the first row of the decision table: Violation.Type : "speed" Violation.Actual Speed - Violation.Speed Limit : [10..30) Amount : 500 Points : 3 Right-click the first row and select Insert below to add another row. Enter the following values in the second row of the decision table: Violation.Type : "speed" Violation.Actual Speed - Violation.Speed Limit : >= 30 Amount : 1000 Points : 7 Right-click the second row and select Insert below to add another row. Enter the following values in the third row of the decision table: Violation.Type : "parking" Violation.Actual Speed - Violation.Speed Limit : - Amount : 100 Points : 1 Right-click the third row and select Insert below to add another row. Enter the following values in the fourth row of the decision table: Violation.Type : "driving under the influence" Violation.Actual Speed - Violation.Speed Limit : - Amount : 1000 Points : 5 Click Save . To define the driver suspension rule, return to the DMN designer canvas, select the Should the driver be suspended? decision node, and click the Edit icon to open the DMN boxed expression designer. Click Select expression Context . Click ContextEntry-1 , enter Total Points as the Name , and select number from the Data Type drop-down menu. Click the cell next to Total Points , select Literal expression from the context menu, and enter Driver.Points + Fine.Points as the expression. In the cell below Driver.Points + Fine.Points , select Literal Expression from the context menu, and enter if Total Points >= 20 then "Yes" else "No" . Click Save . You have defined how to calculate the fine and the context for deciding when to suspend the driver. You can navigate to the traffic-violation project page and click Build to build the example project and address any errors noted in the Alerts panel.
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/getting_started_with_red_hat_decision_manager/dmn-con_getting-started-decision-services
Chapter 11. Intercepting Messages
Chapter 11. Intercepting Messages With AMQ Broker you can intercept packets entering or exiting the broker, allowing you to audit packets or filter messages. Interceptors can change the packets they intercept, which makes them powerful, but also potentially dangerous. You can develop interceptors to meet your business requirements. Interceptors are protocol specific and must implement the appropriate interface. Interceptors must implement the intercept() method, which returns a boolean value. If the value is true , the message packet continues onward. If false , the process is aborted, no other interceptors are called, and the message packet is not processed further. 11.1. Creating Interceptors You can create your own incoming and outgoing interceptors. All interceptors are protocol specific and are called for any packet entering or exiting the server respectively. This allows you to create interceptors to meet business requirements such as auditing packets. Interceptors can change the packets they intercept. This makes them powerful as well as potentially dangerous, so be sure to use them with caution. Interceptors and their dependencies must be placed in the Java classpath of the broker. You can use the <broker_instance_dir> /lib directory since it is part of the classpath by default. Procedure The following examples demonstrate how to create an interceptor that checks the size of each packet passed to it. Note that the examples implement a specific interface for each protocol. Implement the appropriate interface and override its intercept() method. If you are using the AMQP protocol, implement the org.apache.activemq.artemis.protocol.amqp.broker.AmqpInterceptor interface. package com.example; import org.apache.activemq.artemis.protocol.amqp.broker.AMQPMessage; import org.apache.activemq.artemis.protocol.amqp.broker.AmqpInterceptor; import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection; public class MyInterceptor implements AmqpInterceptor { private final int ACCEPTABLE_SIZE = 1024; @Override public boolean intercept(final AMQPMessage message, RemotingConnection connection) { int size = message.getEncodeSize(); if (size <= ACCEPTABLE_SIZE) { System.out.println("This AMQPMessage has an acceptable size."); return true; } return false; } } If you are using Core Protocol, your interceptor must implement the org.apache.activemq.artemis.api.core.Interceptor interface. package com.example; import org.apache.activemq.artemis.api.core.ActiveMQException; import org.apache.activemq.artemis.api.core.Interceptor; import org.apache.activemq.artemis.core.protocol.core.Packet; import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection; public class MyInterceptor implements Interceptor { private final int ACCEPTABLE_SIZE = 1024; @Override public boolean intercept(Packet packet, RemotingConnection connection) throws ActiveMQException { int size = packet.getPacketSize(); if (size <= ACCEPTABLE_SIZE) { System.out.println("This Packet has an acceptable size."); return true; } return false; } } If you are using the MQTT protocol, implement the org.apache.activemq.artemis.core.protocol.mqtt.MQTTInterceptor interface.
package com.example; import org.apache.activemq.artemis.core.protocol.mqtt.MQTTInterceptor; import io.netty.handler.codec.mqtt.MqttMessage; import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection; import org.apache.activemq.artemis.api.core.ActiveMQException; public class MyInterceptor implements MQTTInterceptor { private final int ACCEPTABLE_SIZE = 1024; @Override public boolean intercept(MqttMessage mqttMessage, RemotingConnection connection) throws ActiveMQException { byte[] msg = (mqttMessage.toString()).getBytes(); int size = msg.length; if (size <= ACCEPTABLE_SIZE) { System.out.println("This MqttMessage has an acceptable size."); return true; } return false; } } If you are using the STOMP protocol, implement the org.apache.activemq.artemis.core.protocol.stomp.StompFrameInterceptor interface. package com.example; import org.apache.activemq.artemis.core.protocol.stomp.StompFrameInterceptor; import org.apache.activemq.artemis.core.protocol.stomp.StompFrame; import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection; import org.apache.activemq.artemis.api.core.ActiveMQException; public class MyInterceptor implements StompFrameInterceptor { private final int ACCEPTABLE_SIZE = 1024; @Override public boolean intercept(StompFrame stompFrame, RemotingConnection connection) throws ActiveMQException { int size = stompFrame.getEncodedSize(); if (size <= ACCEPTABLE_SIZE) { System.out.println("This StompFrame has an acceptable size."); return true; } return false; } } 11.2. Configuring the Broker to Use Interceptors Once you have created an interceptor, you must configure the broker to use it. Prerequisites You must create an interceptor class and add it (and its dependencies) to the Java classpath of the broker before you can configure it for use by the broker. You can use the <broker_instance_dir> /lib directory since it is part of the classpath by default. Procedure Configure the broker to use an interceptor by adding configuration to <broker_instance_dir> /etc/broker.xml . If your interceptor is intended for incoming messages, add its class-name to the list of remoting-incoming-interceptors . <configuration> <core> ... <remoting-incoming-interceptors> <class-name>org.example.MyIncomingInterceptor</class-name> </remoting-incoming-interceptors> ... </core> </configuration> If your interceptor is intended for outgoing messages, add its class-name to the list of remoting-outgoing-interceptors . <configuration> <core> ... <remoting-outgoing-interceptors> <class-name>org.example.MyOutgoingInterceptor</class-name> </remoting-outgoing-interceptors> </core> </configuration> Additional resources To learn how to configure interceptors in the AMQ Core Protocol JMS client, see Using message interceptors in the AMQ Core Protocol JMS documentation.
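A minimal sketch of compiling and deploying one of the interceptor classes above (the paths and jar name are placeholders, and the broker's own libraries are assumed to provide the compile-time dependencies): # Compile the interceptor against the broker libraries
javac -cp "<broker_instance_dir>/lib/*" com/example/MyInterceptor.java
# Package the class and place the jar on the broker classpath
jar cf my-interceptor.jar com/example/MyInterceptor.class
cp my-interceptor.jar <broker_instance_dir>/lib/
# Restart the broker instance so the new jar and the broker.xml changes take effect
<broker_instance_dir>/bin/artemis-service restart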
[ "package com.example; import org.apache.activemq.artemis.protocol.amqp.broker.AMQPMessage; import org.apache.activemq.artemis.protocol.amqp.broker.AmqpInterceptor; import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection; public class MyInterceptor implements AmqpInterceptor { private final int ACCEPTABLE_SIZE = 1024; @Override public boolean intercept(final AMQPMessage message, RemotingConnection connection) { int size = message.getEncodeSize(); if (size <= ACCEPTABLE_SIZE) { System.out.println(\"This AMQPMessage has an acceptable size.\"); return true; } return false; } }", "package com.example; import org.apache.activemq.artemis.api.core.ActiveMQException; import org.apache.activemq.artemis.api.core.Interceptor; import org.apache.activemq.artemis.core.protocol.core.Packet; import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection; public class MyInterceptor implements Interceptor { private final int ACCEPTABLE_SIZE = 1024; @Override public boolean intercept(Packet packet, RemotingConnection connection) throws ActiveMQException { int size = packet.getPacketSize(); if (size <= ACCEPTABLE_SIZE) { System.out.println(\"This Packet has an acceptable size.\"); return true; } return false; } }", "package com.example; import org.apache.activemq.artemis.core.protocol.mqtt.MQTTInterceptor; import io.netty.handler.codec.mqtt.MqttMessage; import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection; import org.apache.activemq.artemis.api.core.ActiveMQException; public class MyInterceptor implements MQTTInterceptor { private final int ACCEPTABLE_SIZE = 1024; @Override public boolean intercept(MqttMessage mqttMessage, RemotingConnection connection) throws ActiveMQException { byte[] msg = (mqttMessage.toString()).getBytes(); int size = msg.length; if (size <= ACCEPTABLE_SIZE) { System.out.println(\"This MqttMessage has an acceptable size.\"); return true; } return false; } }", "package com.example; import org.apache.activemq.artemis.core.protocol.stomp.StompFrameInterceptor; import org.apache.activemq.artemis.core.protocol.stomp.StompFrame; import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection; import org.apache.activemq.artemis.api.core.ActiveMQException; public class MyInterceptor implements StompFrameInterceptor { private final int ACCEPTABLE_SIZE = 1024; @Override public boolean intercept(StompFrame stompFrame, RemotingConnection connection) throws ActiveMQException { int size = stompFrame.getEncodedSize(); if (size <= ACCEPTABLE_SIZE) { System.out.println(\"This StompFrame has an acceptable size.\"); return true; } return false; } }", "<configuration> <core> <remoting-incoming-interceptors> <class-name>org.example.MyIncomingInterceptor</class-name> </remoting-incoming-interceptors> </core> </configuration>", "<configuration> <core> <remoting-outgoing-interceptors> <class-name>org.example.MyOutgoingInterceptor</class-name> </remoting-outgoing-interceptors> </core> </configuration>" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/configuring_amq_broker/interceptors
Appendix A. Using your subscription
Appendix A. Using your subscription AMQ is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal. A.1. Accessing your account Procedure Go to access.redhat.com . If you do not already have an account, create one. Log in to your account. A.2. Activating a subscription Procedure Go to access.redhat.com . Navigate to My Subscriptions . Navigate to Activate a subscription and enter your 16-digit activation number. A.3. Downloading release files To access .zip, .tar.gz, and other release files, use the customer portal to find the relevant files for download. If you are using RPM packages or the Red Hat Maven repository, this step is not required. Procedure Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads . Locate the Red Hat AMQ entries in the INTEGRATION AND AUTOMATION category. Select the desired AMQ product. The Software Downloads page opens. Click the Download link for your component. A.4. Registering your system for packages To install RPM packages for this product on Red Hat Enterprise Linux, your system must be registered. If you are using downloaded release files, this step is not required. Procedure Go to access.redhat.com . Navigate to Registration Assistant . Select your OS version and continue to the next page. Use the listed command in your system terminal to complete the registration. For more information about registering your system, see one of the following resources: Red Hat Enterprise Linux 7 - Registering the system and managing subscriptions Red Hat Enterprise Linux 8 - Registering the system and managing subscriptions
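The command produced by the Registration Assistant typically resembles the following minimal sketch; the username is a placeholder, not a value from this guide:
# Register the system and automatically attach a matching subscription.
sudo subscription-manager register --username <portal_username> --auto-attach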
null
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_javascript_client/using_your_subscription
Chapter 6. Registering Hosts to Satellite
Chapter 6. Registering Hosts to Satellite After you install Satellite Server and Capsule Server, you must register the hosts running on EC2 instances to Satellite. For more information, see Registering Hosts in Managing Hosts .
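As an illustrative sketch only (the Satellite hostname, organization, and activation key names are assumptions; the authoritative procedure is in Managing Hosts), registering an EC2 host with an activation key typically follows this pattern:
# Install the Satellite CA consumer package on the host (hostname is illustrative).
rpm -Uvh http://satellite.example.com/pub/katello-ca-consumer-latest.noarch.rpm
# Register the host using an organization and activation key (illustrative values).
subscription-manager register --org="Default_Organization" --activationkey="ec2-hosts-key"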
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/deploying_red_hat_satellite_on_amazon_web_services/aws-registering-hosts
Chapter 3. Usage
Chapter 3. Usage This chapter describes the necessary steps for rebuilding and using Red Hat Software Collections 3.2, and deploying applications that use Red Hat Software Collections. 3.1. Using Red Hat Software Collections 3.1.1. Running an Executable from a Software Collection To run an executable from a particular Software Collection, type the following command at a shell prompt: scl enable software_collection... 'command...' Or, alternatively, use the following command: scl enable software_collection... -- command... Replace software_collection with a space-separated list of Software Collections you want to use and command with the command you want to run. For example, to execute a Perl program stored in a file named hello.pl with the Perl interpreter from the rh-perl526 Software Collection, type: You can execute any command using the scl utility, causing it to be run with the executables from a selected Software Collection in preference to their possible Red Hat Enterprise Linux system equivalents. For a complete list of Software Collections that are distributed with Red Hat Software Collections, see Table 1.1, "Red Hat Software Collections 3.2 Components" . 3.1.2. Running a Shell Session with a Software Collection as Default To start a new shell session with executables from a selected Software Collection in preference to their Red Hat Enterprise Linux equivalents, type the following at a shell prompt: scl enable software_collection... bash Replace software_collection with a space-separated list of Software Collections you want to use. For example, to start a new shell session with the python27 and rh-postgresql10 Software Collections as default, type: The list of Software Collections that are enabled in the current session is stored in the $X_SCLS environment variable, for instance: For a complete list of Software Collections that are distributed with Red Hat Software Collections, see Table 1.1, "Red Hat Software Collections 3.2 Components" . 3.1.3. Running a System Service from a Software Collection Running a System Service from a Software Collection in Red Hat Enterprise Linux 6 Software Collections that include system services install corresponding init scripts in the /etc/rc.d/init.d/ directory. To start such a service in the current session, type the following at a shell prompt as root : service software_collection-service_name start Replace software_collection with the name of the Software Collection and service_name with the name of the service you want to start. To configure this service to start automatically at boot time, type the following command as root : chkconfig software_collection-service_name on For example, to start the postgresql service from the rh-postgresql96 Software Collection and enable it in runlevels 2, 3, 4, and 5, type as root : For more information on how to manage system services in Red Hat Enterprise Linux 6, refer to the Red Hat Enterprise Linux 6 Deployment Guide . For a complete list of Software Collections that are distributed with Red Hat Software Collections, see Table 1.1, "Red Hat Software Collections 3.2 Components" . Running a System Service from a Software Collection in Red Hat Enterprise Linux 7 In Red Hat Enterprise Linux 7, init scripts have been replaced by systemd service unit files, which end with the .service file extension and serve a similar purpose as init scripts.
To start a service in the current session, execute the following command as root : systemctl start software_collection-service_name.service Replace software_collection with the name of the Software Collection and service_name with the name of the service you want to start. To configure this service to start automatically at boot time, type the following command as root : systemctl enable software_collection-service_name.service For example, to start the postgresql service from the rh-postgresql10 Software Collection and enable it at boot time, type as root : For more information on how to manage system services in Red Hat Enterprise Linux 7, refer to the Red Hat Enterprise Linux 7 System Administrator's Guide . For a complete list of Software Collections that are distributed with Red Hat Software Collections, see Table 1.1, "Red Hat Software Collections 3.2 Components" . 3.2. Accessing a Manual Page from a Software Collection Every Software Collection contains a general manual page that describes the content of this component. Each manual page has the same name as the component and it is located in the /opt/rh directory. To read a manual page for a Software Collection, type the following command: scl enable software_collection 'man software_collection' Replace software_collection with the particular Red Hat Software Collections component. For example, to display the manual page for rh-mariadb102 , type: 3.3. Deploying Applications That Use Red Hat Software Collections In general, you can use one of the following two approaches to deploy an application that depends on a component from Red Hat Software Collections in production: Install all required Software Collections and packages manually and then deploy your application, or Create a new Software Collection for your application and specify all required Software Collections and other packages as dependencies. For more information on how to manually install individual Red Hat Software Collections components, see Section 2.2, "Installing Red Hat Software Collections" . For further details on how to use Red Hat Software Collections, see Section 3.1, "Using Red Hat Software Collections" . For a detailed explanation of how to create a custom Software Collection or extend an existing one, read the Red Hat Software Collections Packaging Guide . 3.4. Red Hat Software Collections Container Images Container images based on Red Hat Software Collections include applications, daemons, and databases. The images can be run on Red Hat Enterprise Linux 7 Server and Red Hat Enterprise Linux Atomic Host. For information about their usage, see Using Red Hat Software Collections 3 Container Images . For details regarding container images based on Red Hat Software Collections versions 2.4 and earlier, see Using Red Hat Software Collections 2 Container Images .
The following container images are available with Red Hat Software Collections 3.2: rhscl/devtoolset-8-toolchain-rhel7 rhscl/devtoolset-8-perftools-rhel7 rhscl/httpd-24-rhel7 rhscl/mysql-80-rhel7 rhscl/nginx-114-rhel7 rhscl/php-72-rhel7 rhscl/varnish-6-rhel7 The following container images are based on Red Hat Software Collections 3.1: rhscl/devtoolset-7-toolchain-rhel7 rhscl/devtoolset-7-perftools-rhel7 rhscl/mongodb-36-rhel7 rhscl/perl-526-rhel7 rhscl/php-70-rhel7 rhscl/postgresql-10-rhel7 rhscl/ruby-25-rhel7 rhscl/varnish-5-rhel7 The following container images are based on Red Hat Software Collections 3.0: rhscl/mariadb-102-rhel7 rhscl/mongodb-34-rhel7 rhscl/nginx-112-rhel7 rhscl/nodejs-8-rhel7 rhscl/php-71-rhel7 rhscl/postgresql-96-rhel7 rhscl/python-36-rhel7 The following container images are based on Red Hat Software Collections 2.4: rhscl/devtoolset-6-toolchain-rhel7 (EOL) rhscl/devtoolset-6-perftools-rhel7 (EOL) rhscl/nginx-110-rhel7 rhscl/nodejs-6-rhel7 rhscl/python-27-rhel7 rhscl/ruby-24-rhel7 rhscl/ror-50-rhel7 rhscl/thermostat-16-agent-rhel7 (EOL) rhscl/thermostat-16-storage-rhel7 (EOL) The following container images are based on Red Hat Software Collections 2.3: rhscl/mysql-57-rhel7 rhscl/perl-524-rhel7 rhscl/redis-32-rhel7 rhscl/mongodb-32-rhel7 rhscl/php-56-rhel7 (EOL) rhscl/python-35-rhel7 rhscl/ruby-23-rhel7 The following container images are based on Red Hat Software Collections 2.2: rhscl/devtoolset-4-toolchain-rhel7 (EOL) rhscl/devtoolset-4-perftools-rhel7 (EOL) rhscl/mariadb-101-rhel7 rhscl/nginx-18-rhel7 (EOL) rhscl/nodejs-4-rhel7 (EOL) rhscl/postgresql-95-rhel7 rhscl/ror-42-rhel7 rhscl/thermostat-1-agent-rhel7 (EOL) rhscl/varnish-4-rhel7 (EOL) The following container images are based on Red Hat Software Collections 2.0: rhscl/mariadb-100-rhel7 (EOL) rhscl/mongodb-26-rhel7 (EOL) rhscl/mysql-56-rhel7 (EOL) rhscl/nginx-16-rhel7 (EOL) rhscl/passenger-40-rhel7 (EOL) rhscl/perl-520-rhel7 (EOL) rhscl/postgresql-94-rhel7 (EOL) rhscl/python-34-rhel7 (EOL) rhscl/ror-41-rhel7 (EOL) rhscl/ruby-22-rhel7 (EOL) rhscl/s2i-base-rhel7 Images marked as End of Life (EOL) are no longer supported.
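For example, to try one of the supported images listed above locally, you can pull and run it with docker. This is a sketch only: the registry path follows the usual RHSCL image convention on registry.access.redhat.com, and the assumption that the image listens on port 8080 should be verified against the image documentation for your environment:
# Pull the Apache HTTP Server 2.4 image from the RHSCL namespace (path assumed).
docker pull registry.access.redhat.com/rhscl/httpd-24-rhel7
# Run it detached, mapping the container's assumed HTTP port to the host.
docker run -d -p 8080:8080 registry.access.redhat.com/rhscl/httpd-24-rhel7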
[ "~]USD scl enable rh-perl526 'perl hello.pl' Hello, World!", "~]USD scl enable python27 rh-postgresql10 bash", "~]USD echo USDX_SCLS python27 rh-postgresql10", "~]# service rh-postgresql96-postgresql start Starting rh-postgresql96-postgresql service: [ OK ] ~]# chkconfig rh-postgresql96-postgresql on", "~]# systemctl start rh-postgresql10-postgresql.service ~]# systemctl enable rh-postgresql10-postgresql.service", "~]USD scl enable rh-mariadb102 \"man rh-mariadb102\"" ]
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.2_release_notes/chap-Usage
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/deploying_a_high_availability_automation_hub/making-open-source-more-inclusive
Installing Red Hat Developer Hub on OpenShift Container Platform
Installing Red Hat Developer Hub on OpenShift Container Platform Red Hat Developer Hub 1.2 Red Hat Customer Content Services
[ "global: auth: backend: enabled: true clusterRouterBase: apps.<clusterName>.com # other Red Hat Developer Hub Helm Chart configurations", "Loaded config from app-config-from-configmap.yaml, env 2023-07-24T19:44:46.223Z auth info Configuring \"database\" as KeyStore provider type=plugin Backend failed to start up Error: Missing required config value at 'backend.database.client'", "NAMESPACE=<emphasis><rhdh></emphasis> new-project USD{NAMESPACE} || oc project USD{NAMESPACE}", "helm upgrade redhat-developer-hub -i https://github.com/openshift-helm-charts/charts/releases/download/redhat-redhat-developer-hub-1.2.6/redhat-developer-hub-1.2.6.tgz", "PASSWORD=USD(oc get secret redhat-developer-hub-postgresql -o jsonpath=\"{.data.password}\" | base64 -d) CLUSTER_ROUTER_BASE=USD(oc get route console -n openshift-console -o=jsonpath='{.spec.host}' | sed 's/^[^.]*\\.//') helm upgrade redhat-developer-hub -i \"https://github.com/openshift-helm-charts/charts/releases/download/redhat-redhat-developer-hub-1.2.6/redhat-developer-hub-1.2.6.tgz\" --set global.clusterRouterBase=\"USDCLUSTER_ROUTER_BASE\" --set global.postgresql.auth.password=\"USDPASSWORD\"", "echo \"https://redhat-developer-hub-USDNAMESPACE.USDCLUSTER_ROUTER_BASE\"" ]
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.2/html-single/installing_red_hat_developer_hub_on_openshift_container_platform/index
Chapter 33. Unregistering from Red Hat Subscription Management Services
Chapter 33. Unregistering from Red Hat Subscription Management Services A system can only be registered with one subscription service. If you need to change which service your system is registered with or need to delete the registration in general, then the method to unregister depends on which type of subscription service the system was originally registered with. 33.1. Systems Registered with Red Hat Subscription Management Several different subscription services use the same, certificate-based framework to identify systems, installed products, and attached subscriptions. These services are Customer Portal Subscription Management (hosted), Subscription Asset Manager (on-premise subscription service), and CloudForms System Engine (on-premise subscription and content delivery services). These are all part of Red Hat Subscription Management . For all services within Red Hat Subscription Management, the systems are managed with the Red Hat Subscription Manager client tools. To unregister a system registered with a Red Hat Subscription Management server, use the unregister command as root without any additional parameters: For additional information, see Using and Configuring Red Hat Subscription Manager .
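To confirm that the system is no longer registered, you can, for example, query its identity; on an unregistered system, the command reports that the system is not registered:
subscription-manager identity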
[ "subscription-manager unregister" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/chap-subscription-management-unregistering
5.4. Frontend Settings
5.4. Frontend Settings The frontend settings configure the servers' listening sockets for client connection requests. A typical HAProxy configuration of the frontend may look like the following: The frontend called main is bound to the 192.168.0.10 IP address and listens on port 80 using the bind parameter. Once connected, the default_backend parameter specifies that all sessions connect to the app back end.
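Beyond a single default back end, a frontend can also route sessions conditionally with ACLs. The following sketch extends the example above; the static back end and the /static path prefix are illustrative additions, not part of the original configuration:
frontend main
    bind 192.168.0.10:80
    # Route requests whose path begins with /static to a dedicated back end.
    acl url_static path_beg /static
    use_backend static if url_static
    default_backend app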
[ "frontend main bind 192.168.0.10:80 default_backend app" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/load_balancer_administration/s1-haproxy-setup-frontend
Appendix A. Revision History
Appendix A. Revision History Revision History Revision 3.5-0 Wed Oct 30 2019 Red Hat Gluster Storage Documentation Team Created and populated release notes for Red Hat Gluster Storage 3.5
null
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/3.5_release_notes/appe-documentation-3.5_release_notes-revision_history
10.5.49. ReadmeName
10.5.49. ReadmeName ReadmeName names the file which, if it exists in the directory, is appended to the end of server-generated directory listings. The Web server first tries to include the file as an HTML document and then tries to include it as plain text. By default, ReadmeName is set to README.html .
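For example, to append a different file to listings for one directory only, the directive can be set inside a Directory container in httpd.conf; the directory path and file name below are illustrative:
<Directory "/var/www/html/downloads">
    ReadmeName FOOTER.html
</Directory>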
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-apache-readmename
Chapter 14. Performing latency tests for platform verification
Chapter 14. Performing latency tests for platform verification You can use the Cloud-native Network Functions (CNF) tests image to run latency tests on a CNF-enabled OpenShift Container Platform cluster, where all the components required for running CNF workloads are installed. Run the latency tests to validate node tuning for your workload. The cnf-tests container image is available at registry.redhat.io/openshift4/cnf-tests-rhel8:v4.13 . Important The cnf-tests image also includes several tests that are not supported by Red Hat at this time. Only the latency tests are supported by Red Hat. 14.1. Prerequisites for running latency tests Your cluster must meet the following requirements before you can run the latency tests: You have configured a performance profile with the Node Tuning Operator. You have applied all the required CNF configurations in the cluster. You have a pre-existing MachineConfigPool CR applied in the cluster. The default worker pool is worker-cnf . Additional resources For more information about creating the cluster performance profile, see Provisioning a worker with real-time capabilities . 14.2. About discovery mode for latency tests Use discovery mode to validate the functionality of a cluster without altering its configuration. Existing environment configurations are used for the tests. The tests can find the configuration items needed and use those items to execute the tests. If resources needed to run a specific test are not found, the test is skipped, providing an appropriate message to the user. After the tests are finished, no cleanup of the preconfigured configuration items is done, and the test environment can be immediately used for another test run. Important When running the latency tests, always run the tests with -e DISCOVERY_MODE=true and -ginkgo.focus set to the appropriate latency test. If you do not run the latency tests in discovery mode, your existing live cluster performance profile configuration will be modified by the test run. Limiting the nodes used during tests The nodes on which the tests are executed can be limited by specifying a NODES_SELECTOR environment variable, for example, -e NODES_SELECTOR=node-role.kubernetes.io/worker-cnf . Any resources created by the test are limited to nodes with matching labels. Note If you want to override the default worker pool, pass the -e ROLE_WORKER_CNF=<custom_worker_pool> variable to the command specifying an appropriate label. 14.3. Measuring latency The cnf-tests image uses three tools to measure the latency of the system: hwlatdetect cyclictest oslat Each tool has a specific use. Use the tools in sequence to achieve reliable test results. hwlatdetect Measures the baseline that the bare-metal hardware can achieve. Before proceeding with the latency test, ensure that the latency reported by hwlatdetect meets the required threshold because you cannot fix hardware latency spikes by operating system tuning. cyclictest Verifies the real-time kernel scheduler latency after hwlatdetect passes validation. The cyclictest tool schedules a repeated timer and measures the difference between the desired and the actual trigger times. The difference can uncover basic issues with the tuning caused by interrupts or process priorities. The tool must run on a real-time kernel. oslat Behaves similarly to a CPU-intensive DPDK application and measures all the interruptions and disruptions to the busy loop that simulates CPU heavy data processing. The tests introduce the following environment variables: Table 14.1. 
Latency test environment variables Environment variables Description LATENCY_TEST_DELAY Specifies the amount of time in seconds after which the test starts running. You can use the variable to allow the CPU manager reconcile loop to update the default CPU pool. The default value is 0. LATENCY_TEST_CPUS Specifies the number of CPUs that the pod running the latency tests uses. If you do not set the variable, the default configuration includes all isolated CPUs. LATENCY_TEST_RUNTIME Specifies the amount of time in seconds that the latency test must run. The default value is 300 seconds. HWLATDETECT_MAXIMUM_LATENCY Specifies the maximum acceptable hardware latency in microseconds for the workload and operating system. If you do not set the value of HWLATDETECT_MAXIMUM_LATENCY or MAXIMUM_LATENCY , the tool compares the default expected threshold (20 μs) and the actual maximum latency in the tool itself. Then, the test fails or succeeds accordingly. CYCLICTEST_MAXIMUM_LATENCY Specifies the maximum latency in microseconds that all threads expect before waking up during the cyclictest run. If you do not set the value of CYCLICTEST_MAXIMUM_LATENCY or MAXIMUM_LATENCY , the tool skips the comparison of the expected and the actual maximum latency. OSLAT_MAXIMUM_LATENCY Specifies the maximum acceptable latency in microseconds for the oslat test results. If you do not set the value of OSLAT_MAXIMUM_LATENCY or MAXIMUM_LATENCY , the tool skips the comparison of the expected and the actual maximum latency. MAXIMUM_LATENCY Unified variable that specifies the maximum acceptable latency in microseconds. Applicable for all available latency tools. LATENCY_TEST_RUN Boolean parameter that indicates whether the tests should run. LATENCY_TEST_RUN is set to false by default. To run the latency tests, set this value to true . Note Variables that are specific to a latency tool take precedence over unified variables. For example, if OSLAT_MAXIMUM_LATENCY is set to 30 microseconds and MAXIMUM_LATENCY is set to 10 microseconds, the oslat test will run with maximum acceptable latency of 30 microseconds. 14.4. Running the latency tests Run the cluster latency tests to validate node tuning for your Cloud-native Network Functions (CNF) workload. Important Always run the latency tests with DISCOVERY_MODE=true set. If you don't, the test suite will make changes to the running cluster configuration. Note When executing podman commands as a non-root or non-privileged user, mounting paths can fail with permission denied errors. To make the podman command work, append :Z to the volumes creation; for example, -v $(pwd)/:/kubeconfig:Z . This allows podman to do the proper SELinux relabeling. Procedure Open a shell prompt in the directory containing the kubeconfig file. You provide the test image with a kubeconfig file in the current directory and its related $KUBECONFIG environment variable, mounted through a volume. This allows the running container to use the kubeconfig file from inside the container. Run the latency tests by entering the following command: $ podman run -v $(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \ -e LATENCY_TEST_RUN=true -e DISCOVERY_MODE=true -e FEATURES=performance registry.redhat.io/openshift4/cnf-tests-rhel8:v4.13 \ /usr/bin/test-run.sh -ginkgo.focus="\[performance\]\ Latency\ Test" Optional: Append -ginkgo.dryRun to run the latency tests in dry-run mode. This is useful for checking what the tests run. Optional: Append -ginkgo.v to run the tests with increased verbosity.
Optional: To run the latency tests against a specific performance profile, run the following command, substituting appropriate values: $ podman run -v $(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \ -e LATENCY_TEST_RUN=true -e FEATURES=performance -e LATENCY_TEST_RUNTIME=600 -e MAXIMUM_LATENCY=20 \ -e PERF_TEST_PROFILE=<performance_profile> registry.redhat.io/openshift4/cnf-tests-rhel8:v4.13 \ /usr/bin/test-run.sh -ginkgo.focus="[performance]\ Latency\ Test" where: <performance_profile> Is the name of the performance profile you want to run the latency tests against. Important For valid latency test results, run the tests for at least 12 hours. 14.4.1. Running hwlatdetect The hwlatdetect tool is available in the rt-kernel package with a regular subscription of Red Hat Enterprise Linux (RHEL) 9.x. Important Always run the latency tests with DISCOVERY_MODE=true set. If you don't, the test suite will make changes to the running cluster configuration. Note When executing podman commands as a non-root or non-privileged user, mounting paths can fail with permission denied errors. To make the podman command work, append :Z to the volumes creation; for example, -v $(pwd)/:/kubeconfig:Z . This allows podman to do the proper SELinux relabeling. Prerequisites You have installed the real-time kernel in the cluster. You have logged in to registry.redhat.io with your Customer Portal credentials. Procedure To run the hwlatdetect tests, run the following command, substituting variable values as appropriate: $ podman run -v $(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \ -e LATENCY_TEST_RUN=true -e DISCOVERY_MODE=true -e FEATURES=performance -e ROLE_WORKER_CNF=worker-cnf \ -e LATENCY_TEST_RUNTIME=600 -e MAXIMUM_LATENCY=20 \ registry.redhat.io/openshift4/cnf-tests-rhel8:v4.13 \ /usr/bin/test-run.sh -ginkgo.v -ginkgo.focus="hwlatdetect" The hwlatdetect test runs for 10 minutes (600 seconds). The test runs successfully when the maximum observed latency is lower than MAXIMUM_LATENCY (20 μs). If the results exceed the latency threshold, the test fails. Important For valid results, the test should run for at least 12 hours. Example failure output running /usr/bin/cnftests -ginkgo.v -ginkgo.focus=hwlatdetect I0908 15:25:20.023712 27 request.go:601] Waited for 1.046586367s due to client-side throttling, not priority and fairness, request: GET:https://api.hlxcl6.lab.eng.tlv2.redhat.com:6443/apis/imageregistry.operator.openshift.io/v1?timeout=32s Running Suite: CNF Features e2e integration tests ================================================= Random Seed: 1662650718 Will run 1 of 194 specs [...]
• Failure [283.574 seconds] [performance] Latency Test /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:62 with the hwlatdetect image /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:228 should succeed [It] /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:236 Log file created at: 2022/09/08 15:25:27 Running on machine: hwlatdetect-b6n4n Binary: Built with gc go1.17.12 for linux/amd64 Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg I0908 15:25:27.160620 1 node.go:39] Environment information: /proc/cmdline: BOOT_IMAGE=(hd1,gpt3)/ostree/rhcos-c6491e1eedf6c1f12ef7b95e14ee720bf48359750ac900b7863c625769ef5fb9/vmlinuz-4.18.0-372.19.1.el8_6.x86_64 random.trust_cpu=on console=tty0 console=ttyS0,115200n8 ignition.platform.id=metal ostree=/ostree/boot.1/rhcos/c6491e1eedf6c1f12ef7b95e14ee720bf48359750ac900b7863c625769ef5fb9/0 ip=dhcp root=UUID=5f80c283-f6e6-4a27-9b47-a287157483b2 rw rootflags=prjquota boot=UUID=773bf59a-bafd-48fc-9a87-f62252d739d3 skew_tick=1 nohz=on rcu_nocbs=0-3 tuned.non_isolcpus=0000ffff,ffffffff,fffffff0 systemd.cpu_affinity=4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79 intel_iommu=on iommu=pt isolcpus=managed_irq,0-3 nohz_full=0-3 tsc=nowatchdog nosoftlockup nmi_watchdog=0 mce=off skew_tick=1 rcutree.kthread_prio=11 + + I0908 15:25:27.160830 1 node.go:46] Environment information: kernel version 4.18.0-372.19.1.el8_6.x86_64 I0908 15:25:27.160857 1 main.go:50] running the hwlatdetect command with arguments [/usr/bin/hwlatdetect --threshold 1 --hardlimit 1 --duration 100 --window 10000000us --width 950000us] F0908 15:27:10.603523 1 main.go:53] failed to run hwlatdetect command; out: hwlatdetect: test duration 100 seconds detector: tracer parameters: Latency threshold: 1us 1 Sample window: 10000000us Sample width: 950000us Non-sampling period: 9050000us Output File: None Starting test test finished Max Latency: 326us 2 Samples recorded: 5 Samples exceeding threshold: 5 ts: 1662650739.017274507, inner:6, outer:6 ts: 1662650749.257272414, inner:14, outer:326 ts: 1662650779.977272835, inner:314, outer:12 ts: 1662650800.457272384, inner:3, outer:9 ts: 1662650810.697273520, inner:3, outer:2 [...] JUnit report was created: /junit.xml/cnftests-junit.xml Summarizing 1 Failure: [Fail] [performance] Latency Test with the hwlatdetect image [It] should succeed /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:476 Ran 1 of 194 Specs in 365.797 seconds FAIL! -- 0 Passed | 1 Failed | 0 Pending | 193 Skipped --- FAIL: TestTest (366.08s) FAIL 1 You can configure the latency threshold by using the MAXIMUM_LATENCY or the HWLATDETECT_MAXIMUM_LATENCY environment variables. 2 The maximum latency value measured during the test. Example hwlatdetect test results You can capture the following types of results: Rough results that are gathered after each run to create a history of impact on any changes made throughout the test. The combined set of the rough tests with the best results and configuration settings. 
Example of good results hwlatdetect: test duration 3600 seconds detector: tracer parameters: Latency threshold: 10us Sample window: 1000000us Sample width: 950000us Non-sampling period: 50000us Output File: None Starting test test finished Max Latency: Below threshold Samples recorded: 0 The hwlatdetect tool only provides output if the sample exceeds the specified threshold. Example of bad results hwlatdetect: test duration 3600 seconds detector: tracer parameters: Latency threshold: 10us Sample window: 1000000us Sample width: 950000us Non-sampling period: 50000us Output File: None Starting test ts: 1610542421.275784439, inner:78, outer:81 ts: 1610542444.330561619, inner:27, outer:28 ts: 1610542445.332549975, inner:39, outer:38 ts: 1610542541.568546097, inner:47, outer:32 ts: 1610542590.681548531, inner:13, outer:17 ts: 1610543033.818801482, inner:29, outer:30 ts: 1610543080.938801990, inner:90, outer:76 ts: 1610543129.065549639, inner:28, outer:39 ts: 1610543474.859552115, inner:28, outer:35 ts: 1610543523.973856571, inner:52, outer:49 ts: 1610543572.089799738, inner:27, outer:30 ts: 1610543573.091550771, inner:34, outer:28 ts: 1610543574.093555202, inner:116, outer:63 The output of hwlatdetect shows that multiple samples exceed the threshold. However, the same output can indicate different results based on the following factors: The duration of the test The number of CPU cores The host firmware settings Warning Before proceeding with the latency test, ensure that the latency reported by hwlatdetect meets the required threshold. Fixing latencies introduced by hardware might require you to contact the system vendor support. Not all latency spikes are hardware related. Ensure that you tune the host firmware to meet your workload requirements. For more information, see Setting firmware parameters for system tuning . 14.4.2. Running cyclictest The cyclictest tool measures the real-time kernel scheduler latency on the specified CPUs. Important Always run the latency tests with DISCOVERY_MODE=true set. If you don't, the test suite will make changes to the running cluster configuration. Note When executing podman commands as a non-root or non-privileged user, mounting paths can fail with permission denied errors. To make the podman command work, append :Z to the volumes creation; for example, -v $(pwd)/:/kubeconfig:Z . This allows podman to do the proper SELinux relabeling. Prerequisites You have logged in to registry.redhat.io with your Customer Portal credentials. You have installed the real-time kernel in the cluster. You have applied a cluster performance profile by using Node Tuning Operator. Procedure To perform the cyclictest , run the following command, substituting variable values as appropriate: $ podman run -v $(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \ -e LATENCY_TEST_RUN=true -e DISCOVERY_MODE=true -e FEATURES=performance -e ROLE_WORKER_CNF=worker-cnf \ -e LATENCY_TEST_CPUS=10 -e LATENCY_TEST_RUNTIME=600 -e MAXIMUM_LATENCY=20 \ registry.redhat.io/openshift4/cnf-tests-rhel8:v4.13 \ /usr/bin/test-run.sh -ginkgo.v -ginkgo.focus="cyclictest" The command runs the cyclictest tool for 10 minutes (600 seconds). The test runs successfully when the maximum observed latency is lower than MAXIMUM_LATENCY (in this example, 20 μs). Latency spikes of 20 μs and above are generally not acceptable for telco RAN workloads. If the results exceed the latency threshold, the test fails. Important For valid results, the test should run for at least 12 hours.
Example failure output running /usr/bin/cnftests -ginkgo.v -ginkgo.focus=cyclictest I0908 13:01:59.193776 27 request.go:601] Waited for 1.046228824s due to client-side throttling, not priority and fairness, request: GET:https://api.compute-1.example.com:6443/apis/packages.operators.coreos.com/v1?timeout=32s Running Suite: CNF Features e2e integration tests ================================================= Random Seed: 1662642118 Will run 1 of 194 specs [...] Summarizing 1 Failure: [Fail] [performance] Latency Test with the cyclictest image [It] should succeed /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:220 Ran 1 of 194 Specs in 161.151 seconds FAIL! -- 0 Passed | 1 Failed | 0 Pending | 193 Skipped --- FAIL: TestTest (161.48s) FAIL Example cyclictest results The same output can indicate different results for different workloads. For example, spikes up to 18 μs are acceptable for 4G DU workloads, but not for 5G DU workloads. Example of good results running cmd: cyclictest -q -D 10m -p 1 -t 16 -a 2,4,6,8,10,12,14,16,54,56,58,60,62,64,66,68 -h 30 -i 1000 -m # Histogram 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000001 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000002 579506 535967 418614 573648 532870 529897 489306 558076 582350 585188 583793 223781 532480 569130 472250 576043 More histogram entries ... # Total: 000600000 000600000 000600000 000599999 000599999 000599999 000599998 000599998 000599998 000599997 000599997 000599996 000599996 000599995 000599995 000599995 # Min Latencies: 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 # Avg Latencies: 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 # Max Latencies: 00005 00005 00004 00005 00004 00004 00005 00005 00006 00005 00004 00005 00004 00004 00005 00004 # Histogram Overflows: 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 # Histogram Overflow at cycle number: # Thread 0: # Thread 1: # Thread 2: # Thread 3: # Thread 4: # Thread 5: # Thread 6: # Thread 7: # Thread 8: # Thread 9: # Thread 10: # Thread 11: # Thread 12: # Thread 13: # Thread 14: # Thread 15: Example of bad results running cmd: cyclictest -q -D 10m -p 1 -t 16 -a 2,4,6,8,10,12,14,16,54,56,58,60,62,64,66,68 -h 30 -i 1000 -m # Histogram 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000001 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000002 564632 579686 354911 563036 492543 521983 515884 378266 592621 463547 482764 591976 590409 588145 589556 353518 More histogram entries ...
# Total: 000599999 000599999 000599999 000599997 000599997 000599998 000599998 000599997 000599997 000599996 000599995 000599996 000599995 000599995 000599995 000599993 # Min Latencies: 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 # Avg Latencies: 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 # Max Latencies: 00493 00387 00271 00619 00541 00513 00009 00389 00252 00215 00539 00498 00363 00204 00068 00520 # Histogram Overflows: 00001 00001 00001 00002 00002 00001 00000 00001 00001 00001 00002 00001 00001 00001 00001 00002 # Histogram Overflow at cycle number: # Thread 0: 155922 # Thread 1: 110064 # Thread 2: 110064 # Thread 3: 110063 155921 # Thread 4: 110063 155921 # Thread 5: 155920 # Thread 6: # Thread 7: 110062 # Thread 8: 110062 # Thread 9: 155919 # Thread 10: 110061 155919 # Thread 11: 155918 # Thread 12: 155918 # Thread 13: 110060 # Thread 14: 110060 # Thread 15: 110059 155917 14.4.3. Running oslat The oslat test simulates a CPU-intensive DPDK application and measures all the interruptions and disruptions to test how the cluster handles CPU heavy data processing. Important Always run the latency tests with DISCOVERY_MODE=true set. If you don't, the test suite will make changes to the running cluster configuration. Note When executing podman commands as a non-root or non-privileged user, mounting paths can fail with permission denied errors. To make the podman command work, append :Z to the volumes creation; for example, -v $(pwd)/:/kubeconfig:Z . This allows podman to do the proper SELinux relabeling. Prerequisites You have logged in to registry.redhat.io with your Customer Portal credentials. You have applied a cluster performance profile by using the Node Tuning Operator. Procedure To perform the oslat test, run the following command, substituting variable values as appropriate: $ podman run -v $(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \ -e LATENCY_TEST_RUN=true -e DISCOVERY_MODE=true -e FEATURES=performance -e ROLE_WORKER_CNF=worker-cnf \ -e LATENCY_TEST_CPUS=10 -e LATENCY_TEST_RUNTIME=600 -e MAXIMUM_LATENCY=20 \ registry.redhat.io/openshift4/cnf-tests-rhel8:v4.13 \ /usr/bin/test-run.sh -ginkgo.v -ginkgo.focus="oslat" LATENCY_TEST_CPUS specifies the list of CPUs to test with the oslat command. The command runs the oslat tool for 10 minutes (600 seconds). The test runs successfully when the maximum observed latency is lower than MAXIMUM_LATENCY (20 μs). If the results exceed the latency threshold, the test fails. Important For valid results, the test should run for at least 12 hours. Example failure output running /usr/bin/cnftests -ginkgo.v -ginkgo.focus=oslat I0908 12:51:55.999393 27 request.go:601] Waited for 1.044848101s due to client-side throttling, not priority and fairness, request: GET:https://compute-1.example.com:6443/apis/machineconfiguration.openshift.io/v1?timeout=32s Running Suite: CNF Features e2e integration tests ================================================= Random Seed: 1662641514 Will run 1 of 194 specs [...]
• Failure [77.833 seconds] [performance] Latency Test /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:62 with the oslat image /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:128 should succeed [It] /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:153 The current latency 304 is bigger than the expected one 1 : 1 [...] Summarizing 1 Failure: [Fail] [performance] Latency Test with the oslat image [It] should succeed /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:177 Ran 1 of 194 Specs in 161.091 seconds FAIL! -- 0 Passed | 1 Failed | 0 Pending | 193 Skipped --- FAIL: TestTest (161.42s) FAIL 1 In this example, the measured latency is outside the maximum allowed value. 14.5. Generating a latency test failure report Use the following procedures to generate a JUnit latency test output and test failure report. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in as a user with cluster-admin privileges. Procedure Create a test failure report with information about the cluster state and resources for troubleshooting by passing the --report parameter with the path to where the report is dumped: $ podman run -v $(pwd)/:/kubeconfig:Z -v $(pwd)/reportdest:<report_folder_path> \ -e KUBECONFIG=/kubeconfig/kubeconfig -e DISCOVERY_MODE=true -e FEATURES=performance \ registry.redhat.io/openshift4/cnf-tests-rhel8:v4.13 \ /usr/bin/test-run.sh --report <report_folder_path> \ -ginkgo.focus="\[performance\]\ Latency\ Test" where: <report_folder_path> Is the path to the folder where the report is generated. 14.6. Generating a JUnit latency test report Use the following procedures to generate a JUnit latency test output and test failure report. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in as a user with cluster-admin privileges. Procedure Create a JUnit-compliant XML report by passing the --junit parameter together with the path to where the report is dumped: $ podman run -v $(pwd)/:/kubeconfig:Z -v $(pwd)/junitdest:<junit_folder_path> \ -e KUBECONFIG=/kubeconfig/kubeconfig -e DISCOVERY_MODE=true -e FEATURES=performance \ registry.redhat.io/openshift4/cnf-tests-rhel8:v4.13 \ /usr/bin/test-run.sh --junit <junit_folder_path> \ -ginkgo.focus="\[performance\]\ Latency\ Test" where: <junit_folder_path> Is the path to the folder where the junit report is generated. 14.7. Running latency tests on a single-node OpenShift cluster You can run latency tests on single-node OpenShift clusters. Important Always run the latency tests with DISCOVERY_MODE=true set. If you don't, the test suite will make changes to the running cluster configuration. Note When executing podman commands as a non-root or non-privileged user, mounting paths can fail with permission denied errors. To make the podman command work, append :Z to the volumes creation; for example, -v $(pwd)/:/kubeconfig:Z . This allows podman to do the proper SELinux relabeling. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in as a user with cluster-admin privileges.
Procedure To run the latency tests on a single-node OpenShift cluster, run the following command: $ podman run -v $(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \ -e DISCOVERY_MODE=true -e FEATURES=performance -e ROLE_WORKER_CNF=master \ registry.redhat.io/openshift4/cnf-tests-rhel8:v4.13 \ /usr/bin/test-run.sh -ginkgo.focus="\[performance\]\ Latency\ Test" Note ROLE_WORKER_CNF=master is required because master is the only machine pool to which the node belongs. For more information about setting the required MachineConfigPool for the latency tests, see "Prerequisites for running latency tests". After running the test suite, all the dangling resources are cleaned up. 14.8. Running latency tests in a disconnected cluster The CNF tests image can run tests in a disconnected cluster that is not able to reach external registries. This requires two steps: Mirroring the cnf-tests image to the custom disconnected registry. Instructing the tests to consume the images from the custom disconnected registry. Mirroring the images to a custom registry accessible from the cluster A mirror executable is shipped in the image to provide the input required by oc to mirror the test image to a local registry. Run this command from an intermediate machine that has access to the cluster and registry.redhat.io : $ podman run -v $(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \ registry.redhat.io/openshift4/cnf-tests-rhel8:v4.13 \ /usr/bin/mirror -registry <disconnected_registry> | oc image mirror -f - where: <disconnected_registry> Is the disconnected mirror registry you have configured, for example, my.local.registry:5000/ . When you have mirrored the cnf-tests image into the disconnected registry, you must override the original registry used to fetch the images when running the tests, for example: $ podman run -v $(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \ -e DISCOVERY_MODE=true -e FEATURES=performance -e IMAGE_REGISTRY="<disconnected_registry>" \ -e CNF_TESTS_IMAGE="cnf-tests-rhel8:v4.13" \ <disconnected_registry>/cnf-tests-rhel8:v4.13 \ /usr/bin/test-run.sh -ginkgo.focus="\[performance\]\ Latency\ Test" Configuring the tests to consume images from a custom registry You can run the latency tests using a custom test image and image registry using CNF_TESTS_IMAGE and IMAGE_REGISTRY variables. To configure the latency tests to use a custom test image and image registry, run the following command: $ podman run -v $(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \ -e IMAGE_REGISTRY="<custom_image_registry>" \ -e CNF_TESTS_IMAGE="<custom_cnf-tests_image>" \ -e FEATURES=performance \ registry.redhat.io/openshift4/cnf-tests-rhel8:v4.13 /usr/bin/test-run.sh where: <custom_image_registry> is the custom image registry, for example, custom.registry:5000/ . <custom_cnf-tests_image> is the custom cnf-tests image, for example, custom-cnf-tests-image:latest . Mirroring images to the cluster OpenShift image registry OpenShift Container Platform provides a built-in container image registry, which runs as a standard workload on the cluster.
Procedure Gain external access to the registry by exposing it with a route: $ oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{"spec":{"defaultRoute":true}}' --type=merge Fetch the registry endpoint by running the following command: $ REGISTRY=$(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}') Create a namespace for exposing the images: $ oc create ns cnftests Make the image stream available to all the namespaces used for tests. This is required to allow the tests namespaces to fetch the images from the cnf-tests image stream. Run the following commands: $ oc policy add-role-to-user system:image-puller system:serviceaccount:cnf-features-testing:default --namespace=cnftests $ oc policy add-role-to-user system:image-puller system:serviceaccount:performance-addon-operators-testing:default --namespace=cnftests Retrieve the docker secret name and auth token by running the following commands: $ SECRET=$(oc -n cnftests get secret | grep builder-docker | awk {'print $1'}) $ TOKEN=$(oc -n cnftests get secret $SECRET -o jsonpath="{.data['\.dockercfg']}" | base64 --decode | jq '.["image-registry.openshift-image-registry.svc:5000"].auth') Create a dockerauth.json file, for example: $ echo "{\"auths\": { \"$REGISTRY\": { \"auth\": $TOKEN } }}" > dockerauth.json Do the image mirroring: $ podman run -v $(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \ registry.redhat.io/openshift4/cnf-tests-rhel8:v4.13 \ /usr/bin/mirror -registry $REGISTRY/cnftests | oc image mirror --insecure=true \ -a=$(pwd)/dockerauth.json -f - Run the tests: $ podman run -v $(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \ -e DISCOVERY_MODE=true -e FEATURES=performance -e IMAGE_REGISTRY=image-registry.openshift-image-registry.svc:5000/cnftests \ cnf-tests-local:latest /usr/bin/test-run.sh -ginkgo.focus="\[performance\]\ Latency\ Test" Mirroring a different set of test images You can optionally change the default upstream images that are mirrored for the latency tests. Procedure The mirror command tries to mirror the upstream images by default. This can be overridden by passing a file with the following format to the image: [ { "registry": "public.registry.io:5000", "image": "imageforcnftests:4.13" } ] Pass the file to the mirror command, for example saving it locally as images.json . With the following command, the local path is mounted in /kubeconfig inside the container and that can be passed to the mirror command. $ podman run -v $(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \ registry.redhat.io/openshift4/cnf-tests-rhel8:v4.13 /usr/bin/mirror \ --registry "my.local.registry:5000/" --images "/kubeconfig/images.json" \ | oc image mirror -f - 14.9. Troubleshooting errors with the cnf-tests container To run latency tests, the cluster must be accessible from within the cnf-tests container. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in as a user with cluster-admin privileges. Procedure Verify that the cluster is accessible from inside the cnf-tests container by running the following command: $ podman run -v $(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \ registry.redhat.io/openshift4/cnf-tests-rhel8:v4.13 \ oc get nodes If this command does not work, an error related to DNS resolution, MTU size, or firewall access might be occurring.
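To help narrow down which of those failure modes applies, one approach (a sketch only; the API hostname is an illustrative assumption) is to compare a verbose query from inside the container with a direct query from the host:
# Re-run the connectivity check with increased client verbosity to surface DNS or TLS failures.
podman run -v $(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \
  registry.redhat.io/openshift4/cnf-tests-rhel8:v4.13 \
  oc get nodes -v=6
# From the host, query the API /version endpoint directly (hostname is illustrative).
curl -k https://api.example-cluster.example.com:6443/version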
[ "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e LATENCY_TEST_RUN=true -e DISCOVERY_MODE=true -e FEATURES=performance registry.redhat.io/openshift4/cnf-tests-rhel8:v4.13 /usr/bin/test-run.sh -ginkgo.focus=\"\\[performance\\]\\ Latency\\ Test\"", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e LATENCY_TEST_RUN=true -e FEATURES=performance -e LATENCY_TEST_RUNTIME=600 -e MAXIMUM_LATENCY=20 -e PERF_TEST_PROFILE=<performance_profile> registry.redhat.io/openshift4/cnf-tests-rhel8:v4.13 /usr/bin/test-run.sh -ginkgo.focus=\"[performance]\\ Latency\\ Test\"", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e LATENCY_TEST_RUN=true -e DISCOVERY_MODE=true -e FEATURES=performance -e ROLE_WORKER_CNF=worker-cnf -e LATENCY_TEST_RUNTIME=600 -e MAXIMUM_LATENCY=20 registry.redhat.io/openshift4/cnf-tests-rhel8:v4.13 /usr/bin/test-run.sh -ginkgo.v -ginkgo.focus=\"hwlatdetect\"", "running /usr/bin/cnftests -ginkgo.v -ginkgo.focus=hwlatdetect I0908 15:25:20.023712 27 request.go:601] Waited for 1.046586367s due to client-side throttling, not priority and fairness, request: GET:https://api.hlxcl6.lab.eng.tlv2.redhat.com:6443/apis/imageregistry.operator.openshift.io/v1?timeout=32s Running Suite: CNF Features e2e integration tests ================================================= Random Seed: 1662650718 Will run 1 of 194 specs [...] • Failure [283.574 seconds] [performance] Latency Test /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:62 with the hwlatdetect image /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:228 should succeed [It] /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:236 Log file created at: 2022/09/08 15:25:27 Running on machine: hwlatdetect-b6n4n Binary: Built with gc go1.17.12 for linux/amd64 Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg I0908 15:25:27.160620 1 node.go:39] Environment information: /proc/cmdline: BOOT_IMAGE=(hd1,gpt3)/ostree/rhcos-c6491e1eedf6c1f12ef7b95e14ee720bf48359750ac900b7863c625769ef5fb9/vmlinuz-4.18.0-372.19.1.el8_6.x86_64 random.trust_cpu=on console=tty0 console=ttyS0,115200n8 ignition.platform.id=metal ostree=/ostree/boot.1/rhcos/c6491e1eedf6c1f12ef7b95e14ee720bf48359750ac900b7863c625769ef5fb9/0 ip=dhcp root=UUID=5f80c283-f6e6-4a27-9b47-a287157483b2 rw rootflags=prjquota boot=UUID=773bf59a-bafd-48fc-9a87-f62252d739d3 skew_tick=1 nohz=on rcu_nocbs=0-3 tuned.non_isolcpus=0000ffff,ffffffff,fffffff0 systemd.cpu_affinity=4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79 intel_iommu=on iommu=pt isolcpus=managed_irq,0-3 nohz_full=0-3 tsc=nowatchdog nosoftlockup nmi_watchdog=0 mce=off skew_tick=1 rcutree.kthread_prio=11 + + I0908 15:25:27.160830 1 node.go:46] Environment information: kernel version 4.18.0-372.19.1.el8_6.x86_64 I0908 15:25:27.160857 1 main.go:50] running the hwlatdetect command with arguments [/usr/bin/hwlatdetect --threshold 1 --hardlimit 1 --duration 100 --window 10000000us --width 950000us] F0908 15:27:10.603523 1 main.go:53] failed to run hwlatdetect command; out: hwlatdetect: test duration 100 seconds detector: tracer 
parameters: Latency threshold: 1us 1 Sample window: 10000000us Sample width: 950000us Non-sampling period: 9050000us Output File: None Starting test test finished Max Latency: 326us 2 Samples recorded: 5 Samples exceeding threshold: 5 ts: 1662650739.017274507, inner:6, outer:6 ts: 1662650749.257272414, inner:14, outer:326 ts: 1662650779.977272835, inner:314, outer:12 ts: 1662650800.457272384, inner:3, outer:9 ts: 1662650810.697273520, inner:3, outer:2 [...] JUnit report was created: /junit.xml/cnftests-junit.xml Summarizing 1 Failure: [Fail] [performance] Latency Test with the hwlatdetect image [It] should succeed /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:476 Ran 1 of 194 Specs in 365.797 seconds FAIL! -- 0 Passed | 1 Failed | 0 Pending | 193 Skipped --- FAIL: TestTest (366.08s) FAIL", "hwlatdetect: test duration 3600 seconds detector: tracer parameters: Latency threshold: 10us Sample window: 1000000us Sample width: 950000us Non-sampling period: 50000us Output File: None Starting test test finished Max Latency: Below threshold Samples recorded: 0", "hwlatdetect: test duration 3600 seconds detector: tracer parameters:Latency threshold: 10usSample window: 1000000us Sample width: 950000usNon-sampling period: 50000usOutput File: None Starting tests:1610542421.275784439, inner:78, outer:81 ts: 1610542444.330561619, inner:27, outer:28 ts: 1610542445.332549975, inner:39, outer:38 ts: 1610542541.568546097, inner:47, outer:32 ts: 1610542590.681548531, inner:13, outer:17 ts: 1610543033.818801482, inner:29, outer:30 ts: 1610543080.938801990, inner:90, outer:76 ts: 1610543129.065549639, inner:28, outer:39 ts: 1610543474.859552115, inner:28, outer:35 ts: 1610543523.973856571, inner:52, outer:49 ts: 1610543572.089799738, inner:27, outer:30 ts: 1610543573.091550771, inner:34, outer:28 ts: 1610543574.093555202, inner:116, outer:63", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e LATENCY_TEST_RUN=true -e DISCOVERY_MODE=true -e FEATURES=performance -e ROLE_WORKER_CNF=worker-cnf -e LATENCY_TEST_CPUS=10 -e LATENCY_TEST_RUNTIME=600 -e MAXIMUM_LATENCY=20 registry.redhat.io/openshift4/cnf-tests-rhel8:v4.13 /usr/bin/test-run.sh -ginkgo.v -ginkgo.focus=\"cyclictest\"", "running /usr/bin/cnftests -ginkgo.v -ginkgo.focus=cyclictest I0908 13:01:59.193776 27 request.go:601] Waited for 1.046228824s due to client-side throttling, not priority and fairness, request: GET:https://api.compute-1.example.com:6443/apis/packages.operators.coreos.com/v1?timeout=32s Running Suite: CNF Features e2e integration tests ================================================= Random Seed: 1662642118 Will run 1 of 194 specs [...] Summarizing 1 Failure: [Fail] [performance] Latency Test with the cyclictest image [It] should succeed /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:220 Ran 1 of 194 Specs in 161.151 seconds FAIL! 
-- 0 Passed | 1 Failed | 0 Pending | 193 Skipped --- FAIL: TestTest (161.48s) FAIL", "running cmd: cyclictest -q -D 10m -p 1 -t 16 -a 2,4,6,8,10,12,14,16,54,56,58,60,62,64,66,68 -h 30 -i 1000 -m Histogram 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000001 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000002 579506 535967 418614 573648 532870 529897 489306 558076 582350 585188 583793 223781 532480 569130 472250 576043 More histogram entries Total: 000600000 000600000 000600000 000599999 000599999 000599999 000599998 000599998 000599998 000599997 000599997 000599996 000599996 000599995 000599995 000599995 Min Latencies: 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 Avg Latencies: 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 Max Latencies: 00005 00005 00004 00005 00004 00004 00005 00005 00006 00005 00004 00005 00004 00004 00005 00004 Histogram Overflows: 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 Histogram Overflow at cycle number: Thread 0: Thread 1: Thread 2: Thread 3: Thread 4: Thread 5: Thread 6: Thread 7: Thread 8: Thread 9: Thread 10: Thread 11: Thread 12: Thread 13: Thread 14: Thread 15:", "running cmd: cyclictest -q -D 10m -p 1 -t 16 -a 2,4,6,8,10,12,14,16,54,56,58,60,62,64,66,68 -h 30 -i 1000 -m Histogram 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000001 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000002 564632 579686 354911 563036 492543 521983 515884 378266 592621 463547 482764 591976 590409 588145 589556 353518 More histogram entries Total: 000599999 000599999 000599999 000599997 000599997 000599998 000599998 000599997 000599997 000599996 000599995 000599996 000599995 000599995 000599995 000599993 Min Latencies: 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 Avg Latencies: 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 Max Latencies: 00493 00387 00271 00619 00541 00513 00009 00389 00252 00215 00539 00498 00363 00204 00068 00520 Histogram Overflows: 00001 00001 00001 00002 00002 00001 00000 00001 00001 00001 00002 00001 00001 00001 00001 00002 Histogram Overflow at cycle number: Thread 0: 155922 Thread 1: 110064 Thread 2: 110064 Thread 3: 110063 155921 Thread 4: 110063 155921 Thread 5: 155920 Thread 6: Thread 7: 110062 Thread 8: 110062 Thread 9: 155919 Thread 10: 110061 155919 Thread 11: 155918 Thread 12: 155918 Thread 13: 110060 Thread 14: 110060 Thread 15: 110059 155917", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e LATENCY_TEST_RUN=true -e DISCOVERY_MODE=true -e FEATURES=performance -e ROLE_WORKER_CNF=worker-cnf -e LATENCY_TEST_CPUS=10 -e LATENCY_TEST_RUNTIME=600 -e MAXIMUM_LATENCY=20 registry.redhat.io/openshift4/cnf-tests-rhel8:v4.13 /usr/bin/test-run.sh -ginkgo.v -ginkgo.focus=\"oslat\"", "running /usr/bin/cnftests -ginkgo.v -ginkgo.focus=oslat I0908 12:51:55.999393 27 request.go:601] Waited for 1.044848101s due to client-side throttling, not priority and fairness, request: GET:https://compute-1.example.com:6443/apis/machineconfiguration.openshift.io/v1?timeout=32s Running Suite: CNF Features e2e integration tests 
================================================= Random Seed: 1662641514 Will run 1 of 194 specs [...] • Failure [77.833 seconds] [performance] Latency Test /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:62 with the oslat image /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:128 should succeed [It] /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:153 The current latency 304 is bigger than the expected one 1 : 1 [...] Summarizing 1 Failure: [Fail] [performance] Latency Test with the oslat image [It] should succeed /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:177 Ran 1 of 194 Specs in 161.091 seconds FAIL! -- 0 Passed | 1 Failed | 0 Pending | 193 Skipped --- FAIL: TestTest (161.42s) FAIL", "podman run -v USD(pwd)/:/kubeconfig:Z -v USD(pwd)/reportdest:<report_folder_path> -e KUBECONFIG=/kubeconfig/kubeconfig -e DISCOVERY_MODE=true -e FEATURES=performance registry.redhat.io/openshift4/cnf-tests-rhel8:v4.13 /usr/bin/test-run.sh --report <report_folder_path> -ginkgo.focus=\"\\[performance\\]\\ Latency\\ Test\"", "podman run -v USD(pwd)/:/kubeconfig:Z -v USD(pwd)/junitdest:<junit_folder_path> -e KUBECONFIG=/kubeconfig/kubeconfig -e DISCOVERY_MODE=true -e FEATURES=performance registry.redhat.io/openshift4/cnf-tests-rhel8:v4.13 /usr/bin/test-run.sh --junit <junit_folder_path> -ginkgo.focus=\"\\[performance\\]\\ Latency\\ Test\"", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e DISCOVERY_MODE=true -e FEATURES=performance -e ROLE_WORKER_CNF=master registry.redhat.io/openshift4/cnf-tests-rhel8:v4.13 /usr/bin/test-run.sh -ginkgo.focus=\"\\[performance\\]\\ Latency\\ Test\"", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:v4.13 /usr/bin/mirror -registry <disconnected_registry> | oc image mirror -f -", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e DISCOVERY_MODE=true -e FEATURES=performance -e IMAGE_REGISTRY=\"<disconnected_registry>\" -e CNF_TESTS_IMAGE=\"cnf-tests-rhel8:v4.13\" /usr/bin/test-run.sh -ginkgo.focus=\"\\[performance\\]\\ Latency\\ Test\"", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e IMAGE_REGISTRY=\"<custom_image_registry>\" -e CNF_TESTS_IMAGE=\"<custom_cnf-tests_image>\" -e FEATURES=performance registry.redhat.io/openshift4/cnf-tests-rhel8:v4.13 /usr/bin/test-run.sh", "oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{\"spec\":{\"defaultRoute\":true}}' --type=merge", "REGISTRY=USD(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')", "oc create ns cnftests", "oc policy add-role-to-user system:image-puller system:serviceaccount:cnf-features-testing:default --namespace=cnftests", "oc policy add-role-to-user system:image-puller system:serviceaccount:performance-addon-operators-testing:default --namespace=cnftests", "SECRET=USD(oc -n cnftests get secret | grep builder-docker | awk {'print USD1'}", "TOKEN=USD(oc -n cnftests get secret USDSECRET -o jsonpath=\"{.data['\\.dockercfg']}\" | base64 --decode | jq '.[\"image-registry.openshift-image-registry.svc:5000\"].auth')", "echo \"{\\\"auths\\\": { \\\"USDREGISTRY\\\": { 
\\\"auth\\\": USDTOKEN } }}\" > dockerauth.json", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:4.13 /usr/bin/mirror -registry USDREGISTRY/cnftests | oc image mirror --insecure=true -a=USD(pwd)/dockerauth.json -f -", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e DISCOVERY_MODE=true -e FEATURES=performance -e IMAGE_REGISTRY=image-registry.openshift-image-registry.svc:5000/cnftests cnf-tests-local:latest /usr/bin/test-run.sh -ginkgo.focus=\"\\[performance\\]\\ Latency\\ Test\"", "[ { \"registry\": \"public.registry.io:5000\", \"image\": \"imageforcnftests:4.13\" } ]", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:v4.13 /usr/bin/mirror --registry \"my.local.registry:5000/\" --images \"/kubeconfig/images.json\" | oc image mirror -f -", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:v4.13 get nodes" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/scalability_and_performance/cnf-performing-platform-verification-latency-tests
1.10.2. GLOBAL SETTINGS
1.10.2. GLOBAL SETTINGS The GLOBAL SETTINGS panel is where the LVS administrator defines the networking details for the primary LVS router's public and private network interfaces. Figure 1.32. The GLOBAL SETTINGS Panel The top half of this panel sets up the primary LVS router's public and private network interfaces. Primary server public IP The publicly routable real IP address for the primary LVS node. Primary server private IP The real IP address for an alternative network interface on the primary LVS node. This address is used solely as an alternative heartbeat channel for the backup router. Use network type Select NAT routing. The next three fields are specifically for the NAT router's virtual network interface, which connects the private network with the real servers. NAT Router IP Enter the private floating IP in this text field. This floating IP should be used as the gateway for the real servers. NAT Router netmask If the NAT router's floating IP needs a particular netmask, select it from the drop-down list. NAT Router device Defines the device name of the network interface for the floating IP address, such as eth1:1 .
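Because the real servers send their return traffic through this floating IP, each real server must use it as its default gateway. As an illustrative sketch only, with a hypothetical NAT Router IP of 10.11.12.10 carried on eth1:1 , the resulting configuration would look like this: # On the active LVS router, the floating IP is raised as the alias device eth1:1 ifconfig eth1:1 10.11.12.10 netmask 255.255.255.0 up # On each real server, outbound traffic is routed back through the NAT router route add default gw 10.11.12.10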
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_suite_overview/s2-piranha-globalset-CSO
Chapter 14. Signal Tapset
Chapter 14. Signal Tapset This family of probe points is used to probe signal activities. It contains the following probe points:
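Although the individual probe point descriptions are not reproduced here, a short script shows how probes from this tapset are typically used. This is a minimal sketch that assumes the commonly documented signal.send probe point and its sig_name , sig_pid , and pid_name variables: # sigwatch.stp: print one line for every signal sent on the system probe signal.send { printf("%s sent %s to %s (pid %d)\n", execname(), sig_name, pid_name, sig_pid) } Run it with stap sigwatch.stp and stop it with Ctrl+C.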
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/signal.stp
Chapter 11. Lifecycle bucket configuration in Multicloud Object Gateway
Chapter 11. Lifecycle bucket configuration in Multicloud Object Gateway Multicloud Object Gateway (MCG) lifecycle provides a way to reduce storage costs due to accumulated data objects. Deleting expired objects is a simplified way to handle unused data. Data expiration is a part of Amazon Web Services (AWS) lifecycle management and sets an expiration date for automatic deletion. The minimal time resolution of the lifecycle expiration is one day. For more information, see Expiring objects . The AWS S3 API is used to configure bucket lifecycle in MCG. For information about the data bucket APIs and their support level, see Support of Multicloud Object Gateway data bucket APIs . There are a few limitations with the expiration rule API for MCG in comparison with AWS: ExpiredObjectDeleteMarker is accepted, but it is not processed. There is no option to define expiration conditions for specific non-current versions.
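Because the configuration goes through the AWS S3 API, a standard S3 client can set an expiration rule against the MCG S3 endpoint. The following is a minimal sketch using the AWS CLI; the bucket name, rule ID, prefix, and endpoint URL are placeholders for your own values: # Write a lifecycle rule that expires objects under logs/ after 30 days cat > lifecycle.json <<'EOF' { "Rules": [ { "ID": "expire-old-logs", "Status": "Enabled", "Filter": { "Prefix": "logs/" }, "Expiration": { "Days": 30 } } ] } EOF # Apply the rule to the bucket through the MCG S3 endpoint aws s3api put-bucket-lifecycle-configuration --bucket <bucket_name> --endpoint-url <mcg_s3_endpoint> --lifecycle-configuration file://lifecycle.json Objects under logs/ would then be deleted roughly 30 days after creation, subject to the one-day resolution noted above.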
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/managing_hybrid_and_multicloud_resources/con_lifecycle-bucket-configuration-in-multicloud-object-gateway_rhodf
Chapter 1. Authorization APIs
Chapter 1. Authorization APIs 1.1. LocalResourceAccessReview [authorization.openshift.io/v1] Description LocalResourceAccessReview is a means to request a list of which users and groups are authorized to perform the action specified by spec in a particular namespace Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.2. LocalSubjectAccessReview [authorization.openshift.io/v1] Description LocalSubjectAccessReview is an object for requesting information about whether a user or group can perform an action in a particular namespace Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.3. ResourceAccessReview [authorization.openshift.io/v1] Description ResourceAccessReview is a means to request a list of which users and groups are authorized to perform the action specified by spec Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.4. SelfSubjectRulesReview [authorization.openshift.io/v1] Description SelfSubjectRulesReview is a resource you can create to determine which actions you can perform in a namespace Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.5. SubjectAccessReview [authorization.openshift.io/v1] Description SubjectAccessReview is an object for requesting information about whether a user or group can perform an action Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.6. SubjectRulesReview [authorization.openshift.io/v1] Description SubjectRulesReview is a resource you can create to determine which actions another user can perform in a namespace Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.7. TokenRequest [authentication.k8s.io/v1] Description TokenRequest requests a token for a given service account. Type object 1.8. TokenReview [authentication.k8s.io/v1] Description TokenReview attempts to authenticate a token to a known user. Note: TokenReview requests may be cached by the webhook token authenticator plugin in the kube-apiserver. Type object 1.9. LocalSubjectAccessReview [authorization.k8s.io/v1] Description LocalSubjectAccessReview checks whether or not a user or group can perform an action in a given namespace. Having a namespace scoped resource makes it much easier to grant namespace scoped policy that includes permissions checking. Type object 1.10. SelfSubjectAccessReview [authorization.k8s.io/v1] Description SelfSubjectAccessReview checks whether or not the current user can perform an action. Not filling in a spec.namespace means "in all namespaces". Self is a special case, because users should always be able to check whether they can perform an action. Type object 1.11. SelfSubjectRulesReview [authorization.k8s.io/v1] Description SelfSubjectRulesReview enumerates the set of actions the current user can perform within a namespace. The returned list of actions may be incomplete depending on the server's authorization mode, and any errors experienced during the evaluation. SelfSubjectRulesReview should be used by UIs to show/hide actions, or to quickly let an end user reason about their permissions.
It should NOT be used by external systems to drive authorization decisions, as this raises confused deputy, cache lifetime/revocation, and correctness concerns. SubjectAccessReview and LocalSubjectAccessReview are the correct way to defer authorization decisions to the API server. Type object 1.12. SubjectAccessReview [authorization.k8s.io/v1] Description SubjectAccessReview checks whether or not a user or group can perform an action. Type object
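As a quick illustration of these review resources in practice, a SelfSubjectAccessReview is what oc auth can-i issues on your behalf, and the same review object can also be submitted directly. This is a minimal sketch with a hypothetical namespace name: # Shorthand: issues a SelfSubjectAccessReview for the current user oc auth can-i create pods -n my-project # Equivalent review object, submitted explicitly; the response includes status.allowed oc create -f - -o yaml <<'EOF' apiVersion: authorization.k8s.io/v1 kind: SelfSubjectAccessReview spec: resourceAttributes: namespace: my-project verb: create resource: pods EOF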
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/authorization_apis/authorization-apis
15.5.2. Useful Websites
15.5.2. Useful Websites http://www.rpm.org/ - The RPM website. http://www.redhat.com/mailman/listinfo/rpm-list/ - The RPM mailing list is archived here. To subscribe, send mail to [email protected] with the word subscribe in the subject line.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Package_Management_with_RPM-Additional_Resources-Useful_Websites
Chapter 14. Installing on IBM Power
Chapter 14. Installing on IBM Power 14.1. Preparing to install on IBM Power 14.1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . 14.1.2. Choosing a method to install OpenShift Container Platform on IBM Power You can install a cluster on IBM Power infrastructure that you provision, by using one of the following methods: Installing a cluster on IBM Power : You can install OpenShift Container Platform on IBM Power infrastructure that you provision. Installing a cluster on IBM Power in a restricted network : You can install OpenShift Container Platform on IBM Power infrastructure that you provision in a restricted or disconnected network, by using an internal mirror of the installation release content. You can use this method to install a cluster that does not require an active internet connection to obtain the software components. You can also use this installation method to ensure that your clusters only use container images that satisfy your organizational controls on external content. 14.2. Installing a cluster on IBM Power In OpenShift Container Platform version 4.9, you can install a cluster on IBM Power infrastructure that you provision. Important Additional considerations exist for non-bare metal platforms. Review the information in the guidelines for deploying OpenShift Container Platform on non-tested platforms before you install an OpenShift Container Platform cluster. 14.2.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . Before you begin the installation process, you must clean the installation directory. This ensures that the required installation files are created and updated during the installation process. You provisioned persistent storage using NFS for your cluster. To deploy a private image registry, your storage must provide ReadWriteMany access modes. If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 14.2.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.9, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 14.2.3. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. 
This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 14.2.3.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 14.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 7.9, or RHEL 8.4. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 8 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 14.2.3.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 14.2. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage IOPS [2] Bootstrap RHCOS 2 16 GB 100 GB 300 Control plane RHCOS 2 16 GB 100 GB 300 Compute RHCOS 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and planned for removal in a future release of OpenShift Container Platform 4. 14.2.3.3. 
Minimum IBM Power requirements You can install OpenShift Container Platform version 4.9 on the following IBM hardware: IBM Power8, Power9, or Power10 processor-based systems Hardware requirements Six IBM Power bare metal servers or six LPARs across multiple PowerVM servers Operating system requirements One instance of an IBM Power8, Power9, or Power10 processor-based system On your IBM Power instance, set up: Three guest virtual machines for OpenShift Container Platform control plane machines Two guest virtual machines for OpenShift Container Platform compute machines One guest virtual machine for the temporary OpenShift Container Platform bootstrap machine Disk storage for the IBM Power guest virtual machines Storage provisioned by the Virtual I/O Server using vSCSI, NPIV (N-Port ID Virtualization) or SSP (shared storage pools) Network for the PowerVM guest virtual machines Virtualized by the Virtual I/O Server using Shared Ethernet Adapter Virtualized by the Virtual I/O Server using IBM vNIC Storage / main memory 100 GB / 16 GB for OpenShift Container Platform control plane machines 100 GB / 8 GB for OpenShift Container Platform compute machines 100 GB / 16 GB for the temporary OpenShift Container Platform bootstrap machine 14.2.3.4. Recommended IBM Power system requirements Hardware requirements Six IBM Power bare metal servers or six LPARs across multiple PowerVM servers Operating system requirements One instance of an IBM Power8, Power9, or Power10 processor-based system On your IBM Power instance, set up: Three guest virtual machines for OpenShift Container Platform control plane machines Two guest virtual machines for OpenShift Container Platform compute machines One guest virtual machine for the temporary OpenShift Container Platform bootstrap machine Disk storage for the IBM Power guest virtual machines Storage provisioned by the Virtual I/O Server using vSCSI, NPIV (N-Port ID Virtualization) or SSP (shared storage pools) Network for the PowerVM guest virtual machines Virtualized by the Virtual I/O Server using Shared Ethernet Adapter Virtualized by the Virtual I/O Server using IBM vNIC Storage / main memory 120 GB / 32 GB for OpenShift Container Platform control plane machines 120 GB / 32 GB for OpenShift Container Platform compute machines 120 GB / 16 GB for the temporary OpenShift Container Platform bootstrap machine 14.2.3.5. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 14.2.3.6. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. 
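For example, a static configuration can be passed on the RHCOS kernel command line with dracut-style ip= and nameserver= arguments. The following is a minimal sketch that reuses the sample addresses from the DNS examples later in this chapter and assumes a gateway of 192.168.1.1 and an interface named enp1s0 (both hypothetical, hardware-dependent values): ip=192.168.1.97::192.168.1.1:255.255.255.0:master0.ocp4.example.com:enp1s0:none nameserver=192.168.1.5 The fields are the node IP address, gateway, netmask, hostname, interface name, and none to disable DHCP on that interface.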
After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines. Note If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 14.2.3.6.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 14.2.3.6.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 14.3. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN and Geneve 6081 VXLAN and Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 14.4. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 14.5. 
Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines read the information and can sync the clock with the NTP servers. Additional resources Configuring chrony time service 14.2.3.7. User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. Note It is recommended to use a DHCP server to provide the hostnames to each cluster node. See the DHCP recommendations for user-provisioned infrastructure section for more information. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 14.6. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. 
For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Control plane machines <master><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <worker><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 14.2.3.7.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 14.1. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; master0.ocp4.example.com. IN A 192.168.1.97 5 master1.ocp4.example.com. IN A 192.168.1.98 6 master2.ocp4.example.com. IN A 192.168.1.99 7 ; worker0.ocp4.example.com. IN A 192.168.1.11 8 worker1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 
8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 14.2. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR master0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR master1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR master2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com. 8 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. 14.2.3.8. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application ingress load balancing infrastructure. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the API routes. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Note Session persistence is not required for the API load balancer to function properly. Configure the following ports on both the front and back of the load balancers: Table 14.7. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. 
X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the ingress routes. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 14.8. Application ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic 1936 The worker nodes that run the Ingress Controller pods, by default. You must configure the /healthz/ready endpoint for the ingress health check probe. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Note A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. You must configure the Ingress router after the control plane initializes. 14.2.3.8.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Example 14.3. 
Sample API and application ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 frontend stats bind *:1936 mode http log global maxconn 10 stats enable stats hide-version stats refresh 30s stats show-node stats show-desc Stats for ocp4 cluster 1 stats auth admin:ocp4 stats uri /stats listen api-server-6443 2 bind *:6443 mode tcp server bootstrap bootstrap.ocp4.example.com:6443 check inter 1s backup 3 server master0 master0.ocp4.example.com:6443 check inter 1s server master1 master1.ocp4.example.com:6443 check inter 1s server master2 master2.ocp4.example.com:6443 check inter 1s listen machine-config-server-22623 4 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 5 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 6 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 7 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s 1 In the example, the cluster name is ocp4 . 2 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 3 5 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 4 Port 22623 handles the machine config server traffic and points to the control plane machines. 6 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 7 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . 14.2.4. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. 
This includes configuring IP networking and network connectivity for your cluster nodes, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure If you are using DHCP to provide the IP networking configuration to your cluster nodes, configure your DHCP service. Add persistent IP addresses for the nodes to your DHCP server configuration. In your configuration, match the MAC address of the relevant network interface to the intended IP address for each node. When you use DHCP to configure IP addressing for the cluster machines, the machines also obtain the DNS server information through DHCP. Define the persistent DNS server address that is used by the cluster nodes through your DHCP server configuration. Note If you are not using a DHCP service, you must provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. Define the hostnames of your cluster nodes in your DHCP server configuration. See the Setting the cluster node hostnames through DHCP section for details about hostname considerations. Note If you are not using a DHCP service, the cluster nodes obtain their hostname through a reverse DNS lookup. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See the Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Set up the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure.
See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. 14.2.5. Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure. Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 0 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com. 0 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: USD dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 0 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console: USD dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 0 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name. Check that the result points to the IP address of the bootstrap node: USD dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain> Example output bootstrap.ocp4.example.com. 0 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 0 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 
0 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output 96.1.168.192.in-addr.arpa. 0 IN PTR bootstrap.ocp4.example.com. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. 14.2.6. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure that your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.
If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 . Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 14.2.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 14.2.8. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.9. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.9 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.9 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.9 MacOSX Client entry and save the file. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 14.2.9. Manually creating the installation configuration file For user-provisioned installations of OpenShift Container Platform, you manually generate your installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Note For some platform types, you can alternatively run ./openshift-install create install-config --dir <installation_directory> to generate an install-config.yaml file. You can provide details about your cluster configuration at the prompts. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now. 14.2.9.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide a customized install-config.yaml installation configuration file that describes the details for your environment. Note After installation, you cannot modify these parameters in the install-config.yaml file. Important The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct. 14.2.9.1.1.
14.2.9.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 14.9. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: aws , baremetal , azure , gcp , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } }
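The pullSecret value must remain valid JSON when you paste it into install-config.yaml. A quick way to confirm that before editing the file, assuming the jq utility is installed and that you saved the downloaded secret as pull-secret.txt (a hypothetical file name used only for this example):

jq . pull-secret.txt > /dev/null && echo "pull secret parses as valid JSON"

If jq reports a parse error, re-download the secret rather than hand-editing it.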
14.2.9.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 14.10. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. If you specify multiple IP kernel arguments, the machineNetwork.cidr value must be the CIDR of the primary network. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in.
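Taken together, the defaults described in this table correspond to the following networking stanza. The combined form is shown only to illustrate how the individual parameters fit together in install-config.yaml; it repeats the documented default values rather than introducing new ones:

networking:
  networkType: OpenShiftSDN
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  machineNetwork:
  - cidr: 10.0.0.0/16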
14.2.9.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 14.11. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are ppc64le (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are ppc64le (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. sshKey The SSH key or keys to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. One or more keys. For example:
sshKey:
  <key1>
  <key2>
  <key3>
14.2.9.2. Sample install-config.yaml file for IBM Power You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters.
apiVersion: v1
baseDomain: example.com 1
compute: 2
- hyperthreading: Enabled 3
  name: worker
  replicas: 0 4
  architecture: ppc64le
controlPlane: 5
  hyperthreading: Enabled 6
  name: master
  replicas: 3 7
  architecture: ppc64le
metadata:
  name: test 8
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14 9
    hostPrefix: 23 10
  networkType: OpenShiftSDN
  serviceNetwork: 11
  - 172.30.0.0/16
platform:
  none: {} 12
fips: false 13
pullSecret: '{"auths": ...}' 14
sshKey: 'ssh-ed25519 AAAA...' 15
1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings.
To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Specifies whether to enable or disable simultaneous multithreading (SMT), or hyperthreading. By default, SMT is enabled to increase the performance of the cores in your machines. You can disable it by setting the parameter value to Disabled . If you disable SMT, you must disable it in all cluster machines; this includes both control plane and compute machines. Note Simultaneous multithreading (SMT) is enabled by default. If SMT is not enabled in your BIOS settings, the hyperthreading parameter has no effect. Important If you disable hyperthreading , whether in the BIOS or in the install-config.yaml file, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned infrastructure. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. In user-provisioned installations, you must manually deploy the compute machines before you finish installing the cluster. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 7 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note The Class E CIDR range is reserved for future use. To use the Class E CIDR range, you must ensure that your networking environment accepts the IP addresses within the Class E CIDR range. 10 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 11 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 12 You must set the platform to none . You cannot provide additional platform configuration variables for IBM Power infrastructure. Important Clusters that are installed with the platform type none are unable to use some features, such as managing compute machines with the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that would normally support the feature. This parameter cannot be changed after installation. 13 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled.
If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. 14 The pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 15 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 14.2.9.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. 
The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 14.2.9.4. Configuring a three-node cluster You can optionally deploy zero compute machines in a bare metal cluster that consists of three control plane machines only. This provides smaller, more resource efficient clusters for cluster administrators and developers to use for testing, development, and production. In three-node OpenShift Container Platform environments, the three control plane machines are schedulable, which means that your application workloads are scheduled to run on them. Prerequisites You have an existing install-config.yaml file. Procedure Ensure that the number of compute replicas is set to 0 in your install-config.yaml file, as shown in the following compute stanza: compute: - name: worker platform: {} replicas: 0 Note You must set the value of the replicas parameter for the compute machines to 0 when you install OpenShift Container Platform on user-provisioned infrastructure, regardless of the number of compute machines you are deploying. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. This does not apply to user-provisioned installations, where the compute machines are deployed manually. For three-node cluster installations, follow these steps: If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. See the Load balancing requirements for user-provisioned infrastructure section for more information. When you create the Kubernetes manifest files in the following procedure, ensure that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml file is set to true . This enables your application workloads to run on the control plane nodes. Do not deploy any compute nodes when you create the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 14.2.10. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group and these fields cannot be changed: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network provider, such as OpenShift SDN or OVN-Kubernetes. 
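After installation, you can inspect how these inherited values were applied. The following standard oc commands are shown only as an optional way to review the configuration on a running cluster:

oc get network.config.openshift.io cluster -o yaml
oc get network.operator.openshift.io cluster -o yaml

The first command shows the installation-time values from the Network.config.openshift.io API group, and the second shows the live Cluster Network Operator configuration, including the defaultNetwork object that is described next.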
You can specify the cluster network provider configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 14.2.10.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 14.12. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes Container Network Interface (CNI) network providers support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the Container Network Interface (CNI) cluster network provider for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network provider, the kube-proxy configuration has no effect. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 14.13. defaultNetwork object Field Type Description type string Either OpenShiftSDN or OVNKubernetes . The cluster network provider is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OpenShift SDN Container Network Interface (CNI) cluster network provider by default. openshiftSDNConfig object This object is only valid for the OpenShift SDN cluster network provider. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes cluster network provider. Configuration for the OpenShift SDN CNI cluster network provider The following table describes the configuration fields for the OpenShift SDN Container Network Interface (CNI) cluster network provider. Table 14.14. openshiftSDNConfig object Field Type Description mode string Configures the network isolation mode for OpenShift SDN. The default value is NetworkPolicy . The values Multitenant and Subnet are available for backwards compatibility with OpenShift Container Platform 3.x but are not recommended. This value cannot be changed after cluster installation. mtu integer The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. 
For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1450 . This value cannot be changed after cluster installation. vxlanPort integer The port to use for all VXLAN packets. The default value is 4789 . This value cannot be changed after cluster installation. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number. On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999 . Example OpenShift SDN configuration
defaultNetwork:
  type: OpenShiftSDN
  openshiftSDNConfig:
    mode: NetworkPolicy
    mtu: 1450
    vxlanPort: 4789
Configuration for the OVN-Kubernetes CNI cluster network provider The following table describes the configuration fields for the OVN-Kubernetes CNI cluster network provider. Table 14.15. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . This value cannot be changed after cluster installation. genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify an empty object to enable IPsec encryption. This value cannot be changed after cluster installation. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the default audit log settings are used. Table 14.16. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Example OVN-Kubernetes configuration
defaultNetwork:
  type: OVNKubernetes
  ovnKubernetesConfig:
    mtu: 1400
    genevePort: 6081
    ipsecConfig: {}
kubeProxyConfig object configuration The values for the kubeProxyConfig object are defined in the following table: Table 14.17. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules.
The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 14.2.11. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Note The installation program that generates the manifest and Ignition files is architecture specific and can be obtained from the client image mirror . The Linux version of the installation program runs on ppc64le only. This installer program is also available as a Mac OS version. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become worker nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. 
Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory:
.
├── auth
│   ├── kubeadmin-password
│   └── kubeconfig
├── bootstrap.ign
├── master.ign
├── metadata.json
└── worker.ign
14.2.12. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on IBM Power infrastructure that you provision, you must install Red Hat Enterprise Linux CoreOS (RHCOS) on the machines. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS machines have rebooted. Follow either the steps to use an ISO image or network PXE booting to install RHCOS on the machines. 14.2.12.1. Installing RHCOS by using an ISO image You can use an ISO image to install RHCOS on the machines. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have an HTTP server that can be accessed from your computer, and from the machines that you create. You have reviewed the Advanced RHCOS installation configuration section for different ways to configure features, such as networking and disk partitioning. Procedure Obtain the SHA512 digest for each of your Ignition config files. For example, you can use the following on a system running Linux to get the SHA512 digest for your bootstrap.ign Ignition config file: USD sha512sum <installation_directory>/bootstrap.ign The digests are provided to the coreos-installer in a later step to validate the authenticity of the Ignition config files on the cluster nodes. Upload the bootstrap, control plane, and compute node Ignition config files that the installation program created to your HTTP server. Note the URLs of these files. Important You can add or change configuration settings in your Ignition configs before saving them to your HTTP server. If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. From the installation host, validate that the Ignition config files are available on the URLs. The following example gets the Ignition config file for the bootstrap node: USD curl -k http://<HTTP_server>/bootstrap.ign 1 Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{"ignition":{"version":"3.2.0"},"passwd":{"users":[{"name":"core","sshAuthorizedKeys":["ssh-rsa... Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the Ignition config files for the control plane and compute nodes are also available.
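To avoid repeating the digest and availability checks by hand, you can loop over all three files. This is an illustrative sketch, not part of the official procedure; it assumes that the Ignition files are still in <installation_directory> and are served from http://<HTTP_server>:

for node_type in bootstrap master worker; do
    # Record the digest that coreos-installer verifies in a later step.
    sha512sum <installation_directory>/${node_type}.ign
    # Confirm that the served copy is reachable; -k skips TLS verification,
    # matching the curl example above.
    curl -sk -o /dev/null -w "%{http_code}  ${node_type}.ign\n" http://<HTTP_server>/${node_type}.ign
done

A 200 status for each file confirms that the URLs you will pass to coreos-installer are valid.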
Although it is possible to obtain the RHCOS images that are required for your preferred method of installing operating system instances from the RHCOS image mirror page, the recommended way to obtain the correct version of your RHCOS images is from the output of the openshift-install command: USD openshift-install coreos print-stream-json | grep '\.iso[^.]' Example output
"location": "<url>/art/storage/releases/rhcos-4.9-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso",
"location": "<url>/art/storage/releases/rhcos-4.9-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso",
"location": "<url>/art/storage/releases/rhcos-4.9-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso",
"location": "<url>/art/storage/releases/rhcos-4.9/<release>/x86_64/rhcos-<release>-live.x86_64.iso",
Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Use only ISO images for this procedure. RHCOS qcow2 images are not supported for this installation type. ISO file names resemble the following example: rhcos-<version>-live.<architecture>.iso Use the ISO to start the RHCOS installation. Use one of the following installation options: Burn the ISO image to a disk and boot it directly. Use ISO redirection by using a lights-out management (LOM) interface. Boot the RHCOS ISO image without specifying any options or interrupting the live boot sequence. Wait for the installer to boot into a shell prompt in the RHCOS live environment. Note It is possible to interrupt the RHCOS installation boot process to add kernel arguments. However, for this ISO procedure you should use the coreos-installer command as outlined in the following steps, instead of adding kernel arguments. Run the coreos-installer command and specify the options that meet your installation requirements. At a minimum, you must specify the URL that points to the Ignition config file for the node type, and the device that you are installing to: USD sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2 1 You must run the coreos-installer command by using sudo , because the core user does not have the required root privileges to perform the installation. 2 The --ignition-hash option is required when the Ignition config file is obtained through an HTTP URL to validate the authenticity of the Ignition config file on the cluster node. <digest> is the Ignition config file SHA512 digest obtained in a preceding step. Note If you want to provide your Ignition config files through an HTTPS server that uses TLS, you can add the internal certificate authority (CA) to the system trust store before running coreos-installer . The following example initializes a bootstrap node installation to the /dev/sda device. The Ignition config file for the bootstrap node is obtained from an HTTP web server with the IP address 192.168.1.2: USD sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b Monitor the progress of the RHCOS installation on the console of the machine.
Important Be sure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. After RHCOS installs, you must reboot the system. During the system reboot, it applies the Ignition config file that you specified. Continue to create the other machines for your cluster. Important You must create the bootstrap and control plane machines at this time. If the control plane machines are not made schedulable, also create at least two compute machines before you install OpenShift Container Platform. If the required network, DNS, and load balancer infrastructure are in place, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS nodes have rebooted. Note RHCOS nodes do not include a default password for the core user. You can access the nodes by running ssh core@<node>.<cluster_name>.<base_domain> as a user with access to the SSH private key that is paired to the public key that you specified in your install-config.yaml file. OpenShift Container Platform 4 cluster nodes running RHCOS are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, when investigating installation issues, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on a target node, SSH access might be required for debugging or disaster recovery. 14.2.12.1.1. Advanced RHCOS installation reference This section illustrates the networking configuration and other advanced options that allow you to modify the Red Hat Enterprise Linux CoreOS (RHCOS) manual installation process. The following tables describe the kernel arguments and command-line options you can use with the RHCOS live installer and the coreos-installer command. 14.2.12.1.1.1. Networking and bonding options for ISO installations If you install RHCOS from an ISO image, you can add kernel arguments manually when you boot the image to configure networking for a node. If no networking arguments are specified, DHCP is activated in the initramfs when RHCOS detects that networking is required to fetch the Ignition config file. Important When adding networking arguments manually, you must also add the rd.neednet=1 kernel argument to bring the network up in the initramfs. The following examples show how to configure networking and bonding on your RHCOS nodes for ISO installations by using the ip= , nameserver= , and bond= kernel arguments. Note Ordering is important when adding the kernel arguments: ip= , nameserver= , and then bond= . The networking options are passed to the dracut tool during system boot. For more information about the networking options supported by dracut , see the dracut.cmdline manual page.
Configuring DHCP or static IP addresses To configure an IP address, either use DHCP ( ip=dhcp ) or set an individual static IP address ( ip=<host_ip> ). If setting a static IP, you must then identify the DNS server IP address ( nameserver=<dns_ip> ) on each node. The following example sets: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The hostname to core0.example.com The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41 Note When you use DHCP to configure IP addressing for the RHCOS machines, the machines also obtain the DNS server information through DHCP. For DHCP-based deployments, you can define the DNS server address that is used by the RHCOS nodes through your DHCP server configuration. Configuring an IP address without a static hostname You can configure an IP address without assigning a static hostname. If a static hostname is not set by the user, it will be picked up and automatically set by a reverse DNS lookup. To configure an IP address without a static hostname refer to the following example: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41 Specifying multiple network interfaces You can specify multiple network interfaces by setting multiple ip= entries. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring default gateway and route Optional: You can configure routes to additional networks by setting an rd.route= value. Note When you configure one or multiple networks, one default gateway is required. If the additional network gateway is different from the primary network gateway, the default gateway must be the primary network gateway. Run the following command to configure the default gateway: ip=::10.10.10.254:::: Enter the following command to configure the route for the additional network: rd.route=20.20.20.0/24:20.20.20.254:enp2s0 Disabling DHCP on a single interface You can disable DHCP on a single interface, such as when there are two or more network interfaces and only one interface is being used. In the example, the enp1s0 interface has a static networking configuration and DHCP is disabled for enp2s0 , which is not used: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none Combining DHCP and static IP configurations You can combine DHCP and static IP configurations on systems with multiple network interfaces, for example: ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring VLANs on individual interfaces Optional: You can configure VLANs on individual interfaces by using the vlan= parameter. 
To configure a VLAN on a network interface and use a static IP address, run the following command: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0 To configure a VLAN on a network interface and to use DHCP, run the following command: ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0 Providing multiple DNS servers You can provide multiple DNS servers by adding a nameserver= entry for each server, for example: nameserver=1.1.1.1 nameserver=8.8.8.8 Bonding multiple network interfaces to a single interface Optional: You can bond multiple network interfaces to a single interface by using the bond= option. Refer to the following examples: The syntax for configuring a bonded interface is: bond=name[:network_interfaces][:options] name is the bonding device name ( bond0 ), network_interfaces represents a comma-separated list of physical (ethernet) interfaces ( em1,em2 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . For example: bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none Configuring VLANs on bonded interfaces Optional: You can configure VLANs on bonded interfaces by using the vlan= parameter. To configure the bonded interface with a VLAN and to use DHCP, run the following command: ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0 Use the following example to configure the bonded interface with a VLAN and to use a static IP address: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0 Using network teaming Optional: You can use network teaming as an alternative to bonding by using the team= parameter: The syntax for configuring a team interface is: team=name[:network_interfaces] name is the team device name ( team0 ) and network_interfaces represents a comma-separated list of physical (ethernet) interfaces ( em1, em2 ). Teaming is planned to be deprecated when RHCOS switches to an upcoming version of RHEL. For more information, see this Red Hat Knowledgebase Article . Use the following example to configure a network team: team=team0:em1,em2 ip=team0:dhcp
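As the examples above show, the individual options compose into a single set of kernel arguments. For a bonded, statically addressed node, the combined line follows the documented ordering ( ip= , nameserver= , and then bond= ) and includes rd.neednet=1 as noted earlier. The following line is only an illustration built from the examples in this section:

ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none nameserver=4.4.4.41 bond=bond0:em1,em2:mode=active-backup rd.neednet=1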
14.2.12.2. Installing RHCOS by using PXE booting You can use PXE booting to install RHCOS on the machines. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have configured suitable PXE infrastructure. You have an HTTP server that can be accessed from your computer, and from the machines that you create. You have reviewed the Advanced RHCOS installation configuration section for different ways to configure features, such as networking and disk partitioning. Procedure Upload the bootstrap, control plane, and compute node Ignition config files that the installation program created to your HTTP server. Note the URLs of these files. Important You can add or change configuration settings in your Ignition configs before saving them to your HTTP server. If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. From the installation host, validate that the Ignition config files are available on the URLs. The following example gets the Ignition config file for the bootstrap node: USD curl -k http://<HTTP_server>/bootstrap.ign 1 Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{"ignition":{"version":"3.2.0"},"passwd":{"users":[{"name":"core","sshAuthorizedKeys":["ssh-rsa... Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the Ignition config files for the control plane and compute nodes are also available. Although it is possible to obtain the RHCOS kernel , initramfs and rootfs files that are required for your preferred method of installing operating system instances from the RHCOS image mirror page, the recommended way to obtain the correct version of your RHCOS files is from the output of the openshift-install command: USD openshift-install coreos print-stream-json | grep -Eo '"https.*(kernel-|initramfs.|rootfs.)\w+(\.img)?"' Example output
"<url>/art/storage/releases/rhcos-4.9-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64"
"<url>/art/storage/releases/rhcos-4.9-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img"
"<url>/art/storage/releases/rhcos-4.9-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img"
"<url>/art/storage/releases/rhcos-4.9-ppc64le/<release>/ppc64le/rhcos-<release>-live-kernel-ppc64le"
"<url>/art/storage/releases/rhcos-4.9-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img"
"<url>/art/storage/releases/rhcos-4.9-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img"
"<url>/art/storage/releases/rhcos-4.9-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x"
"<url>/art/storage/releases/rhcos-4.9-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img"
"<url>/art/storage/releases/rhcos-4.9-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img"
"<url>/art/storage/releases/rhcos-4.9/<release>/x86_64/rhcos-<release>-live-kernel-x86_64"
"<url>/art/storage/releases/rhcos-4.9/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img"
"<url>/art/storage/releases/rhcos-4.9/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img"
Important The RHCOS artifacts might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate kernel , initramfs , and rootfs artifacts described below for this procedure. RHCOS QCOW2 images are not supported for this installation type. The file names contain the OpenShift Container Platform version number. They resemble the following examples: kernel : rhcos-<version>-live-kernel-<architecture> initramfs : rhcos-<version>-live-initramfs.<architecture>.img rootfs : rhcos-<version>-live-rootfs.<architecture>.img Upload the rootfs , kernel , and initramfs files to your HTTP server. Important If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. Configure the network boot infrastructure so that the machines boot from their local disks after RHCOS is installed on them. Configure PXE installation for the RHCOS images and begin the installation.
Modify the following example menu entry for your environment and verify that the image and Ignition files are properly accessible:
DEFAULT pxeboot
TIMEOUT 20
PROMPT 0
LABEL pxeboot
    KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1
    APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3
1 Specify the location of the live kernel file that you uploaded to your HTTP server. The URL must be HTTP, TFTP, or FTP; HTTPS and NFS are not supported. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the locations of the RHCOS files that you uploaded to your HTTP server. The initrd parameter value is the location of the initramfs file, the coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file. You can also add more kernel arguments to the APPEND line to configure networking or other boot options. Note This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the APPEND line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? . Monitor the progress of the RHCOS installation on the console of the machine. Important Be sure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. After RHCOS installs, the system reboots. During reboot, the system applies the Ignition config file that you specified. Continue to create the machines for your cluster. Important You must create the bootstrap and control plane machines at this time. If the control plane machines are not made schedulable, also create at least two compute machines before you install the cluster. If the required network, DNS, and load balancer infrastructure are in place, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS nodes have rebooted. Note RHCOS nodes do not include a default password for the core user. You can access the nodes by running ssh core@<node>.<cluster_name>.<base_domain> as a user with access to the SSH private key that is paired to the public key that you specified in your install-config.yaml file. OpenShift Container Platform 4 cluster nodes running RHCOS are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, when investigating installation issues, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on a target node, SSH access might be required for debugging or disaster recovery.
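As a final check of the PXE setup, you can confirm from the installation host that every artifact the menu entry references is reachable over HTTP. This is an illustrative loop, assuming the file names match the release that you downloaded and uploaded earlier:

for f in rhcos-<version>-live-kernel-<architecture> \
         rhcos-<version>-live-initramfs.<architecture>.img \
         rhcos-<version>-live-rootfs.<architecture>.img \
         bootstrap.ign; do
    curl -s -o /dev/null -w "%{http_code}  $f\n" http://<HTTP_server>/$f
done

Each line of output should show a 200 status; anything else points at an upload or path problem before you spend time watching a failed PXE boot.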
14.2.12.3. Enabling multipathing with kernel arguments on RHCOS In OpenShift Container Platform 4.9 or later, during installation, you can enable multipathing for provisioned nodes. RHCOS supports multipathing on the primary disk. Multipathing provides added benefits of stronger resilience to hardware failure to achieve higher host availability. During the initial cluster creation, you might want to add kernel arguments to all master or worker nodes. To add kernel arguments to master or worker nodes, you can create a MachineConfig object and inject that object into the set of manifest files used by Ignition during cluster setup. Procedure Change to the directory that contains the installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> Decide if you want to add kernel arguments to worker or control plane nodes. Create a machine config file. For example, create a 99-master-kargs-mpath.yaml that instructs the cluster to add the master label and identify the multipath kernel argument:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: "master"
  name: 99-master-kargs-mpath
spec:
  kernelArguments:
    - 'rd.multipath=default'
    - 'root=/dev/disk/by-label/dm-mpath-root'
To enable multipathing on worker nodes: Create a machine config file. For example, create a 99-worker-kargs-mpath.yaml that instructs the cluster to add the worker label and identify the multipath kernel argument:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: "worker"
  name: 99-worker-kargs-mpath
spec:
  kernelArguments:
    - 'rd.multipath=default'
    - 'root=/dev/disk/by-label/dm-mpath-root'
You can now continue on to create the cluster. Important Additional post-installation steps are required to fully enable multipathing. For more information, see "Enabling multipathing with kernel arguments on RHCOS" in Post-installation machine configuration tasks . In case of MPIO failure, use the bootlist command to update the boot device list with alternate logical device names. The command displays a boot list and designates the possible boot devices for when the system is booted in normal mode. To display a boot list and specify the possible boot devices if the system is booted in normal mode, enter the following command: USD bootlist -m normal -o sda To update the boot list for normal mode and add alternate device names, enter the following command: USD bootlist -m normal -o /dev/sdc /dev/sdd /dev/sde sdc sdd sde If the original boot disk path is down, the node reboots from the alternate device registered in the normal boot device list. 14.2.13. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Your machines have direct internet access or have an HTTP or HTTPS proxy available. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output
INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443...
INFO API v1.22.1 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. 14.2.14. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 14.2.15. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters.
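On user-provisioned infrastructure, administrators often script these approvals during installation. The following is a minimal sketch, not taken from the product documentation: it assumes oc is authenticated with cluster-admin privileges and approves pending CSRs whose requestor is the node-bootstrapper service account or a node identity. Production automation must also confirm the identity of the requesting node, as described in the note that follows.

# Illustration only: periodically approve pending CSRs from expected requestors.
while true; do
  oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}} {{.spec.username}}{{"\n"}}{{end}}{{end}}' \
    | awk '$2 ~ /^system:(serviceaccount:openshift-machine-config-operator:node-bootstrapper|node:)/ {print $1}' \
    | xargs --no-run-if-empty oc adm certificate approve
  sleep 30
done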
Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 14.2.16. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. 
Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m Configure the Operators that are not available. 14.2.16.1. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 14.2.16.1.1. Configuring registry storage for IBM Power As a cluster administrator, you must configure your registry to use storage after installation. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a cluster on IBM Power. You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Container Storage. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. The storage must have 100Gi capacity. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When using shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure.
Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.9 True False False 6h50m Ensure that your registry is set to managed to enable building and pushing of images. Run: USD oc edit configs.imageregistry/cluster Then, change the line managementState: Removed to managementState: Managed 14.2.16.1.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 14.2.17. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration. Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m Alternatively, the following command notifies you when the cluster is available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in.
Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from the Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. Additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Post-installation machine configuration tasks documentation for more information. 14.2.18. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.9, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 14.2.19. Next steps Enabling multipathing with kernel arguments on RHCOS . Customize your cluster . If necessary, you can opt out of remote health reporting . 14.3. Installing a cluster on IBM Power in a restricted network In OpenShift Container Platform version 4.9, you can install a cluster on IBM Power infrastructure that you provision in a restricted network. Important Additional considerations exist for non-bare metal platforms.
Review the information in the guidelines for deploying OpenShift Container Platform on non-tested platforms before you install an OpenShift Container Platform cluster. 14.3.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You created a mirror registry for installation in a restricted network and obtained the imageContentSources data for your version of OpenShift Container Platform. Before you begin the installation process, you must move or remove any existing installation files. This ensures that the required installation files are created and updated during the installation process. Important Ensure that installation steps are performed on a machine with access to the installation media. You provisioned persistent storage for your cluster. To deploy a private image registry, your storage must provide ReadWriteMany access modes. If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 14.3.2. About installations in restricted networks In OpenShift Container Platform 4.9, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift Container Platform registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. Important Because of the complexity of the configuration for user-provisioned installations, consider completing a standard user-provisioned infrastructure installation before you attempt a restricted network installation using user-provisioned infrastructure. Completing this test installation might make it easier to isolate and troubleshoot any issues that might arise during your installation in a restricted network. 14.3.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 14.3.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.9, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. 
During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 14.3.4. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 14.3.4.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 14.18. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 7.9, or RHEL 8.4. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 8 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 14.3.4.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 14.19. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage IOPS [2] Bootstrap RHCOS 2 16 GB 100 GB 300 Control plane RHCOS 2 16 GB 100 GB 300 Compute RHCOS 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and planned for removal in a future release of OpenShift Container Platform 4. 14.3.4.3. 
Minimum IBM Power requirements You can install OpenShift Container Platform version 4.9 on the following IBM hardware: IBM Power8, Power9, or Power10 processor-based systems Hardware requirements Six IBM Power bare metal servers or six LPARs across multiple PowerVM servers Operating system requirements One instance of an IBM Power8, Power9, or Power10 processor-based system On your IBM Power instance, set up: Three guest virtual machines for OpenShift Container Platform control plane machines Two guest virtual machines for OpenShift Container Platform compute machines One guest virtual machine for the temporary OpenShift Container Platform bootstrap machine Disk storage for the IBM Power guest virtual machines Storage provisioned by the Virtual I/O Server using vSCSI, NPIV (N-Port ID Virtualization) or SSP (shared storage pools) Network for the PowerVM guest virtual machines Virtualized by the Virtual I/O Server using Shared Ethernet Adapter Virtualized by the Virtual I/O Server using IBM vNIC Storage / main memory 100 GB / 16 GB for OpenShift Container Platform control plane machines 100 GB / 8 GB for OpenShift Container Platform compute machines 100 GB / 16 GB for the temporary OpenShift Container Platform bootstrap machine 14.3.4.4. Recommended IBM Power system requirements Hardware requirements Six IBM Power bare metal servers or six LPARs across multiple PowerVM servers Operating system requirements One instance of an IBM Power8, Power9, or Power10 processor-based system On your IBM Power instance, set up: Three guest virtual machines for OpenShift Container Platform control plane machines Two guest virtual machines for OpenShift Container Platform compute machines One guest virtual machine for the temporary OpenShift Container Platform bootstrap machine Disk storage for the IBM Power guest virtual machines Storage provisioned by the Virtual I/O Server using vSCSI, NPIV (N-Port ID Virtualization) or SSP (shared storage pools) Network for the PowerVM guest virtual machines Virtualized by the Virtual I/O Server using Shared Ethernet Adapter Virtualized by the Virtual I/O Server using IBM vNIC Storage / main memory 120 GB / 32 GB for OpenShift Container Platform control plane machines 120 GB / 32 GB for OpenShift Container Platform compute machines 120 GB / 16 GB for the temporary OpenShift Container Platform bootstrap machine 14.3.4.5. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 14.3.4.6. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. 
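For example, on a node that uses a single NIC, a static configuration could be supplied with kernel arguments similar to the following. This is an illustrative sketch only: the gateway address ( 192.168.1.1 ) and the interface name ( enp1s0 ) are assumptions, while the node IP address, hostname, and DNS server reuse the sample values from the DNS examples later in this section:

ip=192.168.1.97::192.168.1.1:255.255.255.0:master0.ocp4.example.com:enp1s0:none nameserver=192.168.1.5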
After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines. Note If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 14.3.4.6.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 14.3.4.6.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Table 14.20. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN and Geneve 6081 VXLAN and Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 14.21. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 14.22. 
Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports 14.3.4.6.3. NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines reads the information and can sync the clock with the NTP servers. Additional resources Configuring chrony time service 14.3.4.7. User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. Note It is recommended to use a DHCP server to provide the hostnames to each cluster node. See the DHCP recommendations for user-provisioned infrastructure section for more information. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 14.23. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster.
For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Control plane machines <master><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <worker><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 14.3.4.7.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 14.4. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; master0.ocp4.example.com. IN A 192.168.1.97 5 master1.ocp4.example.com. IN A 192.168.1.98 6 master2.ocp4.example.com. IN A 192.168.1.99 7 ; worker0.ocp4.example.com. IN A 192.168.1.11 8 worker1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 
8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 14.5. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR master0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR master1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR master2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com. 8 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. 14.3.4.8. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application ingress load balancing infrastructure. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the API routes. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Note Session persistence is not required for the API load balancer to function properly. Configure the following ports on both the front and back of the load balancers: Table 14.24. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. 
X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the ingress routes. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 14.25. Application ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic 1936 The worker nodes that run the Ingress Controller pods, by default. You must configure the /healthz/ready endpoint for the ingress health check probe. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Note A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. You must configure the Ingress router after the control plane initializes. 14.3.4.8.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Example 14.6. 
Sample API and application ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 frontend stats bind *:1936 mode http log global maxconn 10 stats enable stats hide-version stats refresh 30s stats show-node stats show-desc Stats for ocp4 cluster 1 stats auth admin:ocp4 stats uri /stats listen api-server-6443 2 bind *:6443 mode tcp server bootstrap bootstrap.ocp4.example.com:6443 check inter 1s backup 3 server master0 master0.ocp4.example.com:6443 check inter 1s server master1 master1.ocp4.example.com:6443 check inter 1s server master2 master2.ocp4.example.com:6443 check inter 1s listen machine-config-server-22623 4 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 5 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 6 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 7 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s 1 In the example, the cluster name is ocp4 . 2 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 3 5 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 4 Port 22623 handles the machine config server traffic and points to the control plane machines. 6 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 7 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . 14.3.5. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. 
This includes configuring IP networking and network connectivity for your cluster nodes, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure If you are using DHCP to provide the IP networking configuration to your cluster nodes, configure your DHCP service. Add persistent IP addresses for the nodes to your DHCP server configuration. In your configuration, match the MAC address of the relevant network interface to the intended IP address for each node. When you use DHCP to configure IP addressing for the cluster machines, the machines also obtain the DNS server information through DHCP. Define the persistent DNS server address that is used by the cluster nodes through your DHCP server configuration. Note If you are not using a DHCP service, you must provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. Define the hostnames of your cluster nodes in your DHCP server configuration. See the Setting the cluster node hostnames through DHCP section for details about hostname considerations. Note If you are not using a DHCP service, the cluster nodes obtain their hostname through a reverse DNS lookup. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See the Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Set up the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure.
See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. 14.3.6. Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure. Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 0 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com. 0 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: USD dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 0 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console: USD dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 0 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name. Check that the result points to the IP address of the bootstrap node: USD dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain> Example output bootstrap.ocp4.example.com. 0 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 0 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 
0 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output 96.1.168.192.in-addr.arpa. 0 IN PTR bootstrap.ocp4.example.com. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. 14.3.7. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.
If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 . Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 14.3.8. Manually creating the installation configuration file For user-provisioned installations of OpenShift Container Platform, you manually generate your installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Note For some platform types, you can alternatively run ./openshift-install create install-config --dir <installation_directory> to generate an install-config.yaml file. You can provide details about your cluster configuration at the prompts. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now. 14.3.8.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide a customized install-config.yaml installation configuration file that describes the details for your environment. Note After installation, you cannot modify these parameters in the install-config.yaml file. Important The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct. 14.3.8.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 14.26. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components.
The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: aws , baremetal , azure , gcp , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 14.3.8.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 14.27. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. If you specify multiple IP kernel arguments, the machineNetwork.cidr value must be the CIDR of the primary network. An array of objects. 
For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in.
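As a consolidated illustration, a networking stanza that combines these parameters in an install-config.yaml file might resemble the following sketch; the values shown simply restate the documented defaults and are not recommendations for your environment: networking: networkType: OpenShiftSDN clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: - 172.30.0.0/16 machineNetwork: - cidr: 10.0.0.0/16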
14.3.8.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 14.28. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are ppc64le (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are ppc64le (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. sshKey The SSH key or keys to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. One or more keys. For example: sshKey: ssh-ed25519 AAAA... 14.3.8.2. Sample install-config.yaml file for IBM Power You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: ppc64le controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: ppc64le metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OpenShiftSDN serviceNetwork: 11 - 172.30.0.0/16 platform: none: {} 12 fips: false 13 pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 14 sshKey: 'ssh-ed25519 AAAA...' 15 additionalTrustBundle: | 16 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 17 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings.
To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Specifies whether to enable or disable simultaneous multithreading (SMT), or hyperthreading. By default, SMT is enabled to increase the performance of the cores in your machines. You can disable it by setting the parameter value to Disabled . If you disable SMT, you must disable it in all cluster machines; this includes both control plane and compute machines. Note Simultaneous multithreading (SMT) is enabled by default. If SMT is not enabled in your BIOS settings, the hyperthreading parameter has no effect. Important If you disable hyperthreading , whether in the BIOS or in the install-config.yaml file, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned infrastructure. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. In user-provisioned installations, you must manually deploy the compute machines before you finish installing the cluster. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 7 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note The Class E CIDR range is reserved for future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 10 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 11 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 12 You must set the platform to none . You cannot provide additional platform configuration variables for IBM Power infrastructure. Important Clusters that are installed with the platform type none are unable to use some features, such as managing compute machines with the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that would normally support the feature. This parameter cannot be changed after installation. 13 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled.
If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. 14 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. 15 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 16 Provide the contents of the certificate file that you used for your mirror registry. 17 Provide the imageContentSources section from the output of the command to mirror the repository. 14.3.8.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. 
The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 14.3.8.4. Configuring a three-node cluster You can optionally deploy zero compute machines in a bare metal cluster that consists of three control plane machines only. This provides smaller, more resource-efficient clusters for cluster administrators and developers to use for testing, development, and production. In three-node OpenShift Container Platform environments, the three control plane machines are schedulable, which means that your application workloads are scheduled to run on them. Prerequisites You have an existing install-config.yaml file. Procedure Ensure that the number of compute replicas is set to 0 in your install-config.yaml file, as shown in the following compute stanza: compute: - name: worker platform: {} replicas: 0 Note You must set the value of the replicas parameter for the compute machines to 0 when you install OpenShift Container Platform on user-provisioned infrastructure, regardless of the number of compute machines you are deploying. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. This does not apply to user-provisioned installations, where the compute machines are deployed manually. For three-node cluster installations, follow these steps: If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. See the Load balancing requirements for user-provisioned infrastructure section for more information. When you create the Kubernetes manifest files in the following procedure, ensure that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml file is set to true . This enables your application workloads to run on the control plane nodes. Do not deploy any compute nodes when you create the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 14.3.9. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group and these fields cannot be changed: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services.
defaultNetwork.type Cluster network provider, such as OpenShift SDN or OVN-Kubernetes. You can specify the cluster network provider configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 14.3.9.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 14.29. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes Container Network Interface (CNI) network providers support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the Container Network Interface (CNI) cluster network provider for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network provider, the kube-proxy configuration has no effect. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 14.30. defaultNetwork object Field Type Description type string Either OpenShiftSDN or OVNKubernetes . The cluster network provider is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OpenShift SDN Container Network Interface (CNI) cluster network provider by default. openshiftSDNConfig object This object is only valid for the OpenShift SDN cluster network provider. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes cluster network provider. Configuration for the OpenShift SDN CNI cluster network provider The following table describes the configuration fields for the OpenShift SDN Container Network Interface (CNI) cluster network provider. Table 14.31. openshiftSDNConfig object Field Type Description mode string Configures the network isolation mode for OpenShift SDN. The default value is NetworkPolicy . The values Multitenant and Subnet are available for backwards compatibility with OpenShift Container Platform 3.x but are not recommended. This value cannot be changed after cluster installation. mtu integer The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. 
If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1450 . This value cannot be changed after cluster installation. vxlanPort integer The port to use for all VXLAN packets. The default value is 4789 . This value cannot be changed after cluster installation. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number. On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999 . Example OpenShift SDN configuration defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789 Configuration for the OVN-Kubernetes CNI cluster network provider The following table describes the configuration fields for the OVN-Kubernetes CNI cluster network provider. Table 14.32. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . This value cannot be changed after cluster installation. genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify an empty object to enable IPsec encryption. This value cannot be changed after cluster installation. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the default audit log settings are used. Table 14.33. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 .
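For example, a defaultNetwork stanza that sets the policyAuditConfig fields from the preceding table might resemble the following sketch; the values shown simply restate the documented defaults and are illustrative only: defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: policyAuditConfig: rateLimit: 20 maxFileSize: 50000000 destination: libc syslogFacility: local0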
Example OVN-Kubernetes configuration defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {} kubeProxyConfig object configuration The values for the kubeProxyConfig object are defined in the following table: Table 14.34. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 14.3.10. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Note The installation program that generates the manifest and Ignition files is architecture specific and can be obtained from the client image mirror . The Linux version of the installation program runs on ppc64le only. This installer program is also available as a Mac OS version. Prerequisites You obtained the OpenShift Container Platform installation program. For a restricted network installation, these files are on your mirror host. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. 
This is because control plane nodes then become worker nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory. 14.3.11. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on IBM Power infrastructure that you provision, you must install Red Hat Enterprise Linux CoreOS (RHCOS) on the machines. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS machines have rebooted. Follow either the steps to use an ISO image or network PXE booting to install RHCOS on the machines. 14.3.11.1. Installing RHCOS by using an ISO image You can use an ISO image to install RHCOS on the machines. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have an HTTP server that can be accessed from your computer, and from the machines that you create. You have reviewed the Advanced RHCOS installation configuration section for different ways to configure features, such as networking and disk partitioning. Procedure Obtain the SHA512 digest for each of your Ignition config files. For example, you can use the following on a system running Linux to get the SHA512 digest for your bootstrap.ign Ignition config file: USD sha512sum <installation_directory>/bootstrap.ign The digests are provided to the coreos-installer in a later step to validate the authenticity of the Ignition config files on the cluster nodes. Upload the bootstrap, control plane, and compute node Ignition config files that the installation program created to your HTTP server. Note the URLs of these files. Important You can add or change configuration settings in your Ignition configs before saving them to your HTTP server. If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. From the installation host, validate that the Ignition config files are available on the URLs. The following example gets the Ignition config file for the bootstrap node: USD curl -k http://<HTTP_server>/bootstrap.ign 1 Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{"ignition":{"version":"3.2.0"},"passwd":{"users":[{"name":"core","sshAuthorizedKeys":["ssh-rsa... Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the Ignition config files for the control plane and compute nodes are also available.
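To check all three files in one pass, you can use a short loop like the following sketch; it assumes the curl utility on the installation host and the same <HTTP_server> placeholder used above, and prints the HTTP status code for each file, which should be 200 : USD for ignition in bootstrap master worker; do curl -k -s -o /dev/null -w "%{http_code} USD{ignition}.ign\n" http://<HTTP_server>/USD{ignition}.ign; done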
Although it is possible to obtain the RHCOS images that are required for your preferred method of installing operating system instances from the RHCOS image mirror page, the recommended way to obtain the correct version of your RHCOS images is from the output of the openshift-install command: USD openshift-install coreos print-stream-json | grep '\.iso[^.]' Example output "location": "<url>/art/storage/releases/rhcos-4.9-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso", "location": "<url>/art/storage/releases/rhcos-4.9-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso", "location": "<url>/art/storage/releases/rhcos-4.9-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso", "location": "<url>/art/storage/releases/rhcos-4.9/<release>/x86_64/rhcos-<release>-live.x86_64.iso", Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Use only ISO images for this procedure. RHCOS qcow2 images are not supported for this installation type. ISO file names resemble the following example: rhcos-<version>-live.<architecture>.iso Use the ISO to start the RHCOS installation. Use one of the following installation options: Burn the ISO image to a disk and boot it directly. Use ISO redirection by using a lights-out management (LOM) interface. Boot the RHCOS ISO image without specifying any options or interrupting the live boot sequence. Wait for the installer to boot into a shell prompt in the RHCOS live environment. Note It is possible to interrupt the RHCOS installation boot process to add kernel arguments. However, for this ISO procedure you should use the coreos-installer command as outlined in the following steps, instead of adding kernel arguments. Run the coreos-installer command and specify the options that meet your installation requirements. At a minimum, you must specify the URL that points to the Ignition config file for the node type, and the device that you are installing to: USD sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2 1 You must run the coreos-installer command by using sudo , because the core user does not have the required root privileges to perform the installation. 2 The --ignition-hash option is required when the Ignition config file is obtained through an HTTP URL to validate the authenticity of the Ignition config file on the cluster node. <digest> is the Ignition config file SHA512 digest obtained in a preceding step. Note If you want to provide your Ignition config files through an HTTPS server that uses TLS, you can add the internal certificate authority (CA) to the system trust store before running coreos-installer . The following example initializes a bootstrap node installation to the /dev/sda device. The Ignition config file for the bootstrap node is obtained from an HTTP web server with the IP address 192.168.1.2: USD sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b
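If you serve the Ignition config files over HTTPS as mentioned in the preceding note, one possible way to add an internal CA on the RHCOS live system is to place the certificate in the trust anchors directory and refresh the trust store. This is a sketch only; the /tmp/internal-ca.crt file name is an assumption, and the certificate must already have been copied to the live environment: USD sudo cp /tmp/internal-ca.crt /etc/pki/ca-trust/source/anchors/ USD sudo update-ca-trust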
Monitor the progress of the RHCOS installation on the console of the machine. Important Be sure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. After RHCOS installs, you must reboot the system. During the system reboot, it applies the Ignition config file that you specified. Continue to create the other machines for your cluster. Important You must create the bootstrap and control plane machines at this time. If the control plane machines are not made schedulable, also create at least two compute machines before you install OpenShift Container Platform. If the required network, DNS, and load balancer infrastructure are in place, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS nodes have rebooted. Note RHCOS nodes do not include a default password for the core user. You can access the nodes by running ssh core@<node>.<cluster_name>.<base_domain> as a user with access to the SSH private key that is paired to the public key that you specified in your install-config.yaml file. OpenShift Container Platform 4 cluster nodes running RHCOS are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, when investigating installation issues, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on a target node, SSH access might be required for debugging or disaster recovery. 14.3.11.1.1. Advanced RHCOS installation reference This section illustrates the networking configuration and other advanced options that allow you to modify the Red Hat Enterprise Linux CoreOS (RHCOS) manual installation process. The following tables describe the kernel arguments and command-line options you can use with the RHCOS live installer and the coreos-installer command. 14.3.11.1.1.1. Networking and bonding options for ISO installations If you install RHCOS from an ISO image, you can add kernel arguments manually when you boot the image to configure networking for a node. If no networking arguments are specified, DHCP is activated in the initramfs when RHCOS detects that networking is required to fetch the Ignition config file. Important When adding networking arguments manually, you must also add the rd.neednet=1 kernel argument to bring the network up in the initramfs. The following table provides examples for configuring networking and bonding on your RHCOS nodes for ISO installations. The examples describe how to use the ip= , nameserver= , and bond= kernel arguments. Note Ordering is important when adding the kernel arguments: ip= , nameserver= , and then bond= . The networking options are passed to the dracut tool during system boot. For more information about the networking options supported by dracut , see the dracut.cmdline manual page.
The following examples are the networking options for ISO installation. Configuring DHCP or static IP addresses To configure an IP address, either use DHCP ( ip=dhcp ) or set an individual static IP address ( ip=<host_ip> ). If setting a static IP, you must then identify the DNS server IP address ( nameserver=<dns_ip> ) on each node. The following example sets: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The hostname to core0.example.com The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41 Note When you use DHCP to configure IP addressing for the RHCOS machines, the machines also obtain the DNS server information through DHCP. For DHCP-based deployments, you can define the DNS server address that is used by the RHCOS nodes through your DHCP server configuration. Configuring an IP address without a static hostname You can configure an IP address without assigning a static hostname. If a static hostname is not set by the user, it will be picked up and automatically set by a reverse DNS lookup. To configure an IP address without a static hostname, refer to the following example: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41 Specifying multiple network interfaces You can specify multiple network interfaces by setting multiple ip= entries. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring default gateway and route Optional: You can configure routes to additional networks by setting an rd.route= value. Note When you configure one or multiple networks, one default gateway is required. If the additional network gateway is different from the primary network gateway, the default gateway must be the primary network gateway. Run the following command to configure the default gateway: ip=::10.10.10.254:::: Enter the following command to configure the route for the additional network: rd.route=20.20.20.0/24:20.20.20.254:enp2s0 Disabling DHCP on a single interface You can disable DHCP on a single interface, such as when there are two or more network interfaces and only one interface is being used.
In the example, the enp1s0 interface has a static networking configuration and DHCP is disabled for enp2s0 , which is not used: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none Combining DHCP and static IP configurations You can combine DHCP and static IP configurations on systems with multiple network interfaces, for example: ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring VLANs on individual interfaces Optional: You can configure VLANs on individual interfaces by using the vlan= parameter. To configure a VLAN on a network interface and use a static IP address, run the following command: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0 To configure a VLAN on a network interface and to use DHCP, run the following command: ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0 Providing multiple DNS servers You can provide multiple DNS servers by adding a nameserver= entry for each server, for example: nameserver=1.1.1.1 nameserver=8.8.8.8 Bonding multiple network interfaces to a single interface Optional: You can bond multiple network interfaces to a single interface by using the bond= option. Refer to the following examples: The syntax for configuring a bonded interface is: bond=name[:network_interfaces][:options] name is the bonding device name ( bond0 ), network_interfaces represents a comma-separated list of physical (ethernet) interfaces ( em1,em2 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . For example: bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none Configuring VLANs on bonded interfaces Optional: You can configure VLANs on bonded interfaces by using the vlan= parameter. To configure the bonded interface with a VLAN and to use DHCP, for example: ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0 Use the following example to configure the bonded interface with a VLAN and to use a static IP address: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0 Using network teaming Optional: You can use network teaming as an alternative to bonding by using the team= parameter: The syntax for configuring a team interface is: team=name[:network_interfaces] name is the team device name ( team0 ) and network_interfaces represents a comma-separated list of physical (ethernet) interfaces ( em1, em2 ). Teaming is planned to be deprecated when RHCOS switches to an upcoming version of RHEL. For more information, see this Red Hat Knowledgebase Article . Use the following example to configure a network team: team=team0:em1,em2 ip=team0:dhcp
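Putting these pieces together, a complete set of networking arguments for a node that uses a static address on an active-backup bond might resemble the following sketch; the addresses and interface names are reused from the earlier examples and must be adapted to your environment, and rd.neednet=1 is included as required for manually specified networking arguments: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none nameserver=4.4.4.41 bond=bond0:em1,em2:mode=active-backup rd.neednet=1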
14.3.11.2. Installing RHCOS by using PXE booting You can use PXE booting to install RHCOS on the machines. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have configured suitable PXE infrastructure. You have an HTTP server that can be accessed from your computer, and from the machines that you create. You have reviewed the Advanced RHCOS installation configuration section for different ways to configure features, such as networking and disk partitioning. Procedure Upload the bootstrap, control plane, and compute node Ignition config files that the installation program created to your HTTP server. Note the URLs of these files. Important You can add or change configuration settings in your Ignition configs before saving them to your HTTP server. If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. From the installation host, validate that the Ignition config files are available on the URLs. The following example gets the Ignition config file for the bootstrap node: USD curl -k http://<HTTP_server>/bootstrap.ign 1 Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{"ignition":{"version":"3.2.0"},"passwd":{"users":[{"name":"core","sshAuthorizedKeys":["ssh-rsa... Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the Ignition config files for the control plane and compute nodes are also available. Although it is possible to obtain the RHCOS kernel , initramfs and rootfs files that are required for your preferred method of installing operating system instances from the RHCOS image mirror page, the recommended way to obtain the correct version of your RHCOS files is from the output of the openshift-install command: USD openshift-install coreos print-stream-json | grep -Eo '"https.*(kernel-|initramfs.|rootfs.)\w+(\.img)?"' Example output "<url>/art/storage/releases/rhcos-4.9-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64" "<url>/art/storage/releases/rhcos-4.9-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img" "<url>/art/storage/releases/rhcos-4.9-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img" "<url>/art/storage/releases/rhcos-4.9-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le" "<url>/art/storage/releases/rhcos-4.9-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img" "<url>/art/storage/releases/rhcos-4.9-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img" "<url>/art/storage/releases/rhcos-4.9-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x" "<url>/art/storage/releases/rhcos-4.9-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img" "<url>/art/storage/releases/rhcos-4.9-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img" "<url>/art/storage/releases/rhcos-4.9/<release>/x86_64/rhcos-<release>-live-kernel-x86_64" "<url>/art/storage/releases/rhcos-4.9/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img" "<url>/art/storage/releases/rhcos-4.9/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img" Important The RHCOS artifacts might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate kernel , initramfs , and rootfs artifacts described below for this procedure. RHCOS QCOW2 images are not supported for this installation type. The file names contain the OpenShift Container Platform version number.
They resemble the following examples: kernel : rhcos-<version>-live-kernel-<architecture> initramfs : rhcos-<version>-live-initramfs.<architecture>.img rootfs : rhcos-<version>-live-rootfs.<architecture>.img Upload the rootfs , kernel , and initramfs files to your HTTP server. Important If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. Configure the network boot infrastructure so that the machines boot from their local disks after RHCOS is installed on them. Configure PXE installation for the RHCOS images and begin the installation. Modify the following example menu entry for your environment and verify that the image and Ignition files are properly accessible: DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3 1 Specify the location of the live kernel file that you uploaded to your HTTP server. The URL must be HTTP, TFTP, or FTP; HTTPS and NFS are not supported. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the locations of the RHCOS files that you uploaded to your HTTP server. The initrd parameter value is the location of the initramfs file, the coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file. You can also add more kernel arguments to the APPEND line to configure networking or other boot options. Note This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the APPEND line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? . Monitor the progress of the RHCOS installation on the console of the machine. Important Be sure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. After RHCOS installs, the system reboots. During reboot, the system applies the Ignition config file that you specified. Continue to create the machines for your cluster. Important You must create the bootstrap and control plane machines at this time. If the control plane machines are not made schedulable, also create at least two compute machines before you install the cluster. If the required network, DNS, and load balancer infrastructure are in place, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS nodes have rebooted. Note RHCOS nodes do not include a default password for the core user. You can access the nodes by running ssh core@<node>.<cluster_name>.<base_domain> as a user with access to the SSH private key that is paired to the public key that you specified in your install-config.yaml file. OpenShift Container Platform 4 cluster nodes running RHCOS are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, when investigating installation issues, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on a target node, SSH access might be required for debugging or disaster recovery.
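If a machine fails to boot from PXE, one quick check from the installation host is whether the artifacts referenced in the menu entry are reachable over HTTP; the following sketch uses curl to fetch only the response headers: USD curl -I http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> A 200 OK response indicates that the file is being served; repeat the check for the initramfs , rootfs , and Ignition config URLs.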
14.3.11.3. Enabling multipathing with kernel arguments on RHCOS In OpenShift Container Platform 4.9 or later, during installation, you can enable multipathing for provisioned nodes. RHCOS supports multipathing on the primary disk. Multipathing provides stronger resilience to hardware failure, which can achieve higher host availability. During the initial cluster creation, you might want to add kernel arguments to all master or worker nodes. To add kernel arguments to master or worker nodes, you can create a MachineConfig object and inject that object into the set of manifest files used by Ignition during cluster setup. Procedure Change to the directory that contains the installation program and generate the Kubernetes manifests for the cluster: $ ./openshift-install create manifests --dir <installation_directory> Decide if you want to add kernel arguments to worker or control plane nodes. Create a machine config file. For example, create a 99-master-kargs-mpath.yaml that instructs the cluster to add the master label and identify the multipath kernel argument:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: "master"
  name: 99-master-kargs-mpath
spec:
  kernelArguments:
    - 'rd.multipath=default'
    - 'root=/dev/disk/by-label/dm-mpath-root'
To enable multipathing on worker nodes: Create a machine config file. For example, create a 99-worker-kargs-mpath.yaml that instructs the cluster to add the worker label and identify the multipath kernel argument:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: "worker"
  name: 99-worker-kargs-mpath
spec:
  kernelArguments:
    - 'rd.multipath=default'
    - 'root=/dev/disk/by-label/dm-mpath-root'
You can now continue to create the cluster. Important Additional post-installation steps are required to fully enable multipathing. For more information, see "Enabling multipathing with kernel arguments on RHCOS" in Post-installation machine configuration tasks. In case of MPIO failure, use the bootlist command to update the boot device list with alternate logical device names. The command displays a boot list that designates the possible boot devices for when the system is booted in normal mode. To display the current boot list for normal mode, enter the following command: $ bootlist -m normal -o sda To update the boot list for normal mode and add alternate device names, enter the following command: $ bootlist -m normal -o /dev/sdc /dev/sdd /dev/sde Example output sdc sdd sde If the original boot disk path is down, the node reboots from the alternate device registered in the normal boot device list.
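After the cluster is up, you can confirm that the multipath kernel arguments were applied to a node. A minimal sketch, assuming a working kubeconfig; <node_name> is a placeholder for one of your node names:
$ oc debug node/<node_name> -- chroot /host cat /proc/cmdline
The output should include rd.multipath=default and root=/dev/disk/by-label/dm-mpath-root.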
14.3.12. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS, and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Procedure Monitor the bootstrap process: $ ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory>, specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn, debug, or error instead of info. Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.22.1 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer, as shown in the sketch that follows. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself.
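How you remove the bootstrap machine depends on your load balancer. If you use an HAProxy configuration like the example earlier in this chapter, one approach is to delete or comment out the bootstrap server lines and reload the service. This is a sketch only; the file path, listener names, and host names are the assumptions from that example:
# On the load balancer host, edit /etc/haproxy/haproxy.cfg and remove the
# bootstrap entries from the api-server-6443 and machine-config-server-22623
# listeners, for example:
#   server bootstrap bootstrap.ocp4.example.com:6443 check inter 1s backup
#   server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup
# Then validate the configuration and reload HAProxy:
$ sudo haproxy -c -f /etc/haproxy/haproxy.cfg
$ sudo systemctl reload haproxy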
14.3.13. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: $ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory>, specify the path to the directory that you stored the installation files in. Verify that you can run oc commands successfully by using the exported configuration: $ oc whoami Example output system:admin 14.3.14. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: $ oc get nodes Example output
NAME       STATUS   ROLES    AGE   VERSION
master-0   Ready    master   63m   v1.22.1
master-1   Ready    master   63m   v1.22.1
master-2   Ready    master   64m   v1.22.1
The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: $ oc get csr Example output
NAME        AGE   REQUESTOR                                                                   CONDITION
csr-8b2br   15m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
csr-8vnps   15m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
...
In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec, oc rsh, and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. A minimal sketch of such a loop follows at the end of this section. To approve them individually, run the following command for each valid CSR: $ oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: $ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: $ oc get csr Example output
NAME        AGE     REQUESTOR                                                CONDITION
csr-bfd72   5m26s   system:node:ip-10-0-50-126.us-east-2.compute.internal   Pending
csr-c57lv   5m26s   system:node:ip-10-0-95-157.us-east-2.compute.internal   Pending
...
If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: $ oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: $ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: $ oc get nodes Example output
NAME       STATUS   ROLES    AGE   VERSION
master-0   Ready    master   73m   v1.22.1
master-1   Ready    master   73m   v1.22.1
master-2   Ready    master   74m   v1.22.1
worker-0   Ready    worker   11m   v1.22.1
worker-1   Ready    worker   11m   v1.22.1
Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests.
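The documentation does not prescribe a specific auto-approval mechanism for the kubelet serving CSRs. The following Bash loop is a deliberately naive sketch, not a production implementation: it approves every pending CSR without confirming the requestor or the node identity, both of which the note above requires of a real method:
# Naive watch loop: every 60 seconds, approve any CSR that has no status yet.
# A real implementation must also verify who submitted the CSR and which
# node it is for before approving.
while true; do
  oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
    | xargs --no-run-if-empty oc adm certificate approve
  sleep 60
done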
14.3.15. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. Procedure Watch the cluster components come online: $ watch -n5 oc get clusteroperators Example output
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.9.0     True        False         False      19m
baremetal                                  4.9.0     True        False         False      37m
cloud-credential                           4.9.0     True        False         False      40m
cluster-autoscaler                         4.9.0     True        False         False      37m
config-operator                            4.9.0     True        False         False      38m
console                                    4.9.0     True        False         False      26m
csi-snapshot-controller                    4.9.0     True        False         False      37m
dns                                        4.9.0     True        False         False      37m
etcd                                       4.9.0     True        False         False      36m
image-registry                             4.9.0     True        False         False      31m
ingress                                    4.9.0     True        False         False      30m
insights                                   4.9.0     True        False         False      31m
kube-apiserver                             4.9.0     True        False         False      26m
kube-controller-manager                    4.9.0     True        False         False      36m
kube-scheduler                             4.9.0     True        False         False      36m
kube-storage-version-migrator              4.9.0     True        False         False      37m
machine-api                                4.9.0     True        False         False      29m
machine-approver                           4.9.0     True        False         False      37m
machine-config                             4.9.0     True        False         False      36m
marketplace                                4.9.0     True        False         False      37m
monitoring                                 4.9.0     True        False         False      29m
network                                    4.9.0     True        False         False      38m
node-tuning                                4.9.0     True        False         False      37m
openshift-apiserver                        4.9.0     True        False         False      32m
openshift-controller-manager               4.9.0     True        False         False      30m
openshift-samples                          4.9.0     True        False         False      32m
operator-lifecycle-manager                 4.9.0     True        False         False      37m
operator-lifecycle-manager-catalog         4.9.0     True        False         False      37m
operator-lifecycle-manager-packageserver   4.9.0     True        False         False      32m
service-ca                                 4.9.0     True        False         False      38m
storage                                    4.9.0     True        False         False      37m
Configure the Operators that are not available. 14.3.15.1. Disabling the default OperatorHub sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: $ oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration → Cluster Settings → Configuration → OperatorHub page, click the Sources tab, where you can create, delete, disable, and enable individual sources. 14.3.15.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 14.3.15.2.1. Changing the image registry's management state To start the image registry, you must change the Image Registry Operator configuration's managementState from Removed to Managed. Procedure Change the managementState of the Image Registry Operator configuration from Removed to Managed. For example: $ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Managed"}}'
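To confirm that the patch took effect, you can read the field back with a JSONPath query. A small sketch; cluster is the resource name used throughout this section, and the expected output is Managed:
$ oc get configs.imageregistry.operator.openshift.io cluster -o jsonpath='{.spec.managementState}{"\n"}'
Managed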
14.3.15.2.2. Configuring registry storage for IBM Power As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a cluster on IBM Power. You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Container Storage. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. The storage must have 100Gi capacity. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When using shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: $ oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: $ oc edit configs.imageregistry.operator.openshift.io Example output
storage:
  pvc:
    claim:
Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: $ oc get clusteroperator image-registry Example output
NAME             VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
image-registry   4.9       True        False         False      6h50m
Ensure that your registry is set to managed to enable building and pushing of images. Run: $ oc edit configs.imageregistry/cluster Then, change the line managementState: Removed to managementState: Managed 14.3.15.2.3. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: $ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again.
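Whichever storage you configure, a quick way to confirm that the registry came up afterward is to check the claim (the automatically created PVC is named image-registry-storage, per the note above) and the registry pods. A minimal sketch:
$ oc get pvc -n openshift-image-registry
$ oc get pods -n openshift-image-registry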
14.3.16. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration. Procedure Confirm that all the cluster components are online with the following command: $ watch -n5 oc get clusteroperators Example output
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.9.0     True        False         False      19m
baremetal                                  4.9.0     True        False         False      37m
cloud-credential                           4.9.0     True        False         False      40m
cluster-autoscaler                         4.9.0     True        False         False      37m
config-operator                            4.9.0     True        False         False      38m
console                                    4.9.0     True        False         False      26m
csi-snapshot-controller                    4.9.0     True        False         False      37m
dns                                        4.9.0     True        False         False      37m
etcd                                       4.9.0     True        False         False      36m
image-registry                             4.9.0     True        False         False      31m
ingress                                    4.9.0     True        False         False      30m
insights                                   4.9.0     True        False         False      31m
kube-apiserver                             4.9.0     True        False         False      26m
kube-controller-manager                    4.9.0     True        False         False      36m
kube-scheduler                             4.9.0     True        False         False      36m
kube-storage-version-migrator              4.9.0     True        False         False      37m
machine-api                                4.9.0     True        False         False      29m
machine-approver                           4.9.0     True        False         False      37m
machine-config                             4.9.0     True        False         False      36m
marketplace                                4.9.0     True        False         False      37m
monitoring                                 4.9.0     True        False         False      29m
network                                    4.9.0     True        False         False      38m
node-tuning                                4.9.0     True        False         False      37m
openshift-apiserver                        4.9.0     True        False         False      32m
openshift-controller-manager               4.9.0     True        False         False      30m
openshift-samples                          4.9.0     True        False         False      32m
operator-lifecycle-manager                 4.9.0     True        False         False      37m
operator-lifecycle-manager-catalog         4.9.0     True        False         False      37m
operator-lifecycle-manager-packageserver   4.9.0     True        False         False      32m
service-ca                                 4.9.0     True        False         False      38m
storage                                    4.9.0     True        False         False      37m
Alternatively, the following command notifies you when the cluster is available. It also retrieves and displays credentials: $ ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory>, specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from the Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
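At this point you can also log in to the web console as the kubeadmin user. A short sketch, assuming the installation directory layout shown earlier in this chapter (the auth/kubeadmin-password file) and the example cluster domain from the DNS validation steps:
# The kubeadmin password is generated during installation:
$ cat <installation_directory>/auth/kubeadmin-password
# The console route follows the wildcard apps domain, for example:
#   https://console-openshift-console.apps.ocp4.example.com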
Confirm that the Kubernetes API server is communicating with the pods. To view a list of all pods, use the following command: $ oc get pods --all-namespaces Example output
NAMESPACE                           NAME                                            READY   STATUS    RESTARTS   AGE
openshift-apiserver-operator        openshift-apiserver-operator-85cb746d55-zqhs8   1/1     Running   1          9m
openshift-apiserver                 apiserver-67b9g                                 1/1     Running   0          3m
openshift-apiserver                 apiserver-ljcmx                                 1/1     Running   0          1m
openshift-apiserver                 apiserver-z25h4                                 1/1     Running   0          2m
openshift-authentication-operator   authentication-operator-69d5d8bf84-vh2n8        1/1     Running   0          5m
...
View the logs for a pod that is listed in the output of the preceding command by using the following command: $ oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the preceding command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. Additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Post-installation machine configuration tasks documentation for more information. Register your cluster on the Cluster registration page. 14.3.17. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.9, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager. After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 14.3.18. Next steps Enabling multipathing with kernel arguments on RHCOS. Customize your cluster. If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores.
[ "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; master0.ocp4.example.com. IN A 192.168.1.97 5 master1.ocp4.example.com. IN A 192.168.1.98 6 master2.ocp4.example.com. IN A 192.168.1.99 7 ; worker0.ocp4.example.com. IN A 192.168.1.11 8 worker1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR master0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR master1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR master2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com. 8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 frontend stats bind *:1936 mode http log global maxconn 10 stats enable stats hide-version stats refresh 30s stats show-node stats show-desc Stats for ocp4 cluster 1 stats auth admin:ocp4 stats uri /stats listen api-server-6443 2 bind *:6443 mode tcp server bootstrap bootstrap.ocp4.example.com:6443 check inter 1s backup 3 server master0 master0.ocp4.example.com:6443 check inter 1s server master1 master1.ocp4.example.com:6443 check inter 1s server master2 master2.ocp4.example.com:6443 check inter 1s listen machine-config-server-22623 4 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 5 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 6 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 7 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 
0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 0 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 0 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 0 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 0 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "mkdir <installation_directory>", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture : ppc64le controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture : ppc64le metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OpenShiftSDN serviceNetwork: 11 - 172.30.0.0/16 platform: none: {} 12 fips: false 13 pullSecret: '{\"auths\": ...}' 14 sshKey: 'ssh-ed25519 AAAA...' 15", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "compute: - name: worker platform: {} replicas: 0", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create manifests --dir <installation_directory> 1", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "sha512sum <installation_directory>/bootstrap.ign", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep '\\.iso[^.]'", "\"location\": \"<url>/art/storage/releases/rhcos-4.9-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.9-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.9-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.9/<release>/x86_64/rhcos-<release>-live.x86_64.iso\",", "sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2", "sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=::10.10.10.254::::", "rd.route=20.20.20.0/24:20.20.20.254:enp2s0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none", "ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0", "ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0", "nameserver=1.1.1.1 nameserver=8.8.8.8", "bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp", "bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0", "team=team0:em1,em2 ip=team0:dhcp", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep -Eo '\"https.*(kernel-|initramfs.|rootfs.)\\w+(\\.img)?\"'", "\"<url>/art/storage/releases/rhcos-4.9-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64\" \"<url>/art/storage/releases/rhcos-4.9-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.9-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.9-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le\" 
\"<url>/art/storage/releases/rhcos-4.9-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.9-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.9-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x\" \"<url>/art/storage/releases/rhcos-4.9-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.9-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.9/<release>/x86_64/rhcos-<release>-live-kernel-x86_64\" \"<url>/art/storage/releases/rhcos-4.9/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img\" \"<url>/art/storage/releases/rhcos-4.9/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img\"", "DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3", "./openshift-install create manifests --dir <installation_directory>", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"master\" name: 99-master-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root'", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"worker\" name: 99-worker-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root'", "bootlist -m normal -o sda", "bootlist -m normal -o /dev/sdc /dev/sdd /dev/sde sdc sdd sde", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.22.1 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1", "watch -n5 oc get clusteroperators", "NAME 
VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resourses found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim:", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.9 True False False 6h50m", "oc edit configs.imageregistry/cluster", "managementState: Removed", "managementState: Managed", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m", "./openshift-install --dir 
<installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; master0.ocp4.example.com. IN A 192.168.1.97 5 master1.ocp4.example.com. IN A 192.168.1.98 6 master2.ocp4.example.com. IN A 192.168.1.99 7 ; worker0.ocp4.example.com. IN A 192.168.1.11 8 worker1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR master0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR master1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR master2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com. 
8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 frontend stats bind *:1936 mode http log global maxconn 10 stats enable stats hide-version stats refresh 30s stats show-node stats show-desc Stats for ocp4 cluster 1 stats auth admin:ocp4 stats uri /stats listen api-server-6443 2 bind *:6443 mode tcp server bootstrap bootstrap.ocp4.example.com:6443 check inter 1s backup 3 server master0 master0.ocp4.example.com:6443 check inter 1s server master1 master1.ocp4.example.com:6443 check inter 1s server master2 master2.ocp4.example.com:6443 check inter 1s listen machine-config-server-22623 4 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 5 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 6 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 7 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 0 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 0 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 0 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 
0 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "mkdir <installation_directory>", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture : ppc64le controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture : ppc64le metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OpenShiftSDN serviceNetwork: 11 - 172.30.0.0/16 platform: none: {} 12 fips: false 13 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 14 sshKey: 'ssh-ed25519 AAAA...' 15 additionalTrustBundle: | 16 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 17 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "compute: - name: worker platform: {} replicas: 0", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create manifests --dir <installation_directory> 1", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "sha512sum <installation_directory>/bootstrap.ign", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep '\\.iso[^.]'", "\"location\": \"<url>/art/storage/releases/rhcos-4.9-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.9-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.9-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.9/<release>/x86_64/rhcos-<release>-live.x86_64.iso\",", "sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2", "sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=::10.10.10.254::::", "rd.route=20.20.20.0/24:20.20.20.254:enp2s0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none", "ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0", "ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0", "nameserver=1.1.1.1 nameserver=8.8.8.8", "bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp", "bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0", "team=team0:em1,em2 ip=team0:dhcp", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep -Eo '\"https.*(kernel-|initramfs.|rootfs.)\\w+(\\.img)?\"'", "\"<url>/art/storage/releases/rhcos-4.9-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64\" \"<url>/art/storage/releases/rhcos-4.9-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.9-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.9-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le\" 
\"<url>/art/storage/releases/rhcos-4.9-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.9-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.9-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x\" \"<url>/art/storage/releases/rhcos-4.9-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.9-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.9/<release>/x86_64/rhcos-<release>-live-kernel-x86_64\" \"<url>/art/storage/releases/rhcos-4.9/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img\" \"<url>/art/storage/releases/rhcos-4.9/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img\"", "DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3", "./openshift-install create manifests --dir <installation_directory>", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"master\" name: 99-master-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root'", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"worker\" name: 99-worker-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root'", "bootlist -m normal -o sda", "bootlist -m normal -o /dev/sdc /dev/sdd /dev/sde sdc sdd sde", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.22.1 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1", "watch -n5 oc get clusteroperators", "NAME 
VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"managementState\":\"Managed\"}}'", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resourses found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim:", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.9 True False False 6h50m", "oc edit configs.imageregistry/cluster", "managementState: Removed", "managementState: Managed", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True 
False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/installing/installing-on-ibm-power
Chapter 5. Pipelines CLI (tkn)
Chapter 5. Pipelines CLI (tkn) 5.1. Installing tkn Use the CLI tool to manage Red Hat OpenShift Pipelines from a terminal. The following section describes how to install the CLI tool on different platforms. You can also find the URL to the latest binaries from the OpenShift Container Platform web console by clicking the ? icon in the upper-right corner and selecting Command Line Tools . Important Running Red Hat OpenShift Pipelines on ARM hardware is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Note Both the archives and the RPMs contain the following executables: tkn tkn-pac opc Important Running Red Hat OpenShift Pipelines with the opc CLI tool is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 5.1.1. Installing the Red Hat OpenShift Pipelines CLI on Linux For Linux distributions, you can download the CLI as a tar.gz archive. Procedure Download the relevant CLI tool. Linux (x86_64, amd64) Linux on IBM Z(R) and IBM(R) LinuxONE (s390x) Linux on IBM Power(R) (ppc64le) Linux on ARM (aarch64, arm64) Unpack the archive: USD tar xvzf <file> Add the location of your tkn , tkn-pac , and opc files to your PATH environment variable. To check your PATH , run the following command: USD echo USDPATH 5.1.2. Installing the Red Hat OpenShift Pipelines CLI on Linux using an RPM For Red Hat Enterprise Linux (RHEL) version 8, you can install the Red Hat OpenShift Pipelines CLI as an RPM. Prerequisites You have an active OpenShift Container Platform subscription on your Red Hat account. You have root or sudo privileges on your local system.
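Before moving on to the RPM-based procedure, note that the archive installation in the preceding section can be scripted end to end. The following is a minimal sketch, not part of the official procedure: the download URL is a placeholder for the link you copy from the web console Command Line Tools page, and ~/.local/bin is an assumed, user-writable install location.

$ curl -LO https://example.com/pipelines/tkn-linux-amd64.tar.gz   # placeholder URL; use the console link
$ tar xvzf tkn-linux-amd64.tar.gz                                 # unpacks tkn, tkn-pac, and opc
$ mkdir -p ~/.local/bin && mv tkn tkn-pac opc ~/.local/bin/
$ export PATH="$HOME/.local/bin:$PATH"                            # add this line to ~/.bashrc to persist
$ tkn version                                                     # confirm the binary is on your PATH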
Procedure Register with Red Hat Subscription Manager: # subscription-manager register Pull the latest subscription data: # subscription-manager refresh List the available subscriptions: # subscription-manager list --available --matches '*pipelines*' In the output for the command, find the pool ID for your OpenShift Container Platform subscription and attach the subscription to the registered system: # subscription-manager attach --pool=<pool_id> Enable the repositories required by Red Hat OpenShift Pipelines: Linux (x86_64, amd64) # subscription-manager repos --enable="pipelines-1.17-for-rhel-8-x86_64-rpms" Linux on IBM Z(R) and IBM(R) LinuxONE (s390x) # subscription-manager repos --enable="pipelines-1.17-for-rhel-8-s390x-rpms" Linux on IBM Power(R) (ppc64le) # subscription-manager repos --enable="pipelines-1.17-for-rhel-8-ppc64le-rpms" Linux on ARM (aarch64, arm64) # subscription-manager repos --enable="pipelines-1.17-for-rhel-8-aarch64-rpms" Install the openshift-pipelines-client package: # yum install openshift-pipelines-client After you install the CLI, it is available using the tkn command: USD tkn version 5.1.3. Installing the Red Hat OpenShift Pipelines CLI on Windows For Windows, you can download the CLI as a zip archive. Procedure Download the CLI tool . Extract the archive with a ZIP program. Add the location of your tkn , tkn-pac , and opc files to your PATH environment variable. To check your PATH , run the following command: C:\> path 5.1.4. Installing the Red Hat OpenShift Pipelines CLI on macOS For macOS, you can download the CLI as a tar.gz archive. Procedure Download the relevant CLI tool. macOS macOS on ARM Unpack and extract the archive. Add the location of your tkn , tkn-pac , and opc files to your PATH environment variable. To check your PATH , run the following command: USD echo USDPATH 5.2. Configuring the OpenShift Pipelines tkn CLI Configure the Red Hat OpenShift Pipelines tkn CLI to enable tab completion. 5.2.1. Enabling tab completion After you install the tkn CLI, you can enable tab completion to automatically complete tkn commands or suggest options when you press Tab. Prerequisites You must have the tkn CLI tool installed. You must have bash-completion installed on your local system. Procedure The following procedure enables tab completion for Bash. Save the Bash completion code to a file: USD tkn completion bash > tkn_bash_completion Copy the file to /etc/bash_completion.d/ : USD sudo cp tkn_bash_completion /etc/bash_completion.d/ Alternatively, you can save the file to a local directory and source it from your .bashrc file instead. Tab completion is enabled when you open a new terminal. 5.3. OpenShift Pipelines tkn reference This section lists the basic tkn CLI commands. 5.3.1. Basic syntax tkn [command or options] [arguments... ] 5.3.2. Global options --help, -h 5.3.3. Utility commands 5.3.3.1. tkn Parent command for tkn CLI. Example: Display all options USD tkn 5.3.3.2. completion [shell] Print shell completion code which must be evaluated to provide interactive completion. Supported shells are bash and zsh . Example: Completion code for bash shell USD tkn completion bash 5.3.3.3. version Print version information of the tkn CLI. Example: Check the tkn version USD tkn version 5.3.4. Pipelines management commands 5.3.4.1. pipeline Manage pipelines. Example: Display help USD tkn pipeline --help 5.3.4.2. pipeline delete Delete a pipeline. Example: Delete the mypipeline pipeline from a namespace USD tkn pipeline delete mypipeline -n myspace 5.3.4.3. 
pipeline describe Describe a pipeline. Example: Describe the mypipeline pipeline USD tkn pipeline describe mypipeline 5.3.4.4. pipeline list Display a list of pipelines. Example: Display a list of pipelines USD tkn pipeline list 5.3.4.5. pipeline logs Display the logs for a specific pipeline. Example: Stream the live logs for the mypipeline pipeline USD tkn pipeline logs -f mypipeline 5.3.4.6. pipeline start Start a pipeline. Example: Start the mypipeline pipeline USD tkn pipeline start mypipeline 5.3.5. Pipeline run commands 5.3.5.1. pipelinerun Manage pipeline runs. Example: Display help USD tkn pipelinerun -h 5.3.5.2. pipelinerun cancel Cancel a pipeline run. Example: Cancel the mypipelinerun pipeline run from a namespace USD tkn pipelinerun cancel mypipelinerun -n myspace 5.3.5.3. pipelinerun delete Delete a pipeline run. Example: Delete pipeline runs from a namespace USD tkn pipelinerun delete mypipelinerun1 mypipelinerun2 -n myspace Example: Delete all pipeline runs from a namespace, except the five most recently executed pipeline runs USD tkn pipelinerun delete -n myspace --keep 5 1 1 Replace 5 with the number of most recently executed pipeline runs you want to retain. Example: Delete all pipeline runs USD tkn pipelinerun delete --all Note Starting with Red Hat OpenShift Pipelines 1.6, the tkn pipelinerun delete --all command does not delete any resources that are in the running state. 5.3.5.4. pipelinerun describe Describe a pipeline run. Example: Describe the mypipelinerun pipeline run in a namespace USD tkn pipelinerun describe mypipelinerun -n myspace 5.3.5.5. pipelinerun list List pipeline runs. Example: Display a list of pipeline runs in a namespace USD tkn pipelinerun list -n myspace 5.3.5.6. pipelinerun logs Display the logs of a pipeline run. Example: Display the logs of the mypipelinerun pipeline run with all tasks and steps in a namespace USD tkn pipelinerun logs mypipelinerun -a -n myspace 5.3.6. Task management commands 5.3.6.1. task Manage tasks. Example: Display help USD tkn task -h 5.3.6.2. task delete Delete a task. Example: Delete mytask1 and mytask2 tasks from a namespace USD tkn task delete mytask1 mytask2 -n myspace 5.3.6.3. task describe Describe a task. Example: Describe the mytask task in a namespace USD tkn task describe mytask -n myspace 5.3.6.4. task list List tasks. Example: List all the tasks in a namespace USD tkn task list -n myspace 5.3.6.5. task logs Display task logs. Example: Display logs for the mytaskrun task run of the mytask task USD tkn task logs mytask mytaskrun -n myspace 5.3.6.6. task start Start a task. Example: Start the mytask task in a namespace USD tkn task start mytask -s <ServiceAccountName> -n myspace 5.3.7. Task run commands 5.3.7.1. taskrun Manage task runs. Example: Display help USD tkn taskrun -h 5.3.7.2. taskrun cancel Cancel a task run. Example: Cancel the mytaskrun task run from a namespace USD tkn taskrun cancel mytaskrun -n myspace 5.3.7.3. taskrun delete Delete a task run. Example: Delete the mytaskrun1 and mytaskrun2 task runs from a namespace USD tkn taskrun delete mytaskrun1 mytaskrun2 -n myspace Example: Delete all but the five most recently executed task runs from a namespace USD tkn taskrun delete -n myspace --keep 5 1 1 Replace 5 with the number of most recently executed task runs you want to retain. 5.3.7.4. taskrun describe Describe a task run. Example: Describe the mytaskrun task run in a namespace USD tkn taskrun describe mytaskrun -n myspace 5.3.7.5. taskrun list List task runs.
Example: List all the task runs in a namespace USD tkn taskrun list -n myspace 5.3.7.6. taskrun logs Display task run logs. Example: Display live logs for the mytaskrun task run in a namespace USD tkn taskrun logs -f mytaskrun -n myspace 5.3.8. Condition management commands 5.3.8.1. condition Manage Conditions. Example: Display help USD tkn condition --help 5.3.8.2. condition delete Delete a Condition. Example: Delete the mycondition1 Condition from a namespace USD tkn condition delete mycondition1 -n myspace 5.3.8.3. condition describe Describe a Condition. Example: Describe the mycondition1 Condition in a namespace USD tkn condition describe mycondition1 -n myspace 5.3.8.4. condition list List Conditions. Example: List Conditions in a namespace USD tkn condition list -n myspace 5.3.9. Pipeline Resource management commands 5.3.9.1. resource Manage Pipeline Resources. Example: Display help USD tkn resource -h 5.3.9.2. resource create Create a Pipeline Resource. Example: Create a Pipeline Resource in a namespace USD tkn resource create -n myspace This is an interactive command that asks for input on the name of the Resource, type of the Resource, and the values based on the type of the Resource. 5.3.9.3. resource delete Delete a Pipeline Resource. Example: Delete the myresource Pipeline Resource from a namespace USD tkn resource delete myresource -n myspace 5.3.9.4. resource describe Describe a Pipeline Resource. Example: Describe the myresource Pipeline Resource USD tkn resource describe myresource -n myspace 5.3.9.5. resource list List Pipeline Resources. Example: List all Pipeline Resources in a namespace USD tkn resource list -n myspace 5.3.10. ClusterTask management commands Important In Red Hat OpenShift Pipelines 1.10, ClusterTask functionality of the tkn command line utility is deprecated and is planned to be removed in a future release. 5.3.10.1. clustertask Manage ClusterTasks. Example: Display help USD tkn clustertask --help 5.3.10.2. clustertask delete Delete a ClusterTask resource in a cluster. Example: Delete mytask1 and mytask2 ClusterTasks USD tkn clustertask delete mytask1 mytask2 5.3.10.3. clustertask describe Describe a ClusterTask. Example: Describe the mytask1 ClusterTask USD tkn clustertask describe mytask1 5.3.10.4. clustertask list List ClusterTasks. Example: List ClusterTasks USD tkn clustertask list 5.3.10.5. clustertask start Start ClusterTasks. Example: Start the mytask ClusterTask USD tkn clustertask start mytask 5.3.11. Trigger management commands 5.3.11.1. eventlistener Manage EventListeners. Example: Display help USD tkn eventlistener -h 5.3.11.2. eventlistener delete Delete an EventListener. Example: Delete mylistener1 and mylistener2 EventListeners in a namespace USD tkn eventlistener delete mylistener1 mylistener2 -n myspace 5.3.11.3. eventlistener describe Describe an EventListener. Example: Describe the mylistener EventListener in a namespace USD tkn eventlistener describe mylistener -n myspace 5.3.11.4. eventlistener list List EventListeners. Example: List all the EventListeners in a namespace USD tkn eventlistener list -n myspace 5.3.11.5. eventlistener logs Display logs of an EventListener. Example: Display the logs of the mylistener EventListener in a namespace USD tkn eventlistener logs mylistener -n myspace 5.3.11.6. triggerbinding Manage TriggerBindings. Example: Display TriggerBindings help USD tkn triggerbinding -h 5.3.11.7. triggerbinding delete Delete a TriggerBinding.
Example: Delete mybinding1 and mybinding2 TriggerBindings in a namespace USD tkn triggerbinding delete mybinding1 mybinding2 -n myspace 5.3.11.8. triggerbinding describe Describe a TriggerBinding. Example: Describe the mybinding TriggerBinding in a namespace USD tkn triggerbinding describe mybinding -n myspace 5.3.11.9. triggerbinding list List TriggerBindings. Example: List all the TriggerBindings in a namespace USD tkn triggerbinding list -n myspace 5.3.11.10. triggertemplate Manage TriggerTemplates. Example: Display TriggerTemplate help USD tkn triggertemplate -h 5.3.11.11. triggertemplate delete Delete a TriggerTemplate. Example: Delete mytemplate1 and mytemplate2 TriggerTemplates in a namespace USD tkn triggertemplate delete mytemplate1 mytemplate2 -n myspace 5.3.11.12. triggertemplate describe Describe a TriggerTemplate. Example: Describe the mytemplate TriggerTemplate in a namespace USD tkn triggertemplate describe mytemplate -n myspace 5.3.11.13. triggertemplate list List TriggerTemplates. Example: List all the TriggerTemplates in a namespace USD tkn triggertemplate list -n myspace 5.3.11.14. clustertriggerbinding Manage ClusterTriggerBindings. Example: Display ClusterTriggerBindings help USD tkn clustertriggerbinding -h 5.3.11.15. clustertriggerbinding delete Delete a ClusterTriggerBinding. Example: Delete myclusterbinding1 and myclusterbinding2 ClusterTriggerBindings USD tkn clustertriggerbinding delete myclusterbinding1 myclusterbinding2 5.3.11.16. clustertriggerbinding describe Describe a ClusterTriggerBinding. Example: Describe the myclusterbinding ClusterTriggerBinding USD tkn clustertriggerbinding describe myclusterbinding 5.3.11.17. clustertriggerbinding list List ClusterTriggerBindings. Example: List all ClusterTriggerBindings USD tkn clustertriggerbinding list 5.3.12. Hub interaction commands Interact with Tekton Hub for resources such as tasks and pipelines. 5.3.12.1. hub Interact with hub. Example: Display help USD tkn hub -h Example: Interact with a hub API server USD tkn hub --api-server https://api.hub.tekton.dev Note For each example, to get the corresponding sub-commands and flags, run tkn hub <command> --help . 5.3.12.2. hub downgrade Downgrade an installed resource. Example: Downgrade the mytask task in the mynamespace namespace to its older version USD tkn hub downgrade task mytask --to version -n mynamespace 5.3.12.3. hub get Get a resource manifest by its name, kind, catalog, and version. Example: Get the manifest for a specific version of the myresource pipeline or task from the tekton catalog USD tkn hub get [pipeline | task] myresource --from tekton --version version 5.3.12.4. hub info Display information about a resource by its name, kind, catalog, and version. Example: Display information about a specific version of the mytask task from the tekton catalog USD tkn hub info task mytask --from tekton --version version 5.3.12.5. hub install Install a resource from a catalog by its kind, name, and version. Example: Install a specific version of the mytask task from the tekton catalog in the mynamespace namespace USD tkn hub install task mytask --from tekton --version version -n mynamespace 5.3.12.6. hub reinstall Reinstall a resource by its kind and name. Example: Reinstall a specific version of the mytask task from the tekton catalog in the mynamespace namespace USD tkn hub reinstall task mytask --from tekton --version version -n mynamespace 5.3.12.7. hub search Search a resource by a combination of name, kind, and tags.
Example: Search a resource with a tag cli USD tkn hub search --tags cli 5.3.12.8. hub upgrade Upgrade an installed resource. Example: Upgrade the installed mytask task in the mynamespace namespace to a new version USD tkn hub upgrade task mytask --to version -n mynamespace
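As a closing illustration, the reference commands above are typically combined into short working sequences. The following walkthrough is a hypothetical example, not part of the official reference, assuming a pipeline named mypipeline already exists in the myspace namespace:

$ tkn pipeline start mypipeline -n myspace    # start a new run; tkn prints the generated run name
$ tkn pipelinerun list -n myspace             # confirm the run was created
$ tkn pipelinerun logs --last -f -n myspace   # follow the logs of the most recent run
$ tkn pipelinerun delete -n myspace --keep 5  # prune all but the five newest runs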
[ "tar xvzf <file>", "echo USDPATH", "subscription-manager register", "subscription-manager refresh", "subscription-manager list --available --matches '*pipelines*'", "subscription-manager attach --pool=<pool_id>", "subscription-manager repos --enable=\"pipelines-1.17-for-rhel-8-x86_64-rpms\"", "subscription-manager repos --enable=\"pipelines-1.17-for-rhel-8-s390x-rpms\"", "subscription-manager repos --enable=\"pipelines-1.17-for-rhel-8-ppc64le-rpms\"", "subscription-manager repos --enable=\"pipelines-1.17-for-rhel-8-aarch64-rpms\"", "yum install openshift-pipelines-client", "tkn version", "C:\\> path", "echo USDPATH", "tkn completion bash > tkn_bash_completion", "sudo cp tkn_bash_completion /etc/bash_completion.d/", "tkn", "tkn completion bash", "tkn version", "tkn pipeline --help", "tkn pipeline delete mypipeline -n myspace", "tkn pipeline describe mypipeline", "tkn pipeline list", "tkn pipeline logs -f mypipeline", "tkn pipeline start mypipeline", "tkn pipelinerun -h", "tkn pipelinerun cancel mypipelinerun -n myspace", "tkn pipelinerun delete mypipelinerun1 mypipelinerun2 -n myspace", "tkn pipelinerun delete -n myspace --keep 5 1", "tkn pipelinerun delete --all", "tkn pipelinerun describe mypipelinerun -n myspace", "tkn pipelinerun list -n myspace", "tkn pipelinerun logs mypipelinerun -a -n myspace", "tkn task -h", "tkn task delete mytask1 mytask2 -n myspace", "tkn task describe mytask -n myspace", "tkn task list -n myspace", "tkn task logs mytask mytaskrun -n myspace", "tkn task start mytask -s <ServiceAccountName> -n myspace", "tkn taskrun -h", "tkn taskrun cancel mytaskrun -n myspace", "tkn taskrun delete mytaskrun1 mytaskrun2 -n myspace", "tkn taskrun delete -n myspace --keep 5 1", "tkn taskrun describe mytaskrun -n myspace", "tkn taskrun list -n myspace", "tkn taskrun logs -f mytaskrun -n myspace", "tkn condition --help", "tkn condition delete mycondition1 -n myspace", "tkn condition describe mycondition1 -n myspace", "tkn condition list -n myspace", "tkn resource -h", "tkn resource create -n myspace", "tkn resource delete myresource -n myspace", "tkn resource describe myresource -n myspace", "tkn resource list -n myspace", "tkn clustertask --help", "tkn clustertask delete mytask1 mytask2", "tkn clustertask describe mytask1", "tkn clustertask list", "tkn clustertask start mytask", "tkn eventlistener -h", "tkn eventlistener delete mylistener1 mylistener2 -n myspace", "tkn eventlistener describe mylistener -n myspace", "tkn eventlistener list -n myspace", "tkn eventlistener logs mylistener -n myspace", "tkn triggerbinding -h", "tkn triggerbinding delete mybinding1 mybinding2 -n myspace", "tkn triggerbinding describe mybinding -n myspace", "tkn triggerbinding list -n myspace", "tkn triggertemplate -h", "tkn triggertemplate delete mytemplate1 mytemplate2 -n `myspace`", "tkn triggertemplate describe mytemplate -n `myspace`", "tkn triggertemplate list -n myspace", "tkn clustertriggerbinding -h", "tkn clustertriggerbinding delete myclusterbinding1 myclusterbinding2", "tkn clustertriggerbinding describe myclusterbinding", "tkn clustertriggerbinding list", "tkn hub -h", "tkn hub --api-server https://api.hub.tekton.dev", "tkn hub downgrade task mytask --to version -n mynamespace", "tkn hub get [pipeline | task] myresource --from tekton --version version", "tkn hub info task mytask --from tekton --version version", "tkn hub install task mytask --from tekton --version version -n mynamespace", "tkn hub reinstall task mytask --from tekton --version version -n mynamespace", "tkn hub search --tags 
cli", "tkn hub upgrade task mytask --to version -n mynamespace" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/cli_tools/pipelines-cli-tkn
Part II. Storage Administration
Part II. Storage Administration The Storage Administration section starts with storage considerations for Red Hat Enterprise Linux 7. Instructions regarding partitions, logical volume management, and swap partitions follow. Disk quotas and RAID systems come next, followed by the functions of the mount command, volume_key, and ACLs. SSD tuning, write barriers, I/O limits, and diskless systems follow. The large chapter on Online Storage comes next, and device mapper multipathing and virtual storage conclude the section.
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/part-storage-admin
Chapter 5. Installing a cluster with RHEL KVM on IBM Z and IBM(R) LinuxONE in a restricted network
Chapter 5. Installing a cluster with RHEL KVM on IBM Z and IBM(R) LinuxONE in a restricted network In OpenShift Container Platform version 4.13, you can install a cluster on IBM Z or IBM(R) LinuxONE infrastructure that you provision in a restricted network. Note While this document refers only to IBM Z, all information in it also applies to IBM(R) LinuxONE. Important Additional considerations exist for non-bare metal platforms. Review the information in the guidelines for deploying OpenShift Container Platform on non-tested platforms before you install an OpenShift Container Platform cluster. 5.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You created a registry on your mirror host and obtained the imageContentSources data for your version of OpenShift Container Platform. You must move or remove any existing installation files before you begin the installation process. This ensures that the required installation files are created and updated during the installation process. Important Ensure that installation steps are done from a machine with access to the installation media. You provisioned persistent storage using OpenShift Data Foundation or other supported storage protocols for your cluster. To deploy a private image registry, you must set up persistent storage with ReadWriteMany access. If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. You provisioned a RHEL Kernel Virtual Machine (KVM) system that is hosted on the logical partition (LPAR) and based on RHEL 8.6 or later. See Red Hat Enterprise Linux 8 and 9 Life Cycle . 5.2. About installations in restricted networks In OpenShift Container Platform 4.13, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Services' Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. Important Because of the complexity of the configuration for user-provisioned installations, consider completing a standard user-provisioned infrastructure installation before you attempt a restricted network installation using user-provisioned infrastructure. Completing this test installation might make it easier to isolate and troubleshoot any issues that might arise during your installation in a restricted network. 5.2.1.
Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 5.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.13, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. 5.4. Machine requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. One or more KVM host machines based on RHEL 8.6 or later. Each RHEL KVM host machine must have libvirt installed and running. The virtual machines are provisioned under each RHEL KVM host machine. 5.4.1. Required machines The smallest OpenShift Container Platform clusters require the following hosts: Table 5.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To improve high availability of your cluster, distribute the control plane machines over different RHEL instances on at least two physical machines. The bootstrap, control plane, and compute machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. See Red Hat Enterprise Linux technology capabilities and limits . 5.4.2. Network connectivity requirements The OpenShift Container Platform installer creates the Ignition files, which are necessary for all the Red Hat Enterprise Linux CoreOS (RHCOS) virtual machines. The automated installation of OpenShift Container Platform is performed by the bootstrap machine. It starts the installation of OpenShift Container Platform on each node, starts the Kubernetes cluster, and then finishes. During this bootstrap, the virtual machine must have an established network connection either through a Dynamic Host Configuration Protocol (DHCP) server or static IP address. 5.4.3. IBM Z network connectivity requirements To install on IBM Z under RHEL KVM, you need: A RHEL KVM host configured with an OSA or RoCE network adapter. Either a RHEL KVM host that is configured to use bridged networking in libvirt or MacVTap to connect the network to the guests. See Types of virtual network connections . 5.4.4. Host machine resource requirements The RHEL KVM host in your environment must meet the following requirements to host the virtual machines that you plan for the OpenShift Container Platform environment. See Getting started with virtualization . 
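Before sizing guests, it can be useful to confirm that a candidate RHEL KVM host actually meets these virtualization prerequisites. The following commands are a quick sanity check, not part of the documented procedure; they assume the libvirt client tools are installed on the host:

# systemctl is-active libvirtd    # libvirt must be installed and running
# virt-host-validate qemu         # verifies KVM device access, cgroup support, and related host facilities
# virsh net-list --all            # shows the libvirt networks defined on the host for guest networking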
You can install OpenShift Container Platform version 4.13 on the following IBM hardware: IBM z16 (all models), IBM z15 (all models), IBM z14 (all models) IBM(R) LinuxONE 4 (all models), IBM(R) LinuxONE III (all models), IBM(R) LinuxONE Emperor II, IBM(R) LinuxONE Rockhopper II 5.4.5. Minimum IBM Z system environment Hardware requirements The equivalent of six Integrated Facilities for Linux (IFL), which are SMT2 enabled, for each cluster. At least one network connection to both connect to the LoadBalancer service and to serve data for traffic outside the cluster. Note You can use dedicated or shared IFLs to assign sufficient compute resources. Resource sharing is one of the key strengths of IBM Z. However, you must adjust capacity correctly on each hypervisor layer and ensure sufficient resources for every OpenShift Container Platform cluster. Important Since the overall performance of the cluster can be impacted, the LPARs that are used to set up the OpenShift Container Platform clusters must provide sufficient compute capacity. In this context, LPAR weight management, entitlements, and CPU shares on the hypervisor level play an important role. Operating system requirements One LPAR running on RHEL 8.6 or later with KVM, which is managed by libvirt On your RHEL KVM host, set up: Three guest virtual machines for OpenShift Container Platform control plane machines Two guest virtual machines for OpenShift Container Platform compute machines One guest virtual machine for the temporary OpenShift Container Platform bootstrap machine 5.4.6. Minimum resource requirements Each cluster virtual machine must meet the following minimum requirements: Virtual Machine Operating System vCPU [1] Virtual RAM Storage IOPS Bootstrap RHCOS 4 16 GB 100 GB N/A Control plane RHCOS 4 16 GB 100 GB N/A Compute RHCOS 2 8 GB 100 GB N/A One physical core (IFL) provides two logical cores (threads) when SMT-2 is enabled. The hypervisor can provide two or more vCPUs. 5.4.7. Preferred IBM Z system environment Hardware requirements Three LPARs that each have the equivalent of six IFLs, which are SMT2 enabled, for each cluster. Two network connections to both connect to the LoadBalancer service and to serve data for traffic outside the cluster. Operating system requirements For high availability, two or three LPARs running on RHEL 8.6 or later with KVM, which are managed by libvirt. On your RHEL KVM host, set up: Three guest virtual machines for OpenShift Container Platform control plane machines, distributed across the RHEL KVM host machines. At least six guest virtual machines for OpenShift Container Platform compute machines, distributed across the RHEL KVM host machines. One guest virtual machine for the temporary OpenShift Container Platform bootstrap machine. To ensure the availability of integral components in an overcommitted environment, increase the priority of the control plane by using cpu_shares . Do the same for infrastructure nodes, if they exist. See schedinfo in IBM Documentation. 5.4.8. Preferred resource requirements The preferred requirements for each cluster virtual machine are: Virtual Machine Operating System vCPU Virtual RAM Storage Bootstrap RHCOS 4 16 GB 120 GB Control plane RHCOS 8 16 GB 120 GB Compute RHCOS 6 8 GB 120 GB 5.4.9. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation.
The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. Additional resources Recommended host practices for IBM Z & IBM(R) LinuxONE environments 5.4.10. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines. Note If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 5.4.10.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 5.4.10.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Table 5.2. 
Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 5.3. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 5.4. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines reads the information and can sync the clock with the NTP servers. Additional resources Configuring chrony time service 5.4.11. User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 5.5. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes.
If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Control plane machines <control_plane><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <compute><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 5.4.11.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 5.1. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. 
The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 5.2. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. 5.4.12. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application Ingress load balancing infrastructure. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 5.6. 
API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 5.7. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 5.4.12.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 5.3. 
Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 3 Port 22623 handles the machine config server traffic and points to the control plane machines. 5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 5.5. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. This includes configuring IP networking and network connectivity for your cluster nodes, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. 
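As a concrete illustration of the firewall step in the procedure that follows, the ports from the tables above can be opened with firewalld on a RHEL machine. This is a minimal sketch of one possible configuration, assuming firewalld is the firewall in use; adapt the zone and port list to your environment:

# firewall-cmd --permanent --add-port=6443/tcp --add-port=22623/tcp    # Kubernetes API and machine config server
# firewall-cmd --permanent --add-port=2379-2380/tcp                    # etcd server and peer ports
# firewall-cmd --permanent --add-port=10250-10259/tcp --add-port=10256/tcp
# firewall-cmd --permanent --add-port=30000-32767/tcp                  # Kubernetes node port range
# firewall-cmd --permanent --add-port=4789/udp --add-port=6081/udp     # VXLAN and Geneve
# firewall-cmd --reload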
Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure If you are using DHCP to provide the IP networking configuration to your cluster nodes, configure your DHCP service. Add persistent IP addresses for the nodes to your DHCP server configuration. In your configuration, match the MAC address of the relevant network interface to the intended IP address for each node. When you use DHCP to configure IP addressing for the cluster machines, the machines also obtain the DNS server information through DHCP. Define the persistent DNS server address that is used by the cluster nodes through your DHCP server configuration. Note If you are not using a DHCP service, you must provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. Define the hostnames of your cluster nodes in your DHCP server configuration. See the Setting the cluster node hostnames through DHCP section for details about hostname considerations. Note If you are not using a DHCP service, the cluster nodes obtain their hostname through a reverse DNS lookup. Choose to perform either a fast-track installation of Red Hat Enterprise Linux CoreOS (RHCOS) or a full installation of Red Hat Enterprise Linux CoreOS (RHCOS). For the full installation, you must set up an HTTP or HTTPS server to provide Ignition files and install images to the cluster nodes. For the fast-track installation, an HTTP or HTTPS server is not required; however, a DHCP server is required. See sections "Fast-track installation: Creating Red Hat Enterprise Linux CoreOS (RHCOS) machines" and "Full installation: Creating Red Hat Enterprise Linux CoreOS (RHCOS) machines". Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See the Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Important By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers. Set up the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration.
From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. 5.6. Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure. Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 604800 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com. 604800 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: USD dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console: USD dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name. Check that the result points to the IP address of the bootstrap node: USD dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain> Example output bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. 
Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output 96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. 5.7. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following command to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.
If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 . Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 5.8. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now. Additional resources Installation configuration parameters for IBM Z 5.8.1. Sample install-config.yaml file for IBM Z You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 additionalTrustBundle: | 17 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 18 - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings.
To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Specifies whether to enable or disable simultaneous multithreading (SMT), or hyperthreading. By default, SMT is enabled to increase the performance of the cores in your machines. You can disable it by setting the parameter value to Disabled . If you disable SMT, you must disable it in all cluster machines; this includes both control plane and compute machines. Note Simultaneous multithreading (SMT) is enabled by default. If SMT is not available on your OpenShift Container Platform nodes, the hyperthreading parameter has no effect. Important If you disable hyperthreading , whether on your OpenShift Container Platform nodes or in the install-config.yaml file, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned infrastructure. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. In user-provisioned installations, you must manually deploy the compute machines before you finish installing the cluster. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 7 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note The Class E CIDR range is reserved for future use. To use the Class E CIDR range, you must ensure that your networking environment accepts the IP addresses within the Class E CIDR range. 10 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 11 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 12 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 13 You must set the platform to none . You cannot provide additional platform configuration variables for IBM Z infrastructure. Important Clusters that are installed with the platform type none are unable to use some features, such as managing compute machines with the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that would normally support the feature.
This parameter cannot be changed after installation. 14 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. Important OpenShift Container Platform 4.13 is based on Red Hat Enterprise Linux (RHEL) 9.2. RHEL 9.2 cryptographic modules have not yet been submitted for FIPS validation. For more information, see "About this release" in the 4.13 OpenShift Container Platform Release Notes . 15 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. 16 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 17 Add the additionalTrustBundle parameter and value. The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority or the self-signed certificate that you generated for the mirror registry. 18 Provide the imageContentSources section from the output of the command to mirror the repository. 5.8.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 
4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 5.8.3. Configuring a three-node cluster Optionally, you can deploy zero compute machines in a minimal three-node cluster that consists of three control plane machines only. This provides smaller, more resource-efficient clusters for cluster administrators and developers to use for testing, development, and production. In three-node OpenShift Container Platform environments, the three control plane machines are schedulable, which means that your application workloads are scheduled to run on them. Prerequisites You have an existing install-config.yaml file. Procedure Ensure that the number of compute replicas is set to 0 in your install-config.yaml file, as shown in the following compute stanza: compute: - name: worker platform: {} replicas: 0 Note You must set the value of the replicas parameter for the compute machines to 0 when you install OpenShift Container Platform on user-provisioned infrastructure, regardless of the number of compute machines you are deploying. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. This does not apply to user-provisioned installations, where the compute machines are deployed manually. Note The preferred resource for control plane nodes is six vCPUs and 21 GB. For three control plane nodes, this is the memory + vCPU equivalent of a minimum five-node cluster. You should back the three nodes, each installed on a 120 GB disk, with three IFLs that are SMT2 enabled. The minimum tested setup is three vCPUs and 10 GB on a 120 GB disk for each control plane node. For three-node cluster installations, follow these steps: If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes.
In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. See the Load balancing requirements for user-provisioned infrastructure section for more information. When you create the Kubernetes manifest files in the following procedure, ensure that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml file is set to true . This enables your application workloads to run on the control plane nodes. Do not deploy any compute nodes when you create the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 5.9. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group and these fields cannot be changed: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin, such as OpenShift SDN or OVN-Kubernetes. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 5.9.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 5.8. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 5.9. 
defaultNetwork object Field Type Description type string Either OpenShiftSDN or OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. openshiftSDNConfig object This object is only valid for the OpenShift SDN network plugin. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OpenShift SDN network plugin The following table describes the configuration fields for the OpenShift SDN network plugin: Table 5.10. openshiftSDNConfig object Field Type Description mode string Configures the network isolation mode for OpenShift SDN. The default value is NetworkPolicy . The values Multitenant and Subnet are available for backwards compatibility with OpenShift Container Platform 3.x but are not recommended. This value cannot be changed after cluster installation. mtu integer The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1450 . This value cannot be changed after cluster installation. vxlanPort integer The port to use for all VXLAN packets. The default value is 4789 . This value cannot be changed after cluster installation. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number. On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999 . Example OpenShift SDN configuration defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789 Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 5.11. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . 
genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify an empty object to enable IPsec encryption. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the default audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. v4InternalSubnet If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . This field cannot be changed after installation. The default value is 100.64.0.0/16 . v6InternalSubnet If your existing network infrastructure overlaps with the fd98::/48 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. This field cannot be changed after installation. The default value is fd98::/48 . Table 5.12. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 5.13. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack.
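For reference, a minimal sketch of how the routingViaHost field from the preceding table is nested follows; it is illustrative only and is not a complete network plugin configuration:

defaultNetwork:
  type: OVNKubernetes
  ovnKubernetesConfig:
    gatewayConfig:
      routingViaHost: true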
Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {} kubeProxyConfig object configuration The values for the kubeProxyConfig object are defined in the following table: Table 5.14. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 5.10. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Note The installation program that generates the manifest and Ignition files is architecture specific and can be obtained from the client image mirror . The Linux version of the installation program runs on s390x only. This installer program is also available as a Mac OS version. Prerequisites You obtained the OpenShift Container Platform installation program. For a restricted network installation, these files are on your mirror host. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. 
Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory. 5.11. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on IBM Z infrastructure that you provision, you must install Red Hat Enterprise Linux CoreOS (RHCOS) as Red Hat Enterprise Linux (RHEL) guest virtual machines. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS machines have rebooted. You can perform a fast-track installation of RHCOS that uses a prepackaged QEMU copy-on-write (QCOW2) disk image. Alternatively, you can perform a full installation on a new QCOW2 disk image. To add further security to your system, you can optionally install RHCOS using IBM Secure Execution before proceeding to the fast-track installation. 5.11.1. Installing RHCOS using IBM Secure Execution Before you install RHCOS using IBM Secure Execution, you must prepare the underlying infrastructure. Prerequisites IBM z15 or later, or IBM(R) LinuxONE III or later. Red Hat Enterprise Linux (RHEL) 8 or later. You have a bootstrap Ignition file. The file is not protected, enabling others to view and edit it. You have verified that the boot image has not been altered after installation. You must run all your nodes as IBM Secure Execution guests. Procedure Prepare your RHEL KVM host to support IBM Secure Execution. By default, KVM hosts do not support guests in IBM Secure Execution mode. To support guests in IBM Secure Execution mode, KVM hosts must boot in LPAR mode with the kernel parameter specification prot_virt=1 . To enable prot_virt=1 on RHEL 8, follow these steps: Navigate to /boot/loader/entries/ to modify your bootloader configuration file *.conf . Add the kernel command line parameter prot_virt=1 . Run the zipl command and reboot your system. KVM hosts that successfully start with support for IBM Secure Execution for Linux issue the following kernel message: prot_virt: Reserving <amount>MB as ultravisor base storage.
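As a minimal sketch, assuming the grubby utility is available on your RHEL 8 KVM host, you can apply the preceding bootloader steps as follows; the commands append prot_virt=1 to every boot entry, rewrite the boot record, and reboot: # grubby --update-kernel=ALL --args="prot_virt=1" # zipl # reboot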
To verify that the KVM host now supports IBM Secure Execution, run the following command: # cat /sys/firmware/uv/prot_virt_host Example output 1 The value of this attribute is 1 for Linux instances that detect their environment as consistent with that of a secure host. For other instances, the value is 0. Add your host keys to the KVM guest via Ignition. During the first boot, RHCOS looks for your host keys to re-encrypt itself with them. RHCOS searches for files starting with ibm-z-hostkey- in the /etc/se-hostkeys directory. All host keys, for each machine the cluster is running on, must be loaded into the directory by the administrator. After first boot, you cannot run the VM on any other machines. Note You need to prepare your Ignition file on a safe system, for example, on another IBM Secure Execution guest. For example: { "ignition": { "version": "3.0.0" }, "storage": { "files": [ { "path": "/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt", "contents": { "source": "data:;base64,<base64 encoded hostkey document>" }, "mode": 420 }, { "path": "/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt", "contents": { "source": "data:;base64,<base64 encoded hostkey document>" }, "mode": 420 } ] } } Note You can add as many host keys as required if you want your node to be able to run on multiple IBM Z machines. To generate the Base64 encoded string, run the following command: base64 <your-hostkey>.crt Compared to guests not running IBM Secure Execution, the first boot of the machine is longer because the entire image is encrypted with a randomly generated LUKS passphrase before the Ignition phase. Add Ignition protection To protect the secrets that are stored in the Ignition config file from being read or even modified, you must encrypt the Ignition config file. Note To achieve the desired security, Ignition logging and local login are disabled by default when running IBM Secure Execution. Fetch the public GPG key for the secex-qemu.qcow2 image and encrypt the Ignition config with the key by running the following command: gpg --recipient-file /path/to/ignition.gpg.pub --yes --output /path/to/config.ign.gpg --verbose --armor --encrypt /path/to/config.ign Note Before starting the VM, replace serial=ignition with serial=ignition_crypted when mounting the Ignition file. When Ignition runs on the first boot, and the decryption is successful, you will see an output like the following example: Example output [ 2.801433] systemd[1]: Starting coreos-ignition-setup-user.service - CoreOS Ignition User Config Setup... [ 2.803959] coreos-secex-ignition-decrypt[731]: gpg: key <key_name>: public key "Secure Execution (secex) 38.20230323.dev.0" imported [ 2.808874] coreos-secex-ignition-decrypt[740]: gpg: encrypted with rsa4096 key, ID <key_name>, created <yyyy-mm-dd> [ OK ] Finished coreos-secex-igni...S Secex Ignition Config Decryptor. If the decryption fails, you will see an output like the following example: Example output Starting coreos-ignition-s...reOS Ignition User Config Setup... [ 2.863675] coreos-secex-ignition-decrypt[729]: gpg: key <key_name>: public key "Secure Execution (secex) 38.20230323.dev.0" imported [ 2.869178] coreos-secex-ignition-decrypt[738]: gpg: encrypted with RSA key, ID <key_name> [ 2.870347] coreos-secex-ignition-decrypt[738]: gpg: public key decryption failed: No secret key [ 2.870371] coreos-secex-ignition-decrypt[738]: gpg: decryption failed: No secret key Follow the fast-track installation procedure to install nodes using the IBM Secure Execution QCOW image.
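For reference, when you later create the guest with virt-install in the fast-track installation procedure, the encrypted Ignition config is attached with the changed serial value; the path in this sketch is a placeholder: --disk path=/path/to/config.ign.gpg,format=raw,readonly=on,serial=ignition_crypted,startup_policy=optional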
Additional resources Introducing IBM Secure Execution for Linux Linux as an IBM Secure Execution host or guest 5.11.2. Configuring NBDE with static IP in an IBM Z or IBM(R) LinuxONE environment Enabling NBDE disk encryption in an IBM Z or IBM(R) LinuxONE environment requires additional steps, which are described in detail in this section. Prerequisites You have set up an external Tang server. See Network-bound disk encryption for instructions. You have installed the butane utility. You have reviewed the instructions for how to create machine configs with Butane. Procedure Create Butane configuration files for the control plane and compute nodes. The following example of a Butane configuration for a control plane node creates a file named master-storage.bu for disk encryption: variant: openshift version: 4.13.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 2 1 The cipher option is only required if FIPS mode is enabled. Omit the entry if FIPS is disabled. 2 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. Important OpenShift Container Platform 4.13 is based on Red Hat Enterprise Linux (RHEL) 9.2. RHEL 9.2 cryptographic modules have not yet been submitted for FIPS validation. For more information, see "About this release" in the 4.13 OpenShift Container Platform Release Notes . Create a customized initramfs file to boot the machine by running the following command: USD coreos-installer pxe customize \ /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img \ --dest-device /dev/disk/by-id/scsi-<serial-number> --dest-karg-append \ ip=<ip-address>::<gateway-ip>:<subnet-mask>::<network-device>:none \ --dest-karg-append nameserver=<nameserver-ip> \ --dest-karg-append rd.neednet=1 -o \ /root/rhcos-bootfiles/<Node-name>-initramfs.s390x.img Note Before first boot, you must customize the initramfs for each node in the cluster, and add PXE kernel parameters. Create a parameter file that includes ignition.platform.id=metal and ignition.firstboot . Example kernel parameter file for the control plane machine: rd.neednet=1 \ console=ttysclp0 \ ignition.firstboot ignition.platform.id=metal \ coreos.live.rootfs_url=http://10.19.17.25/redhat/ocp/rhcos-413.86.202302201445-0/rhcos-413.86.202302201445-0-live-rootfs.s390x.img \ coreos.inst.ignition_url=http://bastion.ocp-cluster1.example.com:8080/ignition/master.ign \ ip=10.19.17.2::10.19.17.1:255.255.255.0::enbdd0:none nameserver=10.19.17.1 \ zfcp.allow_lun_scan=0 \ rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 \ rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000 Note Write all options in the parameter file as a single line and make sure you have no newline characters. Additional resources Creating machine configs with Butane 5.11.3. Fast-track installation by using a prepackaged QCOW2 disk image Complete the following steps to create the machines in a fast-track installation of Red Hat Enterprise Linux CoreOS (RHCOS), importing a prepackaged Red Hat Enterprise Linux CoreOS (RHCOS) QEMU copy-on-write (QCOW2) disk image. Prerequisites At least one LPAR running on RHEL 8.6 or later with KVM, referred to as RHEL KVM host in this procedure.
The KVM/QEMU hypervisor is installed on the RHEL KVM host. A domain name server (DNS) that can perform hostname and reverse lookup for the nodes. A DHCP server that provides IP addresses. Procedure Obtain the Red Hat Enterprise Linux CoreOS (RHCOS) QEMU copy-on-write (QCOW2) disk image file from the Product Downloads page on the Red Hat Customer Portal or from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate RHCOS QCOW2 image described in the following procedure. Download the QCOW2 disk image and Ignition files to a common directory on the RHEL KVM host. For example: /var/lib/libvirt/images Note The Ignition files are generated by the OpenShift Container Platform installer. Create a new disk image with the QCOW2 disk image backing file for each KVM guest node. USD qemu-img create -f qcow2 -F qcow2 -b /var/lib/libvirt/images/{source_rhcos_qemu} /var/lib/libvirt/images/{vmname}.qcow2 {size} Create the new KVM guest nodes using the Ignition file and the new disk image. USD virt-install --noautoconsole \ --connect qemu:///system \ --name {vn_name} \ --memory {memory} \ --vcpus {vcpus} \ --disk {disk} \ --import \ --network network={network},mac={mac} \ --disk path={ign_file},format=raw,readonly=on,serial=ignition,startup_policy=optional 1 1 If IBM Secure Execution is enabled, replace serial=ignition with serial=ignition_crypted . 5.11.4. Full installation on a new QCOW2 disk image Complete the following steps to create the machines in a full installation on a new QEMU copy-on-write (QCOW2) disk image. Prerequisites At least one LPAR running on RHEL 8.6 or later with KVM, referred to as RHEL KVM host in this procedure. The KVM/QEMU hypervisor is installed on the RHEL KVM host. A domain name server (DNS) that can perform hostname and reverse lookup for the nodes. An HTTP or HTTPS server is set up. Procedure Obtain the RHCOS kernel, initramfs, and rootfs files from the Product Downloads page on the Red Hat Customer Portal or from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate RHCOS artifacts described in the following procedure. The file names contain the OpenShift Container Platform version number. They resemble the following examples: kernel: rhcos-<version>-live-kernel-<architecture> initramfs: rhcos-<version>-live-initramfs.<architecture>.img rootfs: rhcos-<version>-live-rootfs.<architecture>.img Move the downloaded RHCOS live kernel, initramfs, and rootfs as well as the Ignition files to an HTTP or HTTPS server before you launch virt-install . Note The Ignition files are generated by the OpenShift Container Platform installer. Create the new KVM guest nodes using the RHCOS kernel, initramfs, and Ignition files, the new disk image, and adjusted parm line arguments. For --location , specify the location of the kernel/initrd on the HTTP or HTTPS server. For coreos.inst.ignition_url= , specify the Ignition file for the machine role. Use bootstrap.ign , master.ign , or worker.ign . Only HTTP and HTTPS protocols are supported. For coreos.live.rootfs_url= , specify the matching rootfs artifact for the kernel and initramfs you are booting.
Only HTTP and HTTPS protocols are supported. USD virt-install \ --connect qemu:///system \ --name {vn_name} \ --vcpus {vcpus} \ --memory {memory_mb} \ --disk {vn_name}.qcow2,size={image_size| default(10,true)} \ --network network={virt_network_parm} \ --boot hd \ --location {media_location},kernel={rhcos_kernel},initrd={rhcos_initrd} \ --extra-args "rd.neednet=1 coreos.inst.install_dev=/dev/vda coreos.live.rootfs_url={rhcos_liveos} ip={ip}::{default_gateway}:{subnet_mask_length}:{vn_name}:enc1:none:{MTU} nameserver={dns} coreos.inst.ignition_url={rhcos_ign}" \ --noautoconsole \ --wait 5.11.5. Advanced RHCOS installation reference This section illustrates the networking configuration and other advanced options that allow you to modify the Red Hat Enterprise Linux CoreOS (RHCOS) manual installation process. The following tables describe the kernel arguments and command-line options you can use with the RHCOS live installer and the coreos-installer command. 5.11.5.1. Networking options for ISO installations If you install RHCOS from an ISO image, you can add kernel arguments manually when you boot the image to configure networking for a node. If no networking arguments are specified, DHCP is activated in the initramfs when RHCOS detects that networking is required to fetch the Ignition config file. Important When adding networking arguments manually, you must also add the rd.neednet=1 kernel argument to bring the network up in the initramfs. The following information provides examples for configuring networking on your RHCOS nodes for ISO installations. The examples describe how to use the ip= and nameserver= kernel arguments. Note Ordering is important when adding the kernel arguments: ip= and nameserver= . The networking options are passed to the dracut tool during system boot. For more information about the networking options supported by dracut , see the dracut.cmdline manual page. The following examples are the networking options for ISO installation. Configuring DHCP or static IP addresses To configure an IP address, either use DHCP ( ip=dhcp ) or set an individual static IP address ( ip=<host_ip> ). If setting a static IP, you must then identify the DNS server IP address ( nameserver=<dns_ip> ) on each node. The following example sets: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The hostname to core0.example.com The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41 Note When you use DHCP to configure IP addressing for the RHCOS machines, the machines also obtain the DNS server information through DHCP. For DHCP-based deployments, you can define the DNS server address that is used by the RHCOS nodes through your DHCP server configuration. Configuring an IP address without a static hostname You can configure an IP address without assigning a static hostname. If a static hostname is not set by the user, it will be picked up and automatically set by a reverse DNS lookup. To configure an IP address without a static hostname refer to the following example: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. 
ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41 Specifying multiple network interfaces You can specify multiple network interfaces by setting multiple ip= entries. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring default gateway and route Optional: You can configure routes to additional networks by setting an rd.route= value. Note When you configure one or multiple networks, one default gateway is required. If the additional network gateway is different from the primary network gateway, the default gateway must be the primary network gateway. Run the following command to configure the default gateway: ip=::10.10.10.254:::: Enter the following command to configure the route for the additional network: rd.route=20.20.20.0/24:20.20.20.254:enp2s0 Disabling DHCP on a single interface You can disable DHCP on a single interface, such as when there are two or more network interfaces and only one interface is being used. In the example, the enp1s0 interface has a static networking configuration and DHCP is disabled for enp2s0 , which is not used: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none Combining DHCP and static IP configurations You can combine DHCP and static IP configurations on systems with multiple network interfaces, for example: ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring VLANs on individual interfaces Optional: You can configure VLANs on individual interfaces by using the vlan= parameter. To configure a VLAN on a network interface and use a static IP address, run the following command: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0 To configure a VLAN on a network interface and to use DHCP, run the following command: ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0 Providing multiple DNS servers You can provide multiple DNS servers by adding a nameserver= entry for each server, for example: nameserver=1.1.1.1 nameserver=8.8.8.8 5.12. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.26.0 up INFO Waiting up to 30m0s for bootstrapping to complete... 
INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. 5.13. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 5.14. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.26.0 master-1 Ready master 63m v1.26.0 master-2 Ready master 64m v1.26.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). 
If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.26.0 master-1 Ready master 73m v1.26.0 master-2 Ready master 74m v1.26.0 worker-0 Ready worker 11m v1.26.0 worker-1 Ready worker 11m v1.26.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 5.15. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. 
Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.13.0 True False False 19m baremetal 4.13.0 True False False 37m cloud-credential 4.13.0 True False False 40m cluster-autoscaler 4.13.0 True False False 37m config-operator 4.13.0 True False False 38m console 4.13.0 True False False 26m csi-snapshot-controller 4.13.0 True False False 37m dns 4.13.0 True False False 37m etcd 4.13.0 True False False 36m image-registry 4.13.0 True False False 31m ingress 4.13.0 True False False 30m insights 4.13.0 True False False 31m kube-apiserver 4.13.0 True False False 26m kube-controller-manager 4.13.0 True False False 36m kube-scheduler 4.13.0 True False False 36m kube-storage-version-migrator 4.13.0 True False False 37m machine-api 4.13.0 True False False 29m machine-approver 4.13.0 True False False 37m machine-config 4.13.0 True False False 36m marketplace 4.13.0 True False False 37m monitoring 4.13.0 True False False 29m network 4.13.0 True False False 38m node-tuning 4.13.0 True False False 37m openshift-apiserver 4.13.0 True False False 32m openshift-controller-manager 4.13.0 True False False 30m openshift-samples 4.13.0 True False False 32m operator-lifecycle-manager 4.13.0 True False False 37m operator-lifecycle-manager-catalog 4.13.0 True False False 37m operator-lifecycle-manager-packageserver 4.13.0 True False False 32m service-ca 4.13.0 True False False 38m storage 4.13.0 True False False 37m Configure the Operators that are not available. 5.15.1. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration → Cluster Settings → Configuration → OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 5.15.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 5.15.2.1. Configuring registry storage for IBM Z As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a cluster on IBM Z. You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Data Foundation.
Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. The storage must have 100Gi capacity. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.13 True False False 6h50m Ensure that your registry is set to managed to enable building and pushing of images. Run: USD oc edit configs.imageregistry/cluster Then, change the line managementState: Removed to managementState: Managed 5.15.2.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 5.16. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration.
Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.13.0 True False False 19m baremetal 4.13.0 True False False 37m cloud-credential 4.13.0 True False False 40m cluster-autoscaler 4.13.0 True False False 37m config-operator 4.13.0 True False False 38m console 4.13.0 True False False 26m csi-snapshot-controller 4.13.0 True False False 37m dns 4.13.0 True False False 37m etcd 4.13.0 True False False 36m image-registry 4.13.0 True False False 31m ingress 4.13.0 True False False 30m insights 4.13.0 True False False 31m kube-apiserver 4.13.0 True False False 26m kube-controller-manager 4.13.0 True False False 36m kube-scheduler 4.13.0 True False False 36m kube-storage-version-migrator 4.13.0 True False False 37m machine-api 4.13.0 True False False 29m machine-approver 4.13.0 True False False 37m machine-config 4.13.0 True False False 36m marketplace 4.13.0 True False False 37m monitoring 4.13.0 True False False 29m network 4.13.0 True False False 38m node-tuning 4.13.0 True False False 37m openshift-apiserver 4.13.0 True False False 32m openshift-controller-manager 4.13.0 True False False 30m openshift-samples 4.13.0 True False False 32m operator-lifecycle-manager 4.13.0 True False False 37m operator-lifecycle-manager-catalog 4.13.0 True False False 37m operator-lifecycle-manager-packageserver 4.13.0 True False False 32m service-ca 4.13.0 True False False 38m storage 4.13.0 True False False 37m Alternatively, the following command notifies you when the cluster is available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from the Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods.
To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the previous command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the previous command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Post-installation machine configuration tasks documentation for more information. Register your cluster on the Cluster registration page. Additional resources How to generate SOSREPORT within OpenShift Container Platform version 4 nodes without SSH . 5.17. Next steps Customize your cluster . If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores . If necessary, you can opt out of remote health reporting . If necessary, see Registering your disconnected cluster .
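As a convenience, the watch loops shown in sections 5.15 and 5.16 can be replaced with a single blocking command; this is a sketch rather than part of the documented procedure, and the timeout value is an arbitrary example:

# Block until every cluster Operator reports Available=True, or time out.
oc wait clusteroperators --all --for=condition=Available --timeout=30m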
[ "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 
604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 additionalTrustBundle: | 17 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 18 - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "compute: - name: worker platform: {} replicas: 0", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create manifests --dir <installation_directory> 1", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "prot_virt: Reserving <amount>MB as ultravisor base storage.", "cat /sys/firmware/uv/prot_virt_host", "1", "{ \"ignition\": { \"version\": \"3.0.0\" }, \"storage\": { \"files\": [ { \"path\": \"/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt\", \"contents\": { \"source\": \"data:;base64,<base64 encoded hostkey document>\" }, \"mode\": 420 }, { \"path\": \"/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt\", \"contents\": { \"source\": \"data:;base64,<base64 encoded hostkey document>\" }, \"mode\": 420 } ] } } ```", "base64 <your-hostkey>.crt", "gpg --recipient-file /path/to/ignition.gpg.pub --yes --output /path/to/config.ign.gpg --verbose --armor --encrypt /path/to/config.ign", "[ 2.801433] systemd[1]: Starting coreos-ignition-setup-user.service - CoreOS Ignition User Config Setup [ 2.803959] coreos-secex-ignition-decrypt[731]: gpg: key <key_name>: public key \"Secure Execution (secex) 38.20230323.dev.0\" imported [ 2.808874] coreos-secex-ignition-decrypt[740]: gpg: encrypted with rsa4096 key, ID <key_name>, created <yyyy-mm-dd> [ OK ] Finished coreos-secex-igni...S Secex Ignition Config Decryptor.", "Starting coreos-ignition-s...reOS Ignition User Config Setup [ 2.863675] coreos-secex-ignition-decrypt[729]: gpg: key <key_name>: public key \"Secure Execution (secex) 38.20230323.dev.0\" imported [ 2.869178] coreos-secex-ignition-decrypt[738]: gpg: encrypted with RSA key, ID <key_name> [ 2.870347] coreos-secex-ignition-decrypt[738]: gpg: public key decryption failed: No secret key [ 2.870371] coreos-secex-ignition-decrypt[738]: gpg: decryption failed: No secret key", "variant: openshift version: 4.13.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 2", "coreos-installer pxe customize /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img --dest-device /dev/disk/by-id/scsi-<serial-number> --dest-karg-append ip=<ip-address>::<gateway-ip>:<subnet-mask>::<network-device>:none --dest-karg-append nameserver=<nameserver-ip> --dest-karg-append rd.neednet=1 -o /root/rhcos-bootfiles/<Node-name>-initramfs.s390x.img", "rd.neednet=1 console=ttysclp0 ignition.firstboot ignition.platform.id=metal coreos.live.rootfs_url=http://10.19.17.25/redhat/ocp/rhcos-413.86.202302201445-0/rhcos-413.86.202302201445-0-live-rootfs.s390x.img coreos.inst.ignition_url=http://bastion.ocp-cluster1.example.com:8080/ignition/master.ign ip=10.19.17.2::10.19.17.1:255.255.255.0::enbdd0:none nameserver=10.19.17.1 zfcp.allow_lun_scan=0 rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000", "qemu-img create -f qcow2 -F qcow2 -b /var/lib/libvirt/images/{source_rhcos_qemu} /var/lib/libvirt/images/{vmname}.qcow2 {size}", "virt-install --noautoconsole --connect qemu:///system --name {vn_name} --memory {memory} --vcpus {vcpus} --disk {disk} --import --network network={network},mac={mac} --disk path={ign_file},format=raw,readonly=on,serial=ignition,startup_policy=optional 1", "virt-install --connect qemu:///system --name {vn_name} --vcpus {vcpus} --memory {memory_mb} --disk 
{vn_name}.qcow2,size={image_size| default(10,true)} --network network={virt_network_parm} --boot hd --location {media_location},kernel={rhcos_kernel},initrd={rhcos_initrd} --extra-args \"rd.neednet=1 coreos.inst.install_dev=/dev/vda coreos.live.rootfs_url={rhcos_liveos} ip={ip}::{default_gateway}:{subnet_mask_length}:{vn_name}:enc1:none:{MTU} nameserver={dns} coreos.inst.ignition_url={rhcos_ign}\" --noautoconsole --wait", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=::10.10.10.254::::", "rd.route=20.20.20.0/24:20.20.20.254:enp2s0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none", "ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0", "ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0", "nameserver=1.1.1.1 nameserver=8.8.8.8", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.26.0 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.26.0 master-1 Ready master 63m v1.26.0 master-2 Ready master 64m v1.26.0", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.26.0 master-1 Ready master 73m v1.26.0 master-2 Ready master 74m v1.26.0 worker-0 Ready worker 11m v1.26.0 worker-1 Ready worker 11m v1.26.0", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.13.0 True False False 19m baremetal 4.13.0 True False False 37m cloud-credential 4.13.0 True False False 40m cluster-autoscaler 4.13.0 True False False 37m config-operator 4.13.0 True False False 38m console 4.13.0 True False False 26m csi-snapshot-controller 4.13.0 True False False 37m dns 4.13.0 True False False 37m etcd 4.13.0 True False False 36m image-registry 4.13.0 True False False 31m ingress 4.13.0 True False False 30m insights 4.13.0 True False False 31m kube-apiserver 4.13.0 True False False 26m kube-controller-manager 4.13.0 True False False 36m kube-scheduler 4.13.0 True False False 36m 
kube-storage-version-migrator 4.13.0 True False False 37m machine-api 4.13.0 True False False 29m machine-approver 4.13.0 True False False 37m machine-config 4.13.0 True False False 36m marketplace 4.13.0 True False False 37m monitoring 4.13.0 True False False 29m network 4.13.0 True False False 38m node-tuning 4.13.0 True False False 37m openshift-apiserver 4.13.0 True False False 32m openshift-controller-manager 4.13.0 True False False 30m openshift-samples 4.13.0 True False False 32m operator-lifecycle-manager 4.13.0 True False False 37m operator-lifecycle-manager-catalog 4.13.0 True False False 37m operator-lifecycle-manager-packageserver 4.13.0 True False False 32m service-ca 4.13.0 True False False 38m storage 4.13.0 True False False 37m", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resources found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim:", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.13 True False False 6h50m", "oc edit configs.imageregistry/cluster", "managementState: Removed", "managementState: Managed", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.13.0 True False False 19m baremetal 4.13.0 True False False 37m cloud-credential 4.13.0 True False False 40m cluster-autoscaler 4.13.0 True False False 37m config-operator 4.13.0 True False False 38m console 4.13.0 True False False 26m csi-snapshot-controller 4.13.0 True False False 37m dns 4.13.0 True False False 37m etcd 4.13.0 True False False 36m image-registry 4.13.0 True False False 31m ingress 4.13.0 True False False 30m insights 4.13.0 True False False 31m kube-apiserver 4.13.0 True False False 26m kube-controller-manager 4.13.0 True False False 36m kube-scheduler 4.13.0 True False False 36m kube-storage-version-migrator 4.13.0 True False False 37m machine-api 4.13.0 True False False 29m machine-approver 4.13.0 True False False 37m machine-config 4.13.0 True False False 36m marketplace 4.13.0 True False False 37m monitoring 4.13.0 True False False 29m network 4.13.0 True False False 38m node-tuning 4.13.0 True False False 37m openshift-apiserver 4.13.0 True False False 32m openshift-controller-manager 4.13.0 True False False 30m openshift-samples 4.13.0 True False False 32m operator-lifecycle-manager 4.13.0 True False False 37m operator-lifecycle-manager-catalog 4.13.0 True False False 37m operator-lifecycle-manager-packageserver 4.13.0 True False False 32m service-ca 4.13.0 True False False 38m storage 4.13.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator 
authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_ibm_z_and_ibm_linuxone/installing-restricted-networks-ibm-z-kvm
Chapter 1. Red Hat OpenShift support for Windows Containers overview
Chapter 1. Red Hat OpenShift support for Windows Containers overview Red Hat OpenShift support for Windows Containers is a feature providing the ability to run Windows compute nodes in an OpenShift Container Platform cluster. This is possible by using the Red Hat Windows Machine Config Operator (WMCO) to install and manage Windows nodes. With a Red Hat subscription, you can get support for running Windows workloads in OpenShift Container Platform. Windows instances deployed by the WMCO are configured with the containerd container runtime. For more information, see the release notes . You can add Windows nodes either by creating a compute machine set or by specifying existing Bring-Your-Own-Host (BYOH) Windows instances through a configuration map . Note Compute machine sets are not supported for bare metal or provider agnostic clusters. For workloads including both Linux and Windows, OpenShift Container Platform allows you to deploy Windows workloads running on Windows Server containers while also providing traditional Linux workloads hosted on Red Hat Enterprise Linux CoreOS (RHCOS) or Red Hat Enterprise Linux (RHEL). For more information, see getting started with Windows container workloads . You need the WMCO to run Windows workloads in your cluster. The WMCO orchestrates the process of deploying and managing Windows workloads on a cluster. For more information, see how to enable Windows container workloads . You can create a Windows MachineSet object to create infrastructure Windows machine sets and related machines so that you can move supported Windows workloads to the new Windows machines. You can create a Windows MachineSet object on multiple platforms. You can schedule Windows workloads to Windows compute nodes. You can perform Windows Machine Config Operator upgrades to ensure that your Windows nodes have the latest updates. You can remove a Windows node by deleting a specific machine. You can use Bring-Your-Own-Host (BYOH) Windows instances to repurpose Windows Server VMs and bring them to OpenShift Container Platform. BYOH Windows instances benefit users who are looking to mitigate major disruptions in the event that a Windows server goes offline. You can use BYOH Windows instances as nodes on OpenShift Container Platform 4.8 and later versions. You can disable Windows container workloads by performing the following: Uninstalling the Windows Machine Config Operator Deleting the Windows Machine Config Operator namespace
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/windows_container_support_for_openshift/windows-container-overview
Chapter 2. Preparing your Environment for Satellite Installation in an IPv6 Network
Chapter 2. Preparing your Environment for Satellite Installation in an IPv6 Network You can install and use Satellite in an IPv6 network. Before installing Satellite in an IPv6 network, view the limitations and ensure that you meet the requirements. To provision hosts in an IPv6 network, after installing Satellite, you must also configure Satellite for UEFI HTTP boot provisioning. For more information, see Section 4.5, "Configuring Satellite for UEFI HTTP Boot Provisioning in an IPv6 Network" . 2.1. Limitations of Satellite Installation in an IPv6 Network Satellite installation in an IPv6 network has the following limitations: You can install Satellite and Capsules in IPv6-only systems; dual-stack installation is not supported. Although Satellite provisioning templates include IPv6 support for PXE and HTTP (iPXE) provisioning, the only tested and certified provisioning workflow is UEFI HTTP Boot provisioning. This limitation only relates to users who plan to use Satellite to provision hosts. 2.2. Requirements for Satellite Installation in an IPv6 Network Before installing Satellite in an IPv6 network, ensure that you meet the following requirements: If you plan to provision hosts from Satellite or Capsules, you must install Satellite and Capsules on Red Hat Enterprise Linux version 7.9 or higher because these versions include the latest version of the grub2 package. You must deploy an external DHCP IPv6 server as a separate unmanaged service to bootstrap clients into GRUB2, which then configures IPv6 networking either by using DHCPv6 or by assigning a static IPv6 address. This is required because the DHCP server in Red Hat Enterprise Linux (ISC DHCP) does not provide an integration API for managing IPv6 records; therefore, the Capsule DHCP plug-in that provides DHCP management is limited to IPv4 subnets. You must deploy an external HTTP proxy server that supports both IPv4 and IPv6. This is required because the Red Hat Content Delivery Network distributes content only over IPv4 networks; therefore, you must use this proxy to pull content into Satellite on your IPv6 network. You must configure Satellite to use this dual-stack (supporting both IPv4 and IPv6) HTTP proxy server as the default proxy. For more information, see Adding a Default HTTP Proxy to Satellite .
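For illustration, the default HTTP proxy can be registered and selected with hammer; this is a hedged sketch, and the proxy name, URL, and port shown here are placeholders rather than values from this guide:

# Register the dual-stack proxy with Satellite (placeholder name and URL).
hammer http-proxy create --name "dual-stack-proxy" --url "http://proxy.example.com:8080"

# Select it as the default proxy for content synchronization.
hammer settings set --name content_default_http_proxy --value "dual-stack-proxy"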
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/installing_satellite_server_in_a_connected_network_environment/preparing-environment-for-installation-in-ipv6-network_satellite
2.3. Launching Red Hat Gluster Storage Instances
2.3. Launching Red Hat Gluster Storage Instances This section describes how to launch Red Hat Gluster Storage instances on Amazon Web Services. The supported configuration for three-way replication is up to 24 Amazon Elastic Block Store (EBS) volumes of equal size.

Table 2.1. Supported Configuration on Amazon Web Services
EBS Volume Type | Minimum Number of Volumes per Instance | Maximum Number of Volumes per Instance | EBS Volume Capacity Range
Magnetic | 1 | 24 | 1 GiB - 1 TiB
General purpose SSD | 1 | 24 | 1 GiB - 16 TiB
PIOPS SSD | 1 | 24 | 4 GiB - 16 TiB
Optimized HDD (ST1) | 1 | 24 | 500 GiB - 16 TiB
Cold HDD (SC1) | 1 | 24 | 500 GiB - 16 TiB

Creation of Red Hat Gluster Storage volume snapshots is supported on magnetic, general purpose SSD, and PIOPS EBS volumes. You can also browse the snapshot content using USS. For information on managing Red Hat Gluster Storage volume snapshots, see the chapter Managing Snapshots in the Red Hat Gluster Storage Administration Guide. Warning Tiering is considered deprecated as of Red Hat Gluster Storage 3.5. Red Hat no longer recommends its use and does not support tiering in new deployments and existing deployments that upgrade to Red Hat Gluster Storage 3.5.3. Warning Gluster-NFS is considered deprecated as of Red Hat Gluster Storage 3.5. Red Hat no longer recommends the use of Gluster-NFS and does not support its use in new deployments and existing deployments that upgrade to Red Hat Gluster Storage 3.5.3. Warning Using RDMA as a transport protocol is considered deprecated in Red Hat Gluster Storage 3.5. Red Hat no longer recommends its use and does not support it on new deployments and existing deployments that upgrade to Red Hat Gluster Storage 3.5.3. The Amazon Web Services environment supports the Red Hat Gluster Storage tiering feature. You can attach bricks created out of PIOPS or general purpose SSD volumes as a hot tier to an existing or new Red Hat Gluster Storage volume created out of magnetic EBS volumes. For information on creating tiered volumes, see the chapter Managing Tiering in the Red Hat Gluster Storage Administration Guide. To launch the Red Hat Gluster Storage Instance Navigate to the Amazon Web Services home page at http://aws.amazon.com . Log in to Amazon Web Services. The AWS Management Console screen displays. Click the EC2 option. The EC2 Management Console displays. Click Launch Instance . The Step 1: Choose an Amazon Machine Image (AMI) screen is displayed. Click My AMIs and select the Shared with me checkbox. Search for the required AMI, and click Select corresponding to the AMI. The Step 2: Choose an Instance Type screen displays. Look for the required type of instance, and select it using the radio button corresponding to the instance type. Click Next: Configure Instance Details . The Step 3: Configure Instance Details screen displays. Specify the configuration for your instance or continue with the default settings, and click Next: Add Storage . The Step 4: Add Storage screen displays. In the Step 4: Add Storage screen, specify the storage details, and click Next: Add Tags . The Step 5: Add Tags screen displays. Click Add and enter the required information in the Value field for each tag. Important Adding the Name tag is required. To add the Name tag, click "click to add a Name tag". You can use this name later to verify that the instance is operating correctly. Click Next: Configure Security Group . The Step 6: Configure Security Group screen displays. Create a new security group or select an existing security group.
Ensure that you open the following TCP port numbers in the new or selected security group: 22, to allow SSH access to the created instance. Click Review and Launch . The Step 7: Review Instance Launch screen displays. Review and edit the required settings, and click Launch . Choose an existing key pair or create a new key pair, and click Launch Instances . The Launch Status screen is displayed, indicating that the instance is launching.
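The console steps above can also be expressed with the AWS CLI; the following is a sketch only, and the security group ID, administrative CIDR range, and Name tag value are placeholders, not values from this guide:

# Open TCP port 22 in the security group so that you can SSH to the instance
# (placeholder group ID and administrative CIDR).
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 22 \
  --cidr 203.0.113.0/24

# Confirm that the launched instance is running, filtering on the Name tag
# that you added in Step 5.
aws ec2 describe-instances \
  --filters "Name=tag:Name,Values=my-rhgs-node" "Name=instance-state-name,Values=running"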
null
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/deployment_guide_for_public_cloud/launching_red_hat_storage_instances
14.5. web.xml
14.5. web.xml 14.5.1. Removing Unused Interfaces from web.xml (CA Only) Several legacy interfaces (for features like bulk issuance or the policy framework) are still included in the CA's web.xml file. However, since these features are deprecated and no longer in use, they can be removed from the CA configuration to increase security. Stop the CA. OR (if using nuxwdog watchdog ) Open the web files directory for the CA. For example: Back up the current web.xml file. Edit the web.xml file and remove the entire <servlet> entries for each of the following deprecated servlets: caadminEnroll cabulkissuance cacertbasedenrollment caenrollment caProxyBulkIssuance For example, remove the caadminEnroll servlet entry: <servlet> <servlet-name> caadminEnroll </servlet-name> <servlet-class> com.netscape.cms.servlet.cert.EnrollServlet </servlet-class> <init-param><param-name> GetClientCert </param-name> <param-value> false </param-value> </init-param> <init-param><param-name> successTemplate </param-name> <param-value> /admin/ca/EnrollSuccess.template </param-value> </init-param> <init-param><param-name> AuthzMgr </param-name> <param-value> BasicAclAuthz </param-value> </init-param> <init-param><param-name> authority </param-name> <param-value> ca </param-value> </init-param> <init-param><param-name> interface </param-name> <param-value> admin </param-value> </init-param> <init-param><param-name> ID </param-name> <param-value> caadminEnroll </param-value> </init-param> <init-param><param-name> resourceID </param-name> <param-value> certServer.admin.request.enrollment </param-value> </init-param> <init-param><param-name> AuthMgr </param-name> <param-value> passwdUserDBAuthMgr </param-value> </init-param> </servlet> After removing the servlet entries, remove the corresponding <servlet-mapping> entries. <servlet-mapping> <servlet-name> caadminEnroll </servlet-name> <url-pattern> /admin/ca/adminEnroll </url-pattern> </servlet-mapping> Remove the three <filter-mapping> entries for an end-entity request interface. <filter-mapping> <filter-name> EERequestFilter </filter-name> <url-pattern> /certbasedenrollment </url-pattern> </filter-mapping> <filter-mapping> <filter-name> EERequestFilter </filter-name> <url-pattern> /enrollment </url-pattern> </filter-mapping> <filter-mapping> <filter-name> EERequestFilter </filter-name> <url-pattern> /profileSubmit </url-pattern> </filter-mapping> Start the CA again. OR (if using nuxwdog watchdog )
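After editing, you can confirm that no references to the removed servlets remain; a small sketch, assuming you are still in the WEB-INF directory shown above:

# Each count should be 0 once the servlet, servlet-mapping, and
# filter-mapping entries have been removed.
for s in caadminEnroll cabulkissuance cacertbasedenrollment caenrollment caProxyBulkIssuance; do
  printf '%s: %s\n' "$s" "$(grep -c "$s" web.xml)"
done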
[ "pki-server stop instance_name", "systemctl stop pki-tomcatd-nuxwdog@ instance_name .service", "cd /var/lib/pki/ instance_name /ca/webapps/ca/WEB-INF", "cp web.xml web.xml.servlets", "<servlet> <servlet-name> caadminEnroll </servlet-name> <servlet-class> com.netscape.cms.servlet.cert.EnrollServlet </servlet-class> <init-param><param-name> GetClientCert </param-name> <param-value> false </param-value> </init-param> <init-param><param-name> successTemplate </param-name> <param-value> /admin/ca/EnrollSuccess.template </param-value> </init-param> <init-param><param-name> AuthzMgr </param-name> <param-value> BasicAclAuthz </param-value> </init-param> <init-param><param-name> authority </param-name> <param-value> ca </param-value> </init-param> <init-param><param-name> interface </param-name> <param-value> admin </param-value> </init-param> <init-param><param-name> ID </param-name> <param-value> caadminEnroll </param-value> </init-param> <init-param><param-name> resourceID </param-name> <param-value> certServer.admin.request.enrollment </param-value> </init-param> <init-param><param-name> AuthMgr </param-name> <param-value> passwdUserDBAuthMgr </param-value> </init-param> </servlet>", "<servlet-mapping> <servlet-name> caadminEnroll </servlet-name> <url-pattern> /admin/ca/adminEnroll </url-pattern> </servlet-mapping>", "<filter-mapping> <filter-name> EERequestFilter </filter-name> <url-pattern> /certbasedenrollment </url-pattern> </filter-mapping> <filter-mapping> <filter-name> EERequestFilter </filter-name> <url-pattern> /enrollment </url-pattern> </filter-mapping> <filter-mapping> <filter-name> EERequestFilter </filter-name> <url-pattern> /profileSubmit </url-pattern> </filter-mapping>", "pki-server start instance_name", "systemctl start pki-tomcatd-nuxwdog@ instance_name .service" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/web-xml
5.2. AutoCommitTxn Execution Property
5.2. AutoCommitTxn Execution Property Since user-level commands may execute multiple source commands, users can specify the AutoCommitTxn execution property to control the transactional behavior of a user command when it is not in a local or global transaction.

Table 5.2. AutoCommitTxn Settings
Setting | Description
OFF | Do not wrap each command in a transaction. Individual source commands may commit or roll back regardless of the success or failure of the overall command.
ON | Wrap each command in a transaction. This mode is the safest, but may introduce performance overhead.
DETECT | This is the default setting. Commands are automatically wrapped in a transaction, but only if the command seems to be transactionally unsafe.

The concept of command safety with respect to a transaction is determined by Red Hat JBoss Data Virtualization based upon the command type, the transaction isolation level, and the available metadata. A wrapping transaction is not needed if any of the following is true: A user command is fully pushed to the source. The user command is a SELECT (including XML) and the transaction isolation is neither REPEATABLE_READ nor SERIALIZABLE. The user command is a stored procedure, the transaction isolation is neither REPEATABLE_READ nor SERIALIZABLE, and the updating model count is zero. The update count may be set on all procedures as part of the procedure metadata in the model.
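For illustration, the execution property can be supplied when a connection is opened; a hedged sketch, assuming the Teiid JDBC driver's URL property syntax, with a placeholder host, port, and VDB name:

jdbc:teiid:MyVDB@mm://dv-host:31000;autoCommitTxn=OFF

Setting OFF here opts the connection out of transaction wrapping entirely, matching the first row of the table above.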
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/autocommittxn_execution_property
Chapter 1. Administrator metrics
Chapter 1. Administrator metrics 1.1. Serverless administrator metrics Metrics enable cluster administrators to monitor how OpenShift Serverless cluster components and workloads are performing. You can view different metrics for OpenShift Serverless by navigating to Dashboards in the web console Administrator perspective. 1.1.1. Prerequisites See the OpenShift Container Platform documentation on Managing metrics for information about enabling metrics for your cluster. You have access to an account with cluster administrator access (or dedicated administrator access for OpenShift Dedicated or Red Hat OpenShift Service on AWS). You have access to the Administrator perspective in the web console. Warning If Service Mesh is enabled with mTLS, metrics for Knative Serving are disabled by default because Service Mesh prevents Prometheus from scraping metrics. For information about resolving this issue, see Enabling Knative Serving metrics when using Service Mesh with mTLS . Scraping the metrics does not affect autoscaling of a Knative service, because scraping requests do not go through the activator. Consequently, no scraping takes place if no pods are running. 1.2. Serverless controller metrics The following metrics are emitted by any component that implements controller logic. These metrics show details about reconciliation operations and about the work queue to which reconciliation requests are added. Metric name Description Type Tags Unit work_queue_depth The depth of the work queue. Gauge reconciler Integer (no units) reconcile_count The number of reconcile operations. Counter reconciler , success Integer (no units) reconcile_latency The latency of reconcile operations. Histogram reconciler , success Milliseconds workqueue_adds_total The total number of add actions handled by the work queue. Counter name Integer (no units) workqueue_queue_latency_seconds The length of time an item stays in the work queue before being requested. Histogram name Seconds workqueue_retries_total The total number of retries that have been handled by the work queue. Counter name Integer (no units) workqueue_work_duration_seconds The length of time it takes to process an item from the work queue. Histogram name Seconds workqueue_unfinished_work_seconds The length of time that outstanding work queue items have been in progress. Histogram name Seconds workqueue_longest_running_processor_seconds The length of time that the longest outstanding work queue item has been in progress. Histogram name Seconds 1.3. Webhook metrics Webhook metrics report useful information about operations. For example, if a large number of operations fail, this might indicate an issue with a user-created resource. Metric name Description Type Tags Unit request_count The number of requests that are routed to the webhook. Counter admission_allowed , kind_group , kind_kind , kind_version , request_operation , resource_group , resource_namespace , resource_resource , resource_version Integer (no units) request_latencies The response time for a webhook request. Histogram admission_allowed , kind_group , kind_kind , kind_version , request_operation , resource_group , resource_namespace , resource_resource , resource_version Milliseconds 1.4. Knative Eventing metrics Cluster administrators can view the following metrics for Knative Eventing components. By aggregating the metrics from HTTP code, events can be separated into two categories: successful events (2xx) and failed events (5xx). 1.4.1.
Broker ingress metrics You can use the following metrics to debug the broker ingress, see how it is performing, and see which events are being dispatched by the ingress component. Metric name Description Type Tags Unit event_count Number of events received by a broker. Counter broker_name , event_type , namespace_name , response_code , response_code_class , unique_name Integer (no units) event_dispatch_latencies The time taken to dispatch an event to a channel. Histogram broker_name , event_type , namespace_name , response_code , response_code_class , unique_name Milliseconds 1.4.2. Broker filter metrics You can use the following metrics to debug broker filters, see how they are performing, and see which events are being dispatched by the filters. You can also measure the latency of the filtering action on an event. Metric name Description Type Tags Unit event_count Number of events received by a broker. Counter broker_name , container_name , filter_type , namespace_name , response_code , response_code_class , trigger_name , unique_name Integer (no units) event_dispatch_latencies The time taken to dispatch an event to a channel. Histogram broker_name , container_name , filter_type , namespace_name , response_code , response_code_class , trigger_name , unique_name Milliseconds event_processing_latencies The time it takes to process an event before it is dispatched to a trigger subscriber. Histogram broker_name , container_name , filter_type , namespace_name , trigger_name , unique_name Milliseconds 1.4.3. InMemoryChannel dispatcher metrics You can use the following metrics to debug InMemoryChannel channels, see how they are performing, and see which events are being dispatched by the channels. Metric name Description Type Tags Unit event_count Number of events dispatched by InMemoryChannel channels. Counter broker_name , container_name , filter_type , namespace_name , response_code , response_code_class , trigger_name , unique_name Integer (no units) event_dispatch_latencies The time taken to dispatch an event from an InMemoryChannel channel. Histogram broker_name , container_name , filter_type , namespace_name , response_code , response_code_class , trigger_name , unique_name Milliseconds 1.4.4. Event source metrics You can use the following metrics to verify that events have been delivered from the event source to the connected event sink. Metric name Description Type Tags Unit event_count Number of events sent by the event source. Counter broker_name , container_name , filter_type , namespace_name , response_code , response_code_class , trigger_name , unique_name Integer (no units) retry_event_count Number of retried events sent by the event source after initially failing to be delivered. Counter event_source , event_type , name , namespace_name , resource_group , response_code , response_code_class , response_error , response_timeout Integer (no units) 1.4.5. Knative Kafka broker metrics You can use the following metrics to debug and visualize the performance of Kafka broker. 
Metric name Description Type Tags Unit event_count_1_total{job="kafka-broker-receiver-sm-service", namespace="knative-eventing"} Number of events received by a broker Counter name broker name namespace_name broker namespace event_type event type response_code HTTP response code returned by the broker response_code_class HTTP response code class returned by the broker: 2xx, 3xx, 4xx, 5xx Dimensionless event_dispatch_latencies_ms_bucket{job="kafka-broker-receiver-sm-service", namespace="knative-eventing"} The time spent dispatching an event to a Kafka cluster Histogram name broker name namespace_name broker namespace event_type event type response_code HTTP response code returned by the broker response_code_class HTTP response code class returned by the broker: 2xx, 3xx, 4xx, 5xx Milliseconds kafka_broker_controller_consumer_group_expected_replicas Number of expected replicas for a given Kafka consumer group resource Gauge consumer_name resource name namespace_name resource namespace consumer_kind resource Kind, enum: KafkaSource , Trigger , Subscription Note In this context, resources refer to user facing entities such as Kafka source, trigger, and subscription. Avoid using internal or generated names when using these resources. Dimensionless kafka_broker_controller_consumer_group_ready_replicas Number of ready replicas for a given Kafka consumer group resource Gauge consumer_name resource name namespace_name resource namespace consumer_kind resource Kind, enum: KafkaSource , Trigger , Subscription Note In this context, resources refer to user facing entities such as Kafka source, trigger, and subscription. Avoid using internal or generated names when using these resources. Dimensionless 1.4.6. Knative Kafka trigger metrics You can use the following metrics to debug and visualize the performance of Kafka triggers. Metric name Description Type Tags Unit event_count_1_total{job="kafka-broker-dispatcher-sm-service", namespace="knative-eventing"} Number of events dispatched by a trigger to a subscriber Counter consumer_name trigger name namespace_name trigger namespace name broker name event_type event type response_code HTTP response code returned by the trigger subscriber service response_code_class HTTP response code class returned by the trigger subscriber service: 2xx, 3xx, 4xx, 5xx Dimensionless event_dispatch_latencies_ms_bucket{job="kafka-broker-dispatcher-sm-service", namespace="knative-eventing"} The time spent dispatching an event to a subscriber Histogram consumer_name trigger name namespace_name trigger namespace name broker name event_type event type response_code HTTP response code returned by the trigger subscriber service response_code_class HTTP response code class returned by the trigger subscriber service: 2xx, 3xx, 4xx, 5xx Milliseconds event_processing_latencies_ms_bucket{job="kafka-broker-dispatcher-sm-service", namespace="knative-eventing"} The time spent processing and filtering an event Histogram consumer_name trigger name namespace_name trigger namespace name broker name event_type event type Milliseconds 1.4.7. Knative Kafka channel metrics You can use the following metrics to debug and visualize the performance of Kafka channel. 
Metric name Description Type Tags Unit event_count_1_total{job="kafka-channel-receiver-sm-service", namespace="knative-eventing"} Number of events received by a Kafka channel Counter name Kafka channel name namespace_name Kafka channel namespace event_type event type response_code HTTP response code returned by the Kafka channel response_code_class HTTP response code class returned by the Kafka channel: 2xx, 3xx, 4xx, 5xx Dimensionless event_dispatch_latencies_ms_bucket{job="kafka-channel-receiver-sm-service", namespace="knative-eventing"} The time spent dispatching an event to a Kafka cluster Histogram name Kafka channel name namespace_name Kafka channel namespace event_type event type response_code HTTP response code returned by the Kafka channel response_code_class HTTP response code class returned by the Kafka channel: 2xx, 3xx, 4xx, 5xx Milliseconds 1.4.8. Knative Kafka subscription metrics You can use the following metrics to debug and visualize the performance of subscriptions associated with the Kafka channel. Metric name Description Type Tags Unit event_count_1_total{job="kafka-channel-dispatcher-sm-service", namespace="knative-eventing"} Number of events dispatched by a subscription to a subscriber Counter consumer_name Subscription name namespace_name Subscription namespace name KafkaChannel name event_type event type response_code HTTP response code returned by the Subscription subscriber service response_code_class HTTP response code class returned by the Subscription subscriber service: 2xx, 3xx, 4xx, 5xx Dimensionless event_dispatch_latencies_ms_bucket{job="kafka-channel-dispatcher-sm-service", namespace="knative-eventing"} The time spent dispatching an event to a subscriber Histogram consumer_name Subscription name namespace_name Subscription namespace name KafkaChannel name event_type event type response_code HTTP response code returned by the Subscription subscriber service response_code_class HTTP response code class returned by the Subscription subscriber service: 2xx, 3xx, 4xx, 5xx Milliseconds event_processing_latencies_ms_bucket{job="kafka-channel-dispatcher-sm-service", namespace="knative-eventing"} The time spent processing an event Histogram consumer_name Subscription name namespace_name Subscription namespace name KafkaChannel name event_type event type Dimensionless 1.4.9. Knative Kafka source metrics You can use the following metrics to debug and visualize the performance of Kafka sources. 
Metric name Description Type Tags Unit event_count_1_total{job="kafka-source-dispatcher-sm-service", namespace="knative-eventing"} Number of events dispatched by a Kafka source Counter consumer_name Kafka source name namespace_name Kafka source namespace name Kafka source name event_type event type response_code HTTP response code returned by the Kafka source sink service response_code_class HTTP response code class returned by the Kafka source sink service: 2xx, 3xx, 4xx, 5xx Dimensionless event_dispatch_latencies_ms_bucket{job="kafka-source-dispatcher-sm-service", namespace="knative-eventing"} The time spent dispatching an event to a sink Histogram consumer_name Kafka source name namespace_name Kafka source namespace name Kafka source name event_type event type response_code HTTP response code returned by the Kafka source sink service response_code_class HTTP response code class returned by the Kafka source sink service: 2xx, 3xx, 4xx, 5xx Milliseconds event_processing_latencies_ms_bucket{job="kafka-source-dispatcher-sm-service", namespace="knative-eventing"} The time spent processing an event Histogram consumer_name Kafka source name namespace_name Kafka source namespace name Kafka source name event_type event type Milliseconds kafka_broker_controller_consumer_group_expected_replicas Number of expected replicas for a given Kafka consumer group resource Gauge consumer_name resource name namespace_name resource namespace consumer_kind resource Kind, enum: KafkaSource , Trigger , Subscription Note In this context, resources refer to user facing entities such as Kafka source,trigger, and subscription. Avoid using internal or generated names when using these resources. Dimensionless kafka_broker_controller_consumer_group_ready_replicas Number of ready replicas for a given Kafka consumer group resource Gauge consumer_name resource name namespace_name resource namespace consumer_kind resource Kind, enum: KafkaSource , Trigger , Subscription Note In this context, resources refer to user facing entities such as Kafka source,trigger, and subscription. Avoid using internal or generated names when using these resources. Dimensionless 1.4.10. Knative Kafka sink metrics You can use the following metrics to debug and visualize the performance of Kafka sinks. Metric name Description Type Tags Unit event_count_1_total{job="kafka-sink-receiver-sm-service", namespace="knative-eventing"} Number of events received by a broker Counter name Kafka sink name namespace_name Kafka sink namespace event_type event type response_code HTTP response code returned by the Kafka sink response_code_class HTTP response code class returned by the Kafka sink: 2xx, 3xx, 4xx, 5xx Dimensionless event_dispatch_latencies_ms_bucket{job="kafka-sink-receiver-sm-service", namespace="knative-eventing"} The time spent dispatching an event to a Kafka cluster Histogram name Kafka sink name namespace_name Kafka sink namespace event_type event type response_code HTTP response code returned by the Kafka sink response_code_class HTTP response code class returned by the Kafka sink: 2xx, 3xx, 4xx, 5xx Milliseconds 1.5. Knative Serving metrics Cluster administrators can view the following metrics for Knative Serving components. 1.5.1. Activator metrics You can use the following metrics to understand how applications respond when traffic passes through the activator. Metric name Description Type Tags Unit request_concurrency The number of concurrent requests that are routed to the activator, or average concurrency over a reporting period. 
Gauge configuration_name , container_name , namespace_name , pod_name , revision_name , service_name Integer (no units) request_count The number of requests that are routed to activator. These are requests that have been fulfilled from the activator handler. Counter configuration_name , container_name , namespace_name , pod_name , response_code , response_code_class , revision_name , service_name , Integer (no units) request_latencies The response time in milliseconds for a fulfilled, routed request. Histogram configuration_name , container_name , namespace_name , pod_name , response_code , response_code_class , revision_name , service_name Milliseconds 1.5.2. Autoscaler metrics The autoscaler component exposes a number of metrics related to autoscaler behavior for each revision. For example, at any given time, you can monitor the targeted number of pods the autoscaler tries to allocate for a service, the average number of requests per second during the stable window, or whether the autoscaler is in panic mode if you are using the Knative pod autoscaler (KPA). Metric name Description Type Tags Unit desired_pods The number of pods the autoscaler tries to allocate for a service. Gauge configuration_name , namespace_name , revision_name , service_name Integer (no units) excess_burst_capacity The excess burst capacity served over the stable window. Gauge configuration_name , namespace_name , revision_name , service_name Integer (no units) stable_request_concurrency The average number of requests for each observed pod over the stable window. Gauge configuration_name , namespace_name , revision_name , service_name Integer (no units) panic_request_concurrency The average number of requests for each observed pod over the panic window. Gauge configuration_name , namespace_name , revision_name , service_name Integer (no units) target_concurrency_per_pod The number of concurrent requests that the autoscaler tries to send to each pod. Gauge configuration_name , namespace_name , revision_name , service_name Integer (no units) stable_requests_per_second The average number of requests-per-second for each observed pod over the stable window. Gauge configuration_name , namespace_name , revision_name , service_name Integer (no units) panic_requests_per_second The average number of requests-per-second for each observed pod over the panic window. Gauge configuration_name , namespace_name , revision_name , service_name Integer (no units) target_requests_per_second The number of requests-per-second that the autoscaler targets for each pod. Gauge configuration_name , namespace_name , revision_name , service_name Integer (no units) panic_mode This value is 1 if the autoscaler is in panic mode, or 0 if the autoscaler is not in panic mode. Gauge configuration_name , namespace_name , revision_name , service_name Integer (no units) requested_pods The number of pods that the autoscaler has requested from the Kubernetes cluster. Gauge configuration_name , namespace_name , revision_name , service_name Integer (no units) actual_pods The number of pods that are allocated and currently have a ready state. Gauge configuration_name , namespace_name , revision_name , service_name Integer (no units) not_ready_pods The number of pods that have a not ready state. Gauge configuration_name , namespace_name , revision_name , service_name Integer (no units) pending_pods The number of pods that are currently pending. 
Gauge configuration_name , namespace_name , revision_name , service_name Integer (no units) terminating_pods The number of pods that are currently terminating. Gauge configuration_name , namespace_name , revision_name , service_name Integer (no units) 1.5.3. Go runtime metrics Each Knative Serving control plane process emits a number of Go runtime memory statistics ( MemStats ). Note The name tag for each metric is an empty tag. Metric name Description Type Tags Unit go_alloc The number of bytes of allocated heap objects. This metric is the same as heap_alloc . Gauge name Integer (no units) go_total_alloc The cumulative bytes allocated for heap objects. Gauge name Integer (no units) go_sys The total bytes of memory obtained from the operating system. Gauge name Integer (no units) go_lookups The number of pointer lookups performed by the runtime. Gauge name Integer (no units) go_mallocs The cumulative count of heap objects allocated. Gauge name Integer (no units) go_frees The cumulative count of heap objects that have been freed. Gauge name Integer (no units) go_heap_alloc The number of bytes of allocated heap objects. Gauge name Integer (no units) go_heap_sys The number of bytes of heap memory obtained from the operating system. Gauge name Integer (no units) go_heap_idle The number of bytes in idle, unused spans. Gauge name Integer (no units) go_heap_in_use The number of bytes in spans that are currently in use. Gauge name Integer (no units) go_heap_released The number of bytes of physical memory returned to the operating system. Gauge name Integer (no units) go_heap_objects The number of allocated heap objects. Gauge name Integer (no units) go_stack_in_use The number of bytes in stack spans that are currently in use. Gauge name Integer (no units) go_stack_sys The number of bytes of stack memory obtained from the operating system. Gauge name Integer (no units) go_mspan_in_use The number of bytes of allocated mspan structures. Gauge name Integer (no units) go_mspan_sys The number of bytes of memory obtained from the operating system for mspan structures. Gauge name Integer (no units) go_mcache_in_use The number of bytes of allocated mcache structures. Gauge name Integer (no units) go_mcache_sys The number of bytes of memory obtained from the operating system for mcache structures. Gauge name Integer (no units) go_bucket_hash_sys The number of bytes of memory in profiling bucket hash tables. Gauge name Integer (no units) go_gc_sys The number of bytes of memory in garbage collection metadata. Gauge name Integer (no units) go_other_sys The number of bytes of memory in miscellaneous, off-heap runtime allocations. Gauge name Integer (no units) go_next_gc The target heap size of the next garbage collection cycle. Gauge name Integer (no units) go_last_gc The time that the last garbage collection was completed in Epoch or Unix time . Gauge name Nanoseconds go_total_gc_pause_ns The cumulative time in garbage collection stop-the-world pauses since the program started. Gauge name Nanoseconds go_num_gc The number of completed garbage collection cycles. Gauge name Integer (no units) go_num_forced_gc The number of garbage collection cycles that were forced due to an application calling the garbage collection function. Gauge name Integer (no units) go_gc_cpu_fraction The fraction of the available CPU time of the program that has been used by the garbage collector since the program started. Gauge name Integer (no units)
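As an illustrative sketch of how to work with the metrics in this section, the following commands run a PromQL query through the OpenShift monitoring stack; the Thanos Querier route, the token retrieval, and the 5-minute window are assumptions about a typical cluster, not values from this documentation:
TOKEN=$(oc whoami -t)
HOST=$(oc -n openshift-monitoring get route thanos-querier -o jsonpath='{.spec.host}')
# 95th percentile event dispatch latency per Kafka channel over 5 minutes,
# computed from the event_dispatch_latencies_ms_bucket histogram shown earlier
curl -sk -H "Authorization: Bearer $TOKEN" "https://$HOST/api/v1/query" \
  --data-urlencode 'query=histogram_quantile(0.95, sum(rate(event_dispatch_latencies_ms_bucket{job="kafka-channel-receiver-sm-service"}[5m])) by (le, name))'
The same pattern applies to the activator, autoscaler, and Go runtime metrics above; only the metric name and label selector change.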
null
https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.35/html/observability/administrator-metrics
Chapter 17. Using the Red Hat Marketplace
Chapter 17. Using the Red Hat Marketplace The Red Hat Marketplace is an open cloud marketplace that makes it easy to discover and access certified software for container-based environments that run on public clouds and on-premises. 17.1. Red Hat Marketplace features Cluster administrators can use the Red Hat Marketplace to manage software on OpenShift Container Platform, give developers self-service access to deploy application instances, and correlate application usage against a quota. 17.1.1. Connect OpenShift Container Platform clusters to the Marketplace Cluster administrators can install a common set of applications on OpenShift Container Platform clusters that connect to the Marketplace. They can also use the Marketplace to track cluster usage against subscriptions or quotas. Users that they add by using the Marketplace have their product usage tracked and billed to their organization. During the cluster connection process , a Marketplace Operator is installed that updates the image registry secret, manages the catalog, and reports application usage. 17.1.2. Install applications Cluster administrators can install Marketplace applications from within OperatorHub in OpenShift Container Platform, or from the Marketplace web application . You can access installed applications from the web console by clicking Operators > Installed Operators . 17.1.3. Deploy applications from different perspectives You can deploy Marketplace applications from the web console's Administrator and Developer perspectives. The Developer perspective Developers can access newly installed capabilities by using the Developer perspective. For example, after a database Operator is installed, a developer can create an instance from the catalog within their project. Database usage is aggregated and reported to the cluster administrator. This perspective does not include Operator installation and application usage tracking. The Administrator perspective Cluster administrators can access Operator installation and application usage information from the Administrator perspective. They can also launch application instances by browsing custom resource definitions (CRDs) in the Installed Operators list.
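For readers who prefer the CLI, the following is a minimal sketch of the equivalent of clicking Operators > Installed Operators; it relies on standard Operator Lifecycle Manager resource types rather than anything Marketplace-specific:
# List the ClusterServiceVersions that back installed Operators
oc get clusterserviceversions --all-namespaces
# List the Subscriptions created when the Operators were installed
oc get subscriptions --all-namespaces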
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/building_applications/red-hat-marketplace
Chapter 3. Prerequisites
Chapter 3. Prerequisites For the solution to work, the following requirements must be met. All nodes must have the same: number of CPUs and RAM, software configuration, RHEL release (at least RHEL 8.6 is required), firewall settings, and SAP HANA release (SAP HANA 2.0 SPS04 or later). The pacemaker packages are installed only on the cluster nodes and must use the same version of resource-agents-sap-hana (0.162.1 or later). To be able to support SAP HANA Multitarget System Replication , refer to Add SAP HANA Multitarget System Replication autoregister support . Also, set register_secondaries_on_takeover=true and log_mode=normal , as shown in the check below. The initial setup is based on the installation guide, Automating SAP HANA Scale-Up System Replication using the RHEL HA Add-On . The system replication configuration of all SAP HANA instances is based on SAP requirements. For more information, refer to the guidelines from SAP based on the SAP HANA Administration Guide .
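As a quick way to confirm the two system replication settings mentioned above, the following sketch greps them out of global.ini on a node; the SID (RH1), the administrative user name, and therefore the file path are assumptions for this example:
su - rh1adm -c "grep -E 'register_secondaries_on_takeover|log_mode' \
  /usr/sap/RH1/SYS/global/hdb/custom/config/global.ini"
# Expected values:
#   register_secondaries_on_takeover = true
#   log_mode = normal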
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/8/html/configuring_sap_hana_scale-up_multitarget_system_replication_for_disaster_recovery/asmb_preconditions_v8-configuring-hana-scale-up-multitarget-system-replication-disaster-recovery
Chapter 2. Creating the required Alibaba Cloud resources
Chapter 2. Creating the required Alibaba Cloud resources Before you install OpenShift Container Platform, you must use the Alibaba Cloud console to create a Resource Access Management (RAM) user that has sufficient permissions to install OpenShift Container Platform into your Alibaba Cloud. This user must also have permissions to create new RAM users. You can also configure and use the ccoctl tool to create new credentials for the OpenShift Container Platform components with the permissions that they require. Important Alibaba Cloud on OpenShift Container Platform is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 2.1. Creating the required RAM user You must have an Alibaba Cloud Resource Access Management (RAM) user for the installation that has sufficient privileges. You can use the Alibaba Cloud Resource Access Management console to create a new user or modify an existing user. Later, you create credentials in OpenShift Container Platform based on this user's permissions. When you configure the RAM user, be sure to consider the following requirements: The user must have an Alibaba Cloud AccessKey ID and AccessKey secret pair. For a new user, you can select Open API Access for the Access Mode when creating the user. This mode generates the required AccessKey pair. For an existing user, you can add an AccessKey pair or you can obtain the AccessKey pair for that user. Note When created, the AccessKey secret is displayed only once. You must immediately save the AccessKey pair because the AccessKey pair is required for API calls. Add the AccessKey ID and secret to the ~/.alibabacloud/credentials file on your local computer. Alibaba Cloud automatically creates this file when you log in to the console. The Cloud Credential Operator (CCO) utility, ccoctl, uses these credentials when processing Credential Request objects. For example: [default] # Default client type = access_key # Certification type: access_key access_key_id = LTAI5t8cefXKmt # Key 1 access_key_secret = wYx56mszAN4Uunfh # Secret 1 Add your AccessKeyID and AccessKeySecret here. The RAM user must have the AdministratorAccess policy to ensure that the account has sufficient permission to create the OpenShift Container Platform cluster. This policy grants permissions to manage all Alibaba Cloud resources. When you attach the AdministratorAccess policy to a RAM user, you grant that user full access to all Alibaba Cloud services and resources. If you do not want to create a user with full access, create a custom policy with the following actions that you can add to your RAM user for installation. These actions are sufficient to install OpenShift Container Platform. Tip You can copy and paste the following JSON code into the Alibaba Cloud console to create a custom policy. For information on creating custom policies, see Create a custom policy in the Alibaba Cloud documentation. Example 2.1.
Example custom policy JSON file { "Version": "1", "Statement": [ { "Action": [ "tag:ListTagResources", "tag:UntagResources" ], "Resource": "*", "Effect": "Allow" }, { "Action": [ "vpc:DescribeVpcs", "vpc:DeleteVpc", "vpc:DescribeVSwitches", "vpc:DeleteVSwitch", "vpc:DescribeEipAddresses", "vpc:DescribeNatGateways", "vpc:ReleaseEipAddress", "vpc:DeleteNatGateway", "vpc:DescribeSnatTableEntries", "vpc:CreateSnatEntry", "vpc:AssociateEipAddress", "vpc:ListTagResources", "vpc:TagResources", "vpc:DescribeVSwitchAttributes", "vpc:CreateVSwitch", "vpc:CreateNatGateway", "vpc:DescribeRouteTableList", "vpc:CreateVpc", "vpc:AllocateEipAddress", "vpc:ListEnhanhcedNatGatewayAvailableZones" ], "Resource": "*", "Effect": "Allow" }, { "Action": [ "ecs:ModifyInstanceAttribute", "ecs:DescribeSecurityGroups", "ecs:DeleteSecurityGroup", "ecs:DescribeSecurityGroupReferences", "ecs:DescribeSecurityGroupAttribute", "ecs:RevokeSecurityGroup", "ecs:DescribeInstances", "ecs:DeleteInstances", "ecs:DescribeNetworkInterfaces", "ecs:DescribeInstanceRamRole", "ecs:DescribeUserData", "ecs:DescribeDisks", "ecs:ListTagResources", "ecs:AuthorizeSecurityGroup", "ecs:RunInstances", "ecs:TagResources", "ecs:ModifySecurityGroupPolicy", "ecs:CreateSecurityGroup", "ecs:DescribeAvailableResource", "ecs:DescribeRegions", "ecs:AttachInstanceRamRole" ], "Resource": "*", "Effect": "Allow" }, { "Action": [ "pvtz:DescribeRegions", "pvtz:DescribeZones", "pvtz:DeleteZone", "pvtz:DeleteZoneRecord", "pvtz:BindZoneVpc", "pvtz:DescribeZoneRecords", "pvtz:AddZoneRecord", "pvtz:SetZoneRecordStatus", "pvtz:DescribeZoneInfo", "pvtz:DescribeSyncEcsHostTask", "pvtz:AddZone" ], "Resource": "*", "Effect": "Allow" }, { "Action": [ "slb:DescribeLoadBalancers", "slb:SetLoadBalancerDeleteProtection", "slb:DeleteLoadBalancer", "slb:SetLoadBalancerModificationProtection", "slb:DescribeLoadBalancerAttribute", "slb:AddBackendServers", "slb:DescribeLoadBalancerTCPListenerAttribute", "slb:SetLoadBalancerTCPListenerAttribute", "slb:StartLoadBalancerListener", "slb:CreateLoadBalancerTCPListener", "slb:ListTagResources", "slb:TagResources", "slb:CreateLoadBalancer" ], "Resource": "*", "Effect": "Allow" }, { "Action": [ "ram:ListResourceGroups", "ram:DeleteResourceGroup", "ram:ListPolicyAttachments", "ram:DetachPolicy", "ram:GetResourceGroup", "ram:CreateResourceGroup", "ram:DeleteRole", "ram:GetPolicy", "ram:DeletePolicy", "ram:ListPoliciesForRole", "ram:CreateRole", "ram:AttachPolicyToRole", "ram:GetRole", "ram:CreatePolicy", "ram:CreateUser", "ram:DetachPolicyFromRole", "ram:CreatePolicyVersion", "ram:DetachPolicyFromUser", "ram:ListPoliciesForUser", "ram:AttachPolicyToUser", "ram:CreateUser", "ram:GetUser", "ram:DeleteUser", "ram:CreateAccessKey", "ram:ListAccessKeys", "ram:DeleteAccessKey", "ram:ListUsers", "ram:ListPolicyVersions" ], "Resource": "*", "Effect": "Allow" }, { "Action": [ "oss:DeleteBucket", "oss:DeleteBucketTagging", "oss:GetBucketTagging", "oss:GetBucketCors", "oss:GetBucketPolicy", "oss:GetBucketLifecycle", "oss:GetBucketReferer", "oss:GetBucketTransferAcceleration", "oss:GetBucketLog", "oss:GetBucketWebSite", "oss:GetBucketInfo", "oss:PutBucketTagging", "oss:PutBucket", "oss:OpenOssService", "oss:ListBuckets", "oss:GetService", "oss:PutBucketACL", "oss:GetBucketLogging", "oss:ListObjects", "oss:GetObject", "oss:PutObject", "oss:DeleteObject" ], "Resource": "*", "Effect": "Allow" }, { "Action": [ "alidns:DescribeDomainRecords", "alidns:DeleteDomainRecord", "alidns:DescribeDomains", "alidns:DescribeDomainRecordInfo", "alidns:AddDomainRecord", 
"alidns:SetDomainRecordStatus" ], "Resource": "*", "Effect": "Allow" }, { "Action": "bssapi:CreateInstance", "Resource": "*", "Effect": "Allow" }, { "Action": "ram:PassRole", "Resource": "*", "Effect": "Allow", "Condition": { "StringEquals": { "acs:Service": "ecs.aliyuncs.com" } } } ] } For more information about creating a RAM user and granting permissions, see Create a RAM user and Grant permissions to a RAM user in the Alibaba Cloud documentation. 2.2. Configuring the Cloud Credential Operator utility To assign RAM users and policies that provide long-lived RAM AccessKeys (AKs) for each in-cluster component, extract and prepare the Cloud Credential Operator (CCO) utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). Procedure Obtain the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. Additional resources Preparing to update a cluster with manually maintained credentials 2.3. steps Install a cluster on Alibaba Cloud infrastructure that is provisioned by the OpenShift Container Platform installation program, by using one of the following methods: Installing a cluster quickly on Alibaba Cloud : You can install a cluster quickly by using the default configuration options. Installing a customized cluster on Alibaba Cloud : The installation program allows for some customization to be applied at the installation stage. Many other customization options are available post-installation .
[ "Default client type = access_key # Certification type: access_key access_key_id = LTAI5t8cefXKmt # Key 1 access_key_secret = wYx56mszAN4Uunfh # Secret", "{ \"Version\": \"1\", \"Statement\": [ { \"Action\": [ \"tag:ListTagResources\", \"tag:UntagResources\" ], \"Resource\": \"*\", \"Effect\": \"Allow\" }, { \"Action\": [ \"vpc:DescribeVpcs\", \"vpc:DeleteVpc\", \"vpc:DescribeVSwitches\", \"vpc:DeleteVSwitch\", \"vpc:DescribeEipAddresses\", \"vpc:DescribeNatGateways\", \"vpc:ReleaseEipAddress\", \"vpc:DeleteNatGateway\", \"vpc:DescribeSnatTableEntries\", \"vpc:CreateSnatEntry\", \"vpc:AssociateEipAddress\", \"vpc:ListTagResources\", \"vpc:TagResources\", \"vpc:DescribeVSwitchAttributes\", \"vpc:CreateVSwitch\", \"vpc:CreateNatGateway\", \"vpc:DescribeRouteTableList\", \"vpc:CreateVpc\", \"vpc:AllocateEipAddress\", \"vpc:ListEnhanhcedNatGatewayAvailableZones\" ], \"Resource\": \"*\", \"Effect\": \"Allow\" }, { \"Action\": [ \"ecs:ModifyInstanceAttribute\", \"ecs:DescribeSecurityGroups\", \"ecs:DeleteSecurityGroup\", \"ecs:DescribeSecurityGroupReferences\", \"ecs:DescribeSecurityGroupAttribute\", \"ecs:RevokeSecurityGroup\", \"ecs:DescribeInstances\", \"ecs:DeleteInstances\", \"ecs:DescribeNetworkInterfaces\", \"ecs:DescribeInstanceRamRole\", \"ecs:DescribeUserData\", \"ecs:DescribeDisks\", \"ecs:ListTagResources\", \"ecs:AuthorizeSecurityGroup\", \"ecs:RunInstances\", \"ecs:TagResources\", \"ecs:ModifySecurityGroupPolicy\", \"ecs:CreateSecurityGroup\", \"ecs:DescribeAvailableResource\", \"ecs:DescribeRegions\", \"ecs:AttachInstanceRamRole\" ], \"Resource\": \"*\", \"Effect\": \"Allow\" }, { \"Action\": [ \"pvtz:DescribeRegions\", \"pvtz:DescribeZones\", \"pvtz:DeleteZone\", \"pvtz:DeleteZoneRecord\", \"pvtz:BindZoneVpc\", \"pvtz:DescribeZoneRecords\", \"pvtz:AddZoneRecord\", \"pvtz:SetZoneRecordStatus\", \"pvtz:DescribeZoneInfo\", \"pvtz:DescribeSyncEcsHostTask\", \"pvtz:AddZone\" ], \"Resource\": \"*\", \"Effect\": \"Allow\" }, { \"Action\": [ \"slb:DescribeLoadBalancers\", \"slb:SetLoadBalancerDeleteProtection\", \"slb:DeleteLoadBalancer\", \"slb:SetLoadBalancerModificationProtection\", \"slb:DescribeLoadBalancerAttribute\", \"slb:AddBackendServers\", \"slb:DescribeLoadBalancerTCPListenerAttribute\", \"slb:SetLoadBalancerTCPListenerAttribute\", \"slb:StartLoadBalancerListener\", \"slb:CreateLoadBalancerTCPListener\", \"slb:ListTagResources\", \"slb:TagResources\", \"slb:CreateLoadBalancer\" ], \"Resource\": \"*\", \"Effect\": \"Allow\" }, { \"Action\": [ \"ram:ListResourceGroups\", \"ram:DeleteResourceGroup\", \"ram:ListPolicyAttachments\", \"ram:DetachPolicy\", \"ram:GetResourceGroup\", \"ram:CreateResourceGroup\", \"ram:DeleteRole\", \"ram:GetPolicy\", \"ram:DeletePolicy\", \"ram:ListPoliciesForRole\", \"ram:CreateRole\", \"ram:AttachPolicyToRole\", \"ram:GetRole\", \"ram:CreatePolicy\", \"ram:CreateUser\", \"ram:DetachPolicyFromRole\", \"ram:CreatePolicyVersion\", \"ram:DetachPolicyFromUser\", \"ram:ListPoliciesForUser\", \"ram:AttachPolicyToUser\", \"ram:CreateUser\", \"ram:GetUser\", \"ram:DeleteUser\", \"ram:CreateAccessKey\", \"ram:ListAccessKeys\", \"ram:DeleteAccessKey\", \"ram:ListUsers\", \"ram:ListPolicyVersions\" ], \"Resource\": \"*\", \"Effect\": \"Allow\" }, { \"Action\": [ \"oss:DeleteBucket\", \"oss:DeleteBucketTagging\", \"oss:GetBucketTagging\", \"oss:GetBucketCors\", \"oss:GetBucketPolicy\", \"oss:GetBucketLifecycle\", \"oss:GetBucketReferer\", \"oss:GetBucketTransferAcceleration\", \"oss:GetBucketLog\", \"oss:GetBucketWebSite\", \"oss:GetBucketInfo\", 
\"oss:PutBucketTagging\", \"oss:PutBucket\", \"oss:OpenOssService\", \"oss:ListBuckets\", \"oss:GetService\", \"oss:PutBucketACL\", \"oss:GetBucketLogging\", \"oss:ListObjects\", \"oss:GetObject\", \"oss:PutObject\", \"oss:DeleteObject\" ], \"Resource\": \"*\", \"Effect\": \"Allow\" }, { \"Action\": [ \"alidns:DescribeDomainRecords\", \"alidns:DeleteDomainRecord\", \"alidns:DescribeDomains\", \"alidns:DescribeDomainRecordInfo\", \"alidns:AddDomainRecord\", \"alidns:SetDomainRecordStatus\" ], \"Resource\": \"*\", \"Effect\": \"Allow\" }, { \"Action\": \"bssapi:CreateInstance\", \"Resource\": \"*\", \"Effect\": \"Allow\" }, { \"Action\": \"ram:PassRole\", \"Resource\": \"*\", \"Effect\": \"Allow\", \"Condition\": { \"StringEquals\": { \"acs:Service\": \"ecs.aliyuncs.com\" } } } ] }", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret", "chmod 775 ccoctl", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command." ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_alibaba/manually-creating-alibaba-ram
2.3. Multipath Device Attributes
2.3. Multipath Device Attributes In addition to the user_friendly_names and alias options, a multipath device has numerous attributes. You can modify these attributes for a specific multipath device by creating an entry for that device in the multipaths section of the multipath configuration file. For information on the multipaths section of the multipath configuration file, see Section 4.4, "Multipaths Device Configuration Attributes" .
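As an illustrative sketch, an entry in the multipaths section has the following shape; the WWID and alias values are placeholders invented for this example:
# Append a per-device entry to /etc/multipath.conf
cat >> /etc/multipath.conf << 'EOF'
multipaths {
        multipath {
                wwid 3600508b4000156d70001200000b0000
                alias yellow
        }
}
EOF
# Reload the configuration so the new entry takes effect
systemctl reload multipathd.service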
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/dm_multipath/multipath_device_attributes
Chapter 7. Configuring Data Grid to handle network partitions
Chapter 7. Configuring Data Grid to handle network partitions Data Grid clusters can split into network partitions in which subsets of nodes become isolated from each other. This condition results in loss of availability or consistency for clustered caches. Data Grid automatically detects crashed nodes and resolves conflicts to merge caches back together. 7.1. Split clusters and network partitions Network partitions are the result of error conditions in the running environment, such as when a network router crashes. When a cluster splits into partitions, nodes create a JGroups cluster view that includes only the nodes in that partition. This condition means that nodes in one partition can operate independently of nodes in the other partition. Detecting a split To automatically detect network partitions, Data Grid uses the FD_ALL protocol in the default JGroups stack to determine when nodes leave the cluster abruptly. Note Data Grid cannot detect what causes nodes to leave abruptly. This can happen not only when there is a network failure but also for other reasons, such as when Garbage Collection (GC) pauses the JVM. Data Grid suspects that nodes have crashed after the following number of milliseconds: When it detects that the cluster is split into network partitions, Data Grid uses a strategy for handling cache operations. Depending on your application requirements Data Grid can: Allow read and/or write operations for availability Deny read and write operations for consistency Merging partitions together To fix a split cluster, Data Grid merges the partitions back together. During the merge, Data Grid uses the .equals() method for values of cache entries to determine if any conflicts exist. To resolve any conflicts between replicas it finds on partitions, Data Grid uses a merge policy that you can configure. 7.1.1. Data consistency in a split cluster Network outages or errors that cause Data Grid clusters to split into partitions can result in data loss or consistency issues regardless of any handling strategy or merge policy. Between the split and detection If a write operation takes place on a node that is in a minor partition when a split occurs, and before Data Grid detects the split, that value is lost when Data Grid transfers state to that minor partition during the merge. In the event that all partitions are in the DEGRADED mode that value is not lost because no state transfer occurs but the entry can have an inconsistent value. For transactional caches write operations that are in progress when the split occurs can be committed on some nodes and rolled back on other nodes, which also results in inconsistent values. During the split and the time that Data Grid detects it, it is possible to get stale reads from a cache in a minor partition that has not yet entered DEGRADED mode. During the merge When Data Grid starts removing partitions nodes reconnect to the cluster with a series of merge events. Before this merge process completes it is possible that write operations on transactional caches succeed on some nodes but not others, which can potentially result in stale reads until the entries are updated. 7.2. Cache availability and degraded mode To preserve data consistency, Data Grid can put caches into DEGRADED mode if you configure them to use either the DENY_READ_WRITES or ALLOW_READS partition handling strategy. Data Grid puts caches in a partition into DEGRADED mode when the following conditions are true: At least one segment has lost all owners. 
This happens when a number of nodes equal to or greater than the number of owners for a distributed cache have left the cluster. There is not a majority of nodes in the partition. A majority of nodes is any number greater than half the total number of nodes in the cluster from the most recent stable topology, which was the last time a cluster rebalancing operation completed successfully. When caches are in DEGRADED mode, Data Grid: Allows read and write operations only if all replicas of an entry reside in the same partition. Denies read and write operations and throws an AvailabilityException if the partition does not include all replicas of an entry. Note With the ALLOW_READS strategy, Data Grid allows read operations on caches in DEGRADED mode. DEGRADED mode guarantees consistency by ensuring that write operations do not take place for the same key in different partitions. Additionally DEGRADED mode prevents stale read operations that happen when a key is updated in one partition but read in another partition. If all partitions are in DEGRADED mode then the cache becomes available again after merge only if the cluster contains a majority of nodes from the most recent stable topology and there is at least one replica of each entry. When the cluster has at least one replica of each entry, no keys are lost and Data Grid can create new replicas based on the number of owners during cluster rebalancing. In some cases a cache in one partition can remain available while entering DEGRADED mode in another partition. When this happens the available partition continues cache operations as normal and Data Grid attempts to rebalance data across those nodes. To merge the cache together Data Grid always transfers state from the available partition to the partition in DEGRADED mode. 7.2.1. Degraded cache recovery example This topic illustrates how Data Grid recovers from split clusters with caches that use the DENY_READ_WRITES partition handling strategy. As an example, a Data Grid cluster has four nodes and includes a distributed cache with two replicas for each entry ( owners=2 ). There are four entries in the cache, k1 , k2 , k3 and k4 . With the DENY_READ_WRITES strategy, if the cluster splits into partitions, Data Grid allows cache operations only if all replicas of an entry are in the same partition. In the following diagram, while the cache is split into partitions, Data Grid allows read and write operations for k1 on partition 1 and k4 on partition 2. Because there is only one replica for k2 and k3 on either partition 1 or partition 2, Data Grid denies read and write operations for those entries. When network conditions allow the nodes to re-join the same cluster view, Data Grid merges the partitions without state transfer and restores normal cache operations. 7.2.2. Verifying cache availability during network partitions Determine if caches on Data Grid clusters are in AVAILABLE mode or DEGRADED mode during a network partition. When Data Grid clusters split into partitions, nodes in those partitions can enter DEGRADED mode to guarantee data consistency. In DEGRADED mode clusters do not allow cache operations resulting in loss of availability. Procedure Verify availability of clustered caches in network partitions in one of the following ways: Check Data Grid logs for ISPN100011 messages that indicate if the cluster is available or if at least one cache is in DEGRADED mode. Get the availability of remote caches through the Data Grid Console or with the REST API. 
Open the Data Grid Console in any browser, select the Data Container tab, and then locate the availability status in the Health column. Retrieve cache health from the REST API. Programmatically retrieve the availability of embedded caches with the getAvailability() method in the AdvancedCache API. Additional resources REST API: Getting cluster health org.infinispan.AdvancedCache.getAvailability Enum AvailabilityMode 7.2.3. Making caches available Make caches available for read and write operations by forcing them out of DEGRADED mode. Important You should force clusters out of DEGRADED mode only if your deployment can tolerate data loss and inconsistency. Procedure Make caches available in one of the following ways: Open the Data Grid Console and select the Make available option. Change the availability of remote caches with the REST API. Programmatically change the availability of embedded caches with the AdvancedCache API. AdvancedCache ac = cache.getAdvancedCache(); // Retrieve cache availability boolean available = ac.getAvailability() == AvailabilityMode.AVAILABLE; // Make the cache available if (!available) { ac.setAvailability(AvailabilityMode.AVAILABLE); } Additional resources REST API: Setting cache availability org.infinispan.AdvancedCache 7.3. Configuring partition handling Configure Data Grid to use a partition handling strategy and merge policy so it can resolve split clusters when network issues occur. By default Data Grid uses a strategy that provides availability at the cost of lowering consistency guarantees for your data. When a cluster splits due to a network partition clients can continue to perform read and write operations on caches. If you require consistency over availability, you can configure Data Grid to deny read and write operations while the cluster is split into partitions. Alternatively you can allow read operations and deny write operations. You can also specify custom merge policy implementations that configure Data Grid to resolve splits with custom logic tailored to your requirements. Prerequisites Have a Data Grid cluster where you can create either a replicated or distributed cache. Note Partition handling configuration applies only to replicated and distributed caches. Procedure Open your Data Grid configuration for editing. Add partition handling configuration to your cache with either the partition-handling element or partitionHandling() method. Specify a strategy for Data Grid to use when the cluster splits into partitions with the when-split attribute or whenSplit() method. The default partition handling strategy is ALLOW_READ_WRITES so caches remain available. If your use case requires data consistency over cache availability, specify the DENY_READ_WRITES strategy. Specify a policy that Data Grid uses to resolve conflicting entries when merging partitions with the merge-policy attribute or mergePolicy() method. By default Data Grid does not resolve conflicts on merge. Save the changes to your Data Grid configuration.
Partition handling configuration XML <distributed-cache> <partition-handling when-split="DENY_READ_WRITES" merge-policy="PREFERRED_ALWAYS"/> </distributed-cache> JSON { "distributed-cache": { "partition-handling" : { "when-split": "DENY_READ_WRITES", "merge-policy": "PREFERRED_ALWAYS" } } } YAML distributedCache: partitionHandling: whenSplit: DENY_READ_WRITES mergePolicy: PREFERRED_ALWAYS ConfigurationBuilder ConfigurationBuilder builder = new ConfigurationBuilder(); builder.clustering().cacheMode(CacheMode.DIST_SYNC) .partitionHandling() .whenSplit(PartitionHandling.DENY_READ_WRITES) .mergePolicy(MergePolicy.PREFERRED_NON_NULL); 7.4. Partition handling strategies Partition handling strategies control whether Data Grid allows read and write operations when a cluster is split. The strategy you configure determines whether you get cache availability or data consistency. Table 7.1. Partition handling strategies Strategy Description Availability or consistency ALLOW_READ_WRITES Data Grid allows read and write operations on caches while a cluster is split into network partitions. Nodes in each partition remain available and function independently of each other. This is the default partition handling strategy. Availability DENY_READ_WRITES Data Grid allows read and write operations only if all replicas of an entry are in the partition. If a partition does not include all replicas of an entry, Data Grid prevents cache operations for that entry. Consistency ALLOW_READS Data Grid allows read operations for entries and prevents write operations unless the partition includes all replicas of an entry. Consistency with read availability 7.5. Merge policies Merge policies control how Data Grid resolves conflicts between replicas when bringing cluster partitions together. You can use one of the merge policies that Data Grid provides or you can create a custom implementation of the EntryMergePolicy API. Table 7.2. Data Grid merge policies Merge policy Description Considerations NONE Data Grid does not resolve conflicts when merging split clusters. This is the default merge policy. Nodes drop segments for which they are not the primary owner, which can result in data loss. PREFERRED_ALWAYS Data Grid finds the value that exists on the majority of nodes in the cluster and uses it to resolve conflicts. Data Grid could use stale values to resolve conflicts. Even if an entry is available on the majority of nodes, the last update could happen on the minority partition. PREFERRED_NON_NULL Data Grid uses the first non-null value that it finds on the cluster to resolve conflicts. Data Grid could restore deleted entries. REMOVE_ALL Data Grid removes any conflicting entries from the cache. Results in loss of any entries that have different values when merging split clusters. 7.6. Configuring custom merge policies Configure Data Grid to use custom implementations of the EntryMergePolicy API when handling network partitions. Prerequisites Implement the EntryMergePolicy API. public class CustomMergePolicy implements EntryMergePolicy<String, String> { @Override public CacheEntry<String, String> merge(CacheEntry<String, String> preferredEntry, List<CacheEntry<String, String>> otherEntries) { // Decide which entry resolves the conflict return the_solved_CacheEntry; } Procedure Deploy your merge policy implementation to Data Grid Server if you use remote caches.
Package your classes as a JAR file that includes a META-INF/services/org.infinispan.conflict.EntryMergePolicy file that contains the fully qualified class name of your merge policy. Add the JAR file to the server/lib directory. Tip Use the install command with the Data Grid Command Line Interface (CLI) to download the JAR to the server/lib directory. Open your Data Grid configuration for editing. Configure cache encoding with the encoding element or encoding() method as appropriate. For remote caches, if you use only object metadata for comparison when merging entries then you can use application/x-protostream as the media type. In this case Data Grid returns entries to the EntryMergePolicy as byte[] . If you require the object itself when merging conflicts then you should configure caches with the application/x-java-object media type. In this case you must deploy the relevant ProtoStream marshallers to Data Grid Server so it can perform byte[] to object transformations if clients use Protobuf encoding. Specify your custom merge policy with the merge-policy attribute or mergePolicy() method as part of the partition handling configuration. Save your changes. Custom merge policy configuration XML <distributed-cache name="mycache"> <partition-handling when-split="DENY_READ_WRITES" merge-policy="org.example.CustomMergePolicy"/> </distributed-cache> JSON { "distributed-cache": { "partition-handling" : { "when-split": "DENY_READ_WRITES", "merge-policy": "org.example.CustomMergePolicy" } } } YAML distributedCache: partitionHandling: whenSplit: DENY_READ_WRITES mergePolicy: org.example.CustomMergePolicy ConfigurationBuilder ConfigurationBuilder builder = new ConfigurationBuilder(); builder.clustering().cacheMode(CacheMode.DIST_SYNC) .partitionHandling() .whenSplit(PartitionHandling.DENY_READ_WRITES) .mergePolicy(new CustomMergePolicy()); Additional resources org.infinispan.conflict.EntryMergePolicy 7.7. Manually merging partitions in embedded caches Detect and resolve conflicting entries to manually merge embedded caches after network partitions occur. Procedure Retrieve the ConflictManager from the EmbeddedCacheManager to detect and resolve conflicting entries in a cache, as in the following example: EmbeddedCacheManager manager = new DefaultCacheManager("example-config.xml"); Cache<Integer, String> cache = manager.getCache("testCache"); ConflictManager<Integer, String> crm = ConflictManagerFactory.get(cache.getAdvancedCache()); // Get all versions of a key Map<Address, InternalCacheValue<String>> versions = crm.getAllVersions(1); // Process conflicts stream and perform some operation on the cache Stream<Map<Address, CacheEntry<Integer, String>>> conflicts = crm.getConflicts(); conflicts.forEach(map -> { CacheEntry<Integer, String> entry = map.values().iterator().next(); Object conflictKey = entry.getKey(); cache.remove(conflictKey); }); // Detect and then resolve conflicts using the configured EntryMergePolicy crm.resolveConflicts(); // Detect and then resolve conflicts using the passed EntryMergePolicy instance crm.resolveConflicts((preferredEntry, otherEntries) -> preferredEntry); Note Although the ConflictManager::getConflicts stream is processed per entry, the underlying spliterator lazily loads cache entries on a per segment basis.
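To tie together the REST calls referenced in this chapter, the following sketch first checks container health and then forces a cache back to AVAILABLE; the server address, port, cache name, and credentials are assumptions for this example, while the endpoints themselves are the ones documented above:
# Check overall container health, including per-cache status
curl -u admin:changeme http://localhost:11222/rest/v2/container/health
# Force a cache in DEGRADED mode back to AVAILABLE
curl -u admin:changeme -X POST \
  "http://localhost:11222/rest/v2/caches/mycache?action=set-availability&availability=AVAILABLE"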
[ "FD_ALL[2|3].timeout + FD_ALL[2|3].interval + VERIFY_SUSPECT[2].timeout + GMS.view_ack_collection_timeout", "GET /rest/v2/container/health", "POST /rest/v2/caches/<cacheName>?action=set-availability&availability=AVAILABLE", "AdvancedCache ac = cache.getAdvancedCache(); // Retrieve cache availability boolean available = ac.getAvailability() == AvailabilityMode.AVAILABLE; // Make the cache available if (!available) { ac.setAvailability(AvailabilityMode.AVAILABLE); }", "<distributed-cache> <partition-handling when-split=\"DENY_READ_WRITES\" merge-policy=\"PREFERRED_ALWAYS\"/> </distributed-cache>", "{ \"distributed-cache\": { \"partition-handling\" : { \"when-split\": \"DENY_READ_WRITES\", \"merge-policy\": \"PREFERRED_ALWAYS\" } } }", "distributedCache: partitionHandling: whenSplit: DENY_READ_WRITES mergePolicy: PREFERRED_ALWAYS", "ConfigurationBuilder builder = new ConfigurationBuilder(); builder.clustering().cacheMode(CacheMode.DIST_SYNC) .partitionHandling() .whenSplit(PartitionHandling.DENY_READ_WRITES) .mergePolicy(MergePolicy.PREFERRED_NON_NULL);", "public class CustomMergePolicy implements EntryMergePolicy<String, String> { @Override public CacheEntry<String, String> merge(CacheEntry<String, String> preferredEntry, List<CacheEntry<String, String>> otherEntries) { // Decide which entry resolves the conflict return the_solved_CacheEntry; }", "List implementations of EntryMergePolicy with the full qualified class name org.example.CustomMergePolicy", "<distributed-cache name=\"mycache\"> <partition-handling when-split=\"DENY_READ_WRITES\" merge-policy=\"org.example.CustomMergePolicy\"/> </distributed-cache>", "{ \"distributed-cache\": { \"partition-handling\" : { \"when-split\": \"DENY_READ_WRITES\", \"merge-policy\": \"org.example.CustomMergePolicy\" } } }", "distributedCache: partitionHandling: whenSplit: DENY_READ_WRITES mergePolicy: org.example.CustomMergePolicy", "ConfigurationBuilder builder = new ConfigurationBuilder(); builder.clustering().cacheMode(CacheMode.DIST_SYNC) .partitionHandling() .whenSplit(PartitionHandling.DENY_READ_WRITES) .mergePolicy(new CustomMergePolicy());", "EmbeddedCacheManager manager = new DefaultCacheManager(\"example-config.xml\"); Cache<Integer, String> cache = manager.getCache(\"testCache\"); ConflictManager<Integer, String> crm = ConflictManagerFactory.get(cache.getAdvancedCache()); // Get all versions of a key Map<Address, InternalCacheValue<String>> versions = crm.getAllVersions(1); // Process conflicts stream and perform some operation on the cache Stream<Map<Address, CacheEntry<Integer, String>>> conflicts = crm.getConflicts(); conflicts.forEach(map -> { CacheEntry<Integer, String> entry = map.values().iterator().next(); Object conflictKey = entry.getKey(); cache.remove(conflictKey); }); // Detect and then resolve conflicts using the configured EntryMergePolicy crm.resolveConflicts(); // Detect and then resolve conflicts using the passed EntryMergePolicy instance crm.resolveConflicts((preferredEntry, otherEntries) -> preferredEntry);" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/configuring_data_grid_caches/partition-handling
Chapter 1. Preparing to install with the Agent-based Installer
Chapter 1. Preparing to install with the Agent-based Installer 1.1. About the Agent-based Installer The Agent-based installation method provides the flexibility to boot your on-premises servers in any way that you choose. It combines the ease of use of the Assisted Installation service with the ability to run offline, including in air-gapped environments. Agent-based installation is a subcommand of the OpenShift Container Platform installer. It generates a bootable ISO image containing all of the information required to deploy an OpenShift Container Platform cluster, with an available release image. The configuration is in the same format as for the installer-provisioned infrastructure and user-provisioned infrastructure installation methods. The Agent-based Installer can also optionally generate or accept Zero Touch Provisioning (ZTP) custom resources. ZTP allows you to provision new edge sites with declarative configurations of bare-metal equipment. Table 1.1. Agent-based Installer supported architectures CPU architecture Connected installation Disconnected installation 64-bit x86 [✓] [✓] 64-bit ARM [✓] [✓] ppc64le [✓] [✓] s390x [✓] [✓] 1.2. Understanding Agent-based Installer As an OpenShift Container Platform user, you can leverage the advantages of the Assisted Installer hosted service in disconnected environments. The Agent-based installation comprises a bootable ISO that contains the Assisted discovery agent and the Assisted Service. Both are required to perform the cluster installation, but the latter runs on only one of the hosts. Note Currently, ISO boot support on IBM Z(R) ( s390x ) is available only for Red Hat Enterprise Linux (RHEL) KVM, which provides the flexibility to choose either PXE or ISO-based installation. For installations with z/VM and Logical Partition (LPAR), only PXE boot is supported. The openshift-install agent create image subcommand generates an ephemeral ISO based on the inputs that you provide. You can choose to provide inputs through the following manifests: Preferred: install-config.yaml agent-config.yaml Optional: ZTP manifests cluster-manifests/cluster-deployment.yaml cluster-manifests/agent-cluster-install.yaml cluster-manifests/pull-secret.yaml cluster-manifests/infraenv.yaml cluster-manifests/cluster-image-set.yaml cluster-manifests/nmstateconfig.yaml mirror/registries.conf mirror/ca-bundle.crt 1.2.1. Agent-based Installer workflow One of the control plane hosts runs the Assisted Service at the start of the boot process and eventually becomes the bootstrap host. This node is called the rendezvous host (node 0). The Assisted Service ensures that all the hosts meet the requirements and triggers an OpenShift Container Platform cluster deployment. All the nodes have the Red Hat Enterprise Linux CoreOS (RHCOS) image written to the disk. The non-bootstrap nodes reboot and initiate a cluster deployment. Once the nodes are rebooted, the rendezvous host reboots and joins the cluster. The bootstrapping is complete and the cluster is deployed. Figure 1.1. Node installation workflow You can install a disconnected OpenShift Container Platform cluster through the openshift-install agent create image subcommand for the following topologies: A single-node OpenShift Container Platform cluster (SNO) : A node that is both a master and worker. A three-node OpenShift Container Platform cluster : A compact cluster that has three master nodes that are also worker nodes. 
Highly available OpenShift Container Platform cluster (HA) : Three master nodes with any number of worker nodes. 1.2.2. Recommended resources for topologies Recommended cluster resources for the following topologies: Table 1.2. Recommended cluster resources Topology Number of control plane nodes Number of compute nodes vCPU Memory Storage Single-node cluster 1 0 8 vCPUs 16 GB of RAM 120 GB Compact cluster 3 0 or 1 8 vCPUs 16 GB of RAM 120 GB HA cluster 3 2 and above 8 vCPUs 16 GB of RAM 120 GB In the install-config.yaml , specify the platform on which to perform the installation. The following platforms are supported: baremetal vsphere external none Important For platform none : The none option requires the provision of DNS name resolution and load balancing infrastructure in your cluster. See Requirements for a cluster using the platform "none" option in the "Additional resources" section for more information. Review the information in the guidelines for deploying OpenShift Container Platform on non-tested platforms before you attempt to install an OpenShift Container Platform cluster in virtualized or cloud environments. Additional resources Requirements for a cluster using the platform "none" option Increase the network MTU Adding worker nodes to single-node OpenShift clusters 1.3. About FIPS compliance For many OpenShift Container Platform customers, regulatory readiness, or compliance, on some level is required before any systems can be put into production. That regulatory readiness can be imposed by national standards, industry standards or the organization's corporate governance framework. Federal Information Processing Standards (FIPS) compliance is one of the most critical components required in highly secure environments to ensure that only supported cryptographic technologies are allowed on nodes. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 1.4. Configuring FIPS through the Agent-based Installer During a cluster deployment, the Federal Information Processing Standards (FIPS) change is applied when the Red Hat Enterprise Linux CoreOS (RHCOS) machines are deployed in your cluster. For Red Hat Enterprise Linux (RHEL) machines, you must enable FIPS mode when you install the operating system on the machines that you plan to use as worker machines. 
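On the RHEL machines themselves, FIPS mode is commonly switched on with the fips-mode-setup tool; the following is a minimal sketch that assumes a RHEL 8 or RHEL 9 host:
sudo fips-mode-setup --enable
sudo reboot
# After the reboot, confirm that FIPS mode is active
fips-mode-setup --check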
You can enable FIPS mode through the preferred method of install-config.yaml and agent-config.yaml : You must set the value of the fips field to True in the install-config.yaml file: Sample install-config.yaml.file apiVersion: v1 baseDomain: test.example.com metadata: name: sno-cluster fips: True Optional: If you are using the GitOps ZTP manifests, you must set the value of fips as True in the agent-install.openshift.io/install-config-overrides field in the agent-cluster-install.yaml file: Sample agent-cluster-install.yaml file apiVersion: extensions.hive.openshift.io/v1beta1 kind: AgentClusterInstall metadata: annotations: agent-install.openshift.io/install-config-overrides: '{"fips": True}' name: sno-cluster namespace: sno-cluster-test Additional resources OpenShift Security Guide Book Support for FIPS cryptography 1.5. Host configuration You can make additional configurations for each host on the cluster in the agent-config.yaml file, such as network configurations and root device hints. Important For each host you configure, you must provide the MAC address of an interface on the host to specify which host you are configuring. 1.5.1. Host roles Each host in the cluster is assigned a role of either master or worker . You can define the role for each host in the agent-config.yaml file by using the role parameter. If you do not assign a role to the hosts, the roles will be assigned at random during installation. It is recommended to explicitly define roles for your hosts. The rendezvousIP must be assigned to a host with the master role. This can be done manually or by allowing the Agent-based Installer to assign the role. Important You do not need to explicitly define the master role for the rendezvous host, however you cannot create configurations that conflict with this assignment. For example, if you have 4 hosts with 3 of the hosts explicitly defined to have the master role, the last host that is automatically assigned the worker role during installation cannot be configured as the rendezvous host. Sample agent-config.yaml file apiVersion: v1beta1 kind: AgentConfig metadata: name: example-cluster rendezvousIP: 192.168.111.80 hosts: - hostname: master-1 role: master interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a5 - hostname: master-2 role: master interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a6 - hostname: master-3 role: master interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a7 - hostname: worker-1 role: worker interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a8 1.5.2. About root device hints The rootDeviceHints parameter enables the installer to provision the Red Hat Enterprise Linux CoreOS (RHCOS) image to a particular device. The installer examines the devices in the order it discovers them, and compares the discovered values with the hint values. The installer uses the first discovered device that matches the hint value. The configuration can combine multiple hints, but a device must match all hints for the installer to select it. Table 1.3. Subfields Subfield Description deviceName A string containing a Linux device name such as /dev/vda or /dev/disk/by-path/ . It is recommended to use the /dev/disk/by-path/<device_path> link to the storage location. The hint must match the actual value exactly. hctl A string containing a SCSI bus address like 0:0:0:0 . The hint must match the actual value exactly. model A string containing a vendor-specific device identifier. The hint can be a substring of the actual value.
vendor A string containing the name of the vendor or manufacturer of the device. The hint can be a sub-string of the actual value. serialNumber A string containing the device serial number. The hint must match the actual value exactly. minSizeGigabytes An integer representing the minimum size of the device in gigabytes. wwn A string containing the unique storage identifier. The hint must match the actual value exactly. If you use the udevadm command to retrieve the wwn value, and the command outputs a value for ID_WWN_WITH_EXTENSION , then you must use this value to specify the wwn subfield. rotational A boolean indicating whether the device should be a rotating disk (true) or not (false). Example usage - name: master-0 role: master rootDeviceHints: deviceName: "/dev/sda" 1.6. About networking The rendezvous IP must be known at the time of generating the agent ISO, so that during the initial boot all the hosts can check in to the assisted service. If the IP addresses are assigned using a Dynamic Host Configuration Protocol (DHCP) server, then the rendezvousIP field must be set to an IP address of one of the hosts that will become part of the deployed control plane. In an environment without a DHCP server, you can define IP addresses statically. In addition to static IP addresses, you can apply any network configuration that is in NMState format. This includes VLANs and NIC bonds. 1.6.1. DHCP Preferred method: install-config.yaml and agent-config.yaml You must specify the value for the rendezvousIP field. The networkConfig fields can be left blank: Sample agent-config.yaml.file apiVersion: v1alpha1 kind: AgentConfig metadata: name: sno-cluster rendezvousIP: 192.168.111.80 1 1 The IP address for the rendezvous host. 1.6.2. Static networking Preferred method: install-config.yaml and agent-config.yaml Sample agent-config.yaml.file cat > agent-config.yaml << EOF apiVersion: v1alpha1 kind: AgentConfig metadata: name: sno-cluster rendezvousIP: 192.168.111.80 1 hosts: - hostname: master-0 interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a5 2 networkConfig: interfaces: - name: eno1 type: ethernet state: up mac-address: 00:ef:44:21:e6:a5 ipv4: enabled: true address: - ip: 192.168.111.80 3 prefix-length: 23 4 dhcp: false dns-resolver: config: server: - 192.168.111.1 5 routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.111.1 6 next-hop-interface: eno1 table-id: 254 EOF 1 If a value is not specified for the rendezvousIP field, one address will be chosen from the static IP addresses specified in the networkConfig fields. 2 The MAC address of an interface on the host, used to determine which host to apply the configuration to. 3 The static IP address of the target bare metal host. 4 The static IP address's subnet prefix for the target bare metal host. 5 The DNS server for the target bare metal host. 6 Next hop address for the node traffic. This must be in the same subnet as the IP address set for the specified interface. Optional method: GitOps ZTP manifests The optional method of the GitOps ZTP custom resources comprises 6 custom resources; you can configure static IPs in the nmstateconfig.yaml file.
apiVersion: agent-install.openshift.io/v1beta1 kind: NMStateConfig metadata: name: master-0 namespace: openshift-machine-api labels: cluster0-nmstate-label-name: cluster0-nmstate-label-value spec: config: interfaces: - name: eth0 type: ethernet state: up mac-address: 52:54:01:aa:aa:a1 ipv4: enabled: true address: - ip: 192.168.122.2 1 prefix-length: 23 2 dhcp: false dns-resolver: config: server: - 192.168.122.1 3 routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.122.1 4 next-hop-interface: eth0 table-id: 254 interfaces: - name: eth0 macAddress: 52:54:01:aa:aa:a1 5 1 The static IP address of the target bare metal host. 2 The static IP address's subnet prefix for the target bare metal host. 3 The DNS server for the target bare metal host. 4 Next hop address for the node traffic. This must be in the same subnet as the IP address set for the specified interface. 5 The MAC address of an interface on the host, used to determine which host to apply the configuration to. The rendezvous IP is chosen from the static IP addresses specified in the config fields. 1.7. Requirements for a cluster using the platform "none" option This section describes the requirements for an Agent-based OpenShift Container Platform installation that is configured to use the platform none option. Important Review the information in the guidelines for deploying OpenShift Container Platform on non-tested platforms before you attempt to install an OpenShift Container Platform cluster in virtualized or cloud environments. 1.7.1. Platform "none" DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The control plane and compute machines Reverse DNS resolution is also required for the Kubernetes API, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. Note It is recommended to use a DHCP server to provide the hostnames to each cluster node. The following DNS records are required for an OpenShift Container Platform cluster using the platform none option and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 1.4. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes.
If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Control plane machines <master><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <worker><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. 1.7.1.1. Example DNS configuration for platform "none" clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform using the platform none option. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a platform "none" cluster The following example is a BIND zone file that shows sample A records for name resolution in a cluster using the platform none option. Example 1.1. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; master0.ocp4.example.com. IN A 192.168.1.97 4 master1.ocp4.example.com. IN A 192.168.1.98 5 master2.ocp4.example.com. IN A 192.168.1.99 6 ; worker0.ocp4.example.com. IN A 192.168.1.11 7 worker1.ocp4.example.com. IN A 192.168.1.7 8 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. 
In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 5 6 Provides name resolution for the control plane machines. 7 8 Provides name resolution for the compute machines. Example DNS PTR record configuration for a platform "none" cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a cluster using the platform none option. Example 1.2. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 97.1.168.192.in-addr.arpa. IN PTR master0.ocp4.example.com. 3 98.1.168.192.in-addr.arpa. IN PTR master1.ocp4.example.com. 4 99.1.168.192.in-addr.arpa. IN PTR master2.ocp4.example.com. 5 ; 11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. 6 7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com. 7 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 4 5 Provides reverse DNS resolution for the control plane machines. 6 7 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. 1.7.2. Platform "none" Load balancing requirements Before you install OpenShift Container Platform, you must provision the API and application Ingress load balancing infrastructure. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note These requirements do not apply to single-node OpenShift clusters using the platform none option. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the API routes. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configure the following ports on both the front and back of the load balancers: Table 1.5. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. 
Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the ingress routes. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 1.6. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 1.7.2.1. Example load balancer configuration for platform "none" clusters This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for clusters using the platform none option. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 1.3. 
Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp server master0 master0.ocp4.example.com:6443 check inter 1s server master1 master1.ocp4.example.com:6443 check inter 1s server master2 master2.ocp4.example.com:6443 check inter 1s listen machine-config-server-22623 2 bind *:22623 mode tcp server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 3 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 4 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 Port 22623 handles the machine config server traffic and points to the control plane machines. 3 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 4 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 1.8. Example: Bonds and VLAN interface node network configuration The following agent-config.yaml file is an example of a manifest for bond and VLAN interfaces. apiVersion: v1alpha1 kind: AgentConfig rendezvousIP: 10.10.10.14 hosts: - hostname: master0 role: master interfaces: - name: enp0s4 macAddress: 00:21:50:90:c0:10 - name: enp0s5 macAddress: 00:21:50:90:c0:20 networkConfig: interfaces: - name: bond0.300 1 type: vlan 2 state: up vlan: base-iface: bond0 id: 300 ipv4: enabled: true address: - ip: 10.10.10.14 prefix-length: 24 dhcp: false - name: bond0 3 type: bond 4 state: up mac-address: 00:21:50:90:c0:10 5 ipv4: enabled: false ipv6: enabled: false link-aggregation: mode: active-backup 6 options: miimon: "150" 7 port: - enp0s4 - enp0s5 dns-resolver: 8 config: server: - 10.10.10.11 - 10.10.10.12 routes: config: - destination: 0.0.0.0/0 next-hop-address: 10.10.10.10 9 next-hop-interface: bond0.300 10 table-id: 254 1 3 Name of the interface. 2 The type of interface. This example creates a VLAN. 4 The type of interface. This example creates a bond. 5 The MAC address of the interface. 6 The mode attribute specifies the bonding mode. 7 Specifies the MII link monitoring frequency in milliseconds. This example inspects the bond link every 150 milliseconds.
8 Optional: Specifies the search and server settings for the DNS server. 9 Next hop address for the node traffic. This must be in the same subnet as the IP address set for the specified interface. 10 Next hop interface for the node traffic. 1.9. Example: Bonds and SR-IOV dual-nic node network configuration The following agent-config.yaml file is an example of a manifest for a dual-port NIC with a bond and SR-IOV interfaces: apiVersion: v1alpha1 kind: AgentConfig rendezvousIP: 10.10.10.14 hosts: - hostname: worker-1 interfaces: - name: eno1 macAddress: 0c:42:a1:55:f3:06 - name: eno2 macAddress: 0c:42:a1:55:f3:07 networkConfig: 1 interfaces: 2 - name: eno1 3 type: ethernet 4 state: up mac-address: 0c:42:a1:55:f3:06 ipv4: enabled: true dhcp: false 5 ethernet: sr-iov: total-vfs: 2 6 ipv6: enabled: false - name: sriov:eno1:0 type: ethernet state: up 7 ipv4: enabled: false 8 ipv6: enabled: false dhcp: false - name: sriov:eno1:1 type: ethernet state: down - name: eno2 type: ethernet state: up mac-address: 0c:42:a1:55:f3:07 ipv4: enabled: true ethernet: sr-iov: total-vfs: 2 ipv6: enabled: false - name: sriov:eno2:0 type: ethernet state: up ipv4: enabled: false ipv6: enabled: false - name: sriov:eno2:1 type: ethernet state: down - name: bond0 type: bond state: up min-tx-rate: 100 9 max-tx-rate: 200 10 link-aggregation: mode: active-backup 11 options: primary: sriov:eno1:0 12 port: - sriov:eno1:0 - sriov:eno2:0 ipv4: address: - ip: 10.19.16.57 13 prefix-length: 23 dhcp: false enabled: true ipv6: enabled: false dns-resolver: config: server: - 10.11.5.160 - 10.2.70.215 routes: config: - destination: 0.0.0.0/0 next-hop-address: 10.19.17.254 next-hop-interface: bond0 14 table-id: 254 1 The networkConfig field contains information about the network configuration of the host, with subfields including interfaces , dns-resolver , and routes . 2 The interfaces field is an array of network interfaces defined for the host. 3 The name of the interface. 4 The type of interface. This example creates an ethernet interface. 5 Set this to false to disable DHCP for the physical function (PF) if it is not strictly required. 6 Set this to the number of SR-IOV virtual functions (VFs) to instantiate. 7 Set this to up . 8 Set this to false to disable IPv4 addressing for the VF attached to the bond. 9 Sets a minimum transmission rate, in Mbps, for the VF. This sample value sets a rate of 100 Mbps. This value must be less than or equal to the maximum transmission rate. Intel NICs do not support the min-tx-rate parameter. For more information, see BZ#1772847 . 10 Sets a maximum transmission rate, in Mbps, for the VF. This sample value sets a rate of 200 Mbps. 11 Sets the desired bond mode. 12 Sets the preferred port of the bonding interface. The primary device is the first of the bonding interfaces to be used and is not abandoned unless it fails. This setting is particularly useful when one NIC in the bonding interface is faster and, therefore, able to handle a bigger load. This setting is only valid when the bonding interface is in active-backup mode (mode 1) or balance-tlb (mode 5). 13 Sets a static IP address for the bond interface. This is the node IP address. 14 Sets bond0 as the gateway for the default route. Additional resources Configuring network bonding 1.10. Sample install-config.yaml file for bare metal You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters.
apiVersion: v1 baseDomain: example.com 1 compute: 2 - name: worker replicas: 0 3 architecture: amd64 controlPlane: 4 name: master replicas: 1 5 architecture: amd64 metadata: name: sno-cluster 6 networking: clusterNetwork: - cidr: 10.128.0.0/14 7 hostPrefix: 23 8 networkType: OVNKubernetes 9 serviceNetwork: 10 - 172.30.0.0/16 platform: none: {} 11 fips: false 12 pullSecret: '{"auths": ...}' 13 sshKey: 'ssh-ed25519 AAAA...' 14 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 This parameter controls the number of compute machines that the Agent-based installation waits to discover before triggering the installation process. It is the number of compute machines that must be booted with the generated ISO. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 5 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 6 The cluster name that you specified in your DNS records. 7 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note The Class E CIDR range is reserved for future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 8 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 9 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 10 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 11 You must set the platform to none for a single-node cluster. You can set the platform to vsphere , baremetal , or none for multi-node clusters. Note If you set the platform to vsphere or baremetal , you can configure IP address endpoints for cluster nodes in three ways: IPv4 IPv6 IPv4 and IPv6 in parallel (dual-stack) Example of dual-stack networking networking: clusterNetwork: - cidr: 172.21.0.0/16 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 machineNetwork: - cidr: 192.168.11.0/16 - cidr: 2001:DB8::/32 serviceNetwork: - 172.22.0.0/16 - fd03::/112 networkType: OVNKubernetes platform: baremetal: apiVIPs: - 192.168.11.3 - 2001:DB8::4 ingressVIPs: - 192.168.11.4 - 2001:DB8::5 12 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled.
If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 13 This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 14 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 1.11. Validation checks before agent ISO creation The Agent-based Installer performs validation checks on user-defined YAML files before the ISO is created. Once the validations are successful, the agent ISO is created (see the command-line sketch at the end of this document). install-config.yaml baremetal , vsphere , and none platforms are supported. The networkType parameter must be OVNKubernetes for the none platform. apiVIPs and ingressVIPs parameters must be set for bare metal and vSphere platforms. Some host-specific fields in the bare metal platform configuration that have equivalents in the agent-config.yaml file are ignored. A warning message is logged if these fields are set. agent-config.yaml Each interface must have a defined MAC address. Additionally, all interfaces must have a different MAC address. At least one interface must be defined for each host. World Wide Name (WWN) vendor extensions are not supported in root device hints. The role parameter in the host object must have a value of either master or worker . 1.11.1. ZTP manifests agent-cluster-install.yaml For IPv6, the only supported value for the networkType parameter is OVNKubernetes . The OpenshiftSDN value can be used only for IPv4. cluster-image-set.yaml The ReleaseImage parameter must match the release defined in the installer. 1.12. Next steps Installing a cluster Installing a cluster with customizations
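As referenced in section 1.11, the following is a minimal command-line sketch of the validation and ISO-creation flow described above. It assumes that the openshift-install binary is available on your PATH; the working directory name ./abi-install is an illustrative assumption, not a required value:
# Place the two configuration files in a working directory.
# The installer consumes them when it generates the agent ISO.
mkdir -p ./abi-install
cp install-config.yaml agent-config.yaml ./abi-install/
# Generate the agent ISO. The validation checks described in
# section 1.11 run first; if any check fails, the ISO is not
# created and the errors are reported on the console.
openshift-install agent create image --dir ./abi-install
If all validations pass, the installer writes the agent ISO (for example, agent.x86_64.iso on x86_64 hosts) to the working directory.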
[ "apiVersion: v1 baseDomain: test.example.com metadata: name: sno-cluster fips: True", "apiVersion: extensions.hive.openshift.io/v1beta1 kind: AgentClusterInstall metadata: annotations: agent-install.openshift.io/install-config-overrides: '{\"fips\": True}' name: sno-cluster namespace: sno-cluster-test", "apiVersion: v1beta1 kind: AgentConfig metadata: name: example-cluster rendezvousIP: 192.168.111.80 hosts: - hostname: master-1 role: master interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a5 - hostname: master-2 role: master interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a6 - hostname: master-3 role: master interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a7 - hostname: worker-1 role: worker interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a8", "- name: master-0 role: master rootDeviceHints: deviceName: \"/dev/sda\"", "apiVersion: v1alpha1 kind: AgentConfig metadata: name: sno-cluster rendezvousIP: 192.168.111.80 1", "cat > agent-config.yaml << EOF apiVersion: v1alpha1 kind: AgentConfig metadata: name: sno-cluster rendezvousIP: 192.168.111.80 1 hosts: - hostname: master-0 interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a5 2 networkConfig: interfaces: - name: eno1 type: ethernet state: up mac-address: 00:ef:44:21:e6:a5 ipv4: enabled: true address: - ip: 192.168.111.80 3 prefix-length: 23 4 dhcp: false dns-resolver: config: server: - 192.168.111.1 5 routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.111.1 6 next-hop-interface: eno1 table-id: 254 EOF", "apiVersion: agent-install.openshift.io/v1beta1 kind: NMStateConfig metadata: name: master-0 namespace: openshift-machine-api labels: cluster0-nmstate-label-name: cluster0-nmstate-label-value spec: config: interfaces: - name: eth0 type: ethernet state: up mac-address: 52:54:01:aa:aa:a1 ipv4: enabled: true address: - ip: 192.168.122.2 1 prefix-length: 23 2 dhcp: false dns-resolver: config: server: - 192.168.122.1 3 routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.122.1 4 next-hop-interface: eth0 table-id: 254 interfaces: - name: eth0 macAddress: 52:54:01:aa:aa:a1 5", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; master0.ocp4.example.com. IN A 192.168.1.97 4 master1.ocp4.example.com. IN A 192.168.1.98 5 master2.ocp4.example.com. IN A 192.168.1.99 6 ; worker0.ocp4.example.com. IN A 192.168.1.11 7 worker1.ocp4.example.com. IN A 192.168.1.7 8 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 97.1.168.192.in-addr.arpa. IN PTR master0.ocp4.example.com. 3 98.1.168.192.in-addr.arpa. IN PTR master1.ocp4.example.com. 4 99.1.168.192.in-addr.arpa. IN PTR master2.ocp4.example.com. 5 ; 11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. 6 7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com. 
7 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp server master0 master0.ocp4.example.com:6443 check inter 1s server master1 master1.ocp4.example.com:6443 check inter 1s server master2 master2.ocp4.example.com:6443 check inter 1s listen machine-config-server-22623 2 bind *:22623 mode tcp server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 3 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 4 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s", "apiVersion: v1alpha1 kind: AgentConfig rendezvousIP: 10.10.10.14 hosts: - hostname: master0 role: master interfaces: - name: enp0s4 macAddress: 00:21:50:90:c0:10 - name: enp0s5 macAddress: 00:21:50:90:c0:20 networkConfig: interfaces: - name: bond0.300 1 type: vlan 2 state: up vlan: base-iface: bond0 id: 300 ipv4: enabled: true address: - ip: 10.10.10.14 prefix-length: 24 dhcp: false - name: bond0 3 type: bond 4 state: up mac-address: 00:21:50:90:c0:10 5 ipv4: enabled: false ipv6: enabled: false link-aggregation: mode: active-backup 6 options: miimon: \"150\" 7 port: - enp0s4 - enp0s5 dns-resolver: 8 config: server: - 10.10.10.11 - 10.10.10.12 routes: config: - destination: 0.0.0.0/0 next-hop-address: 10.10.10.10 9 next-hop-interface: bond0.300 10 table-id: 254", "apiVersion: v1alpha1 kind: AgentConfig rendezvousIP: 10.10.10.14 hosts: - hostname: worker-1 interfaces: - name: eno1 macAddress: 0c:42:a1:55:f3:06 - name: eno2 macAddress: 0c:42:a1:55:f3:07 networkConfig: 1 interfaces: 2 - name: eno1 3 type: ethernet 4 state: up mac-address: 0c:42:a1:55:f3:06 ipv4: enabled: true dhcp: false 5 ethernet: sr-iov: total-vfs: 2 6 ipv6: enabled: false - name: sriov:eno1:0 type: ethernet state: up 7 ipv4: enabled: false 8 ipv6: enabled: false dhcp: false - name: sriov:eno1:1 type: ethernet state: down - name: eno2 type: ethernet state: up mac-address: 0c:42:a1:55:f3:07 ipv4: enabled: true ethernet: sr-iov: total-vfs: 2 ipv6: enabled: false - name: sriov:eno2:0 type: ethernet state: up ipv4: enabled: false ipv6: enabled: false - name: sriov:eno2:1 type: ethernet state: down - name: bond0 type: bond state: up min-tx-rate: 100 9 max-tx-rate: 200 10 link-aggregation: mode: active-backup 11 options: primary: sriov:eno1:0 12 port: - sriov:eno1:0 - sriov:eno2:0 ipv4: address: - ip: 10.19.16.57 13 prefix-length: 23 dhcp: false enabled: true ipv6: enabled: false dns-resolver: config: server: - 10.11.5.160 - 10.2.70.215 routes: config: - destination: 0.0.0.0/0 next-hop-address: 10.19.17.254 next-hop-interface: bond0 14 table-id: 254", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - name: worker replicas: 0 3 architecture: amd64 controlPlane: 4 name: master replicas: 1 5 architecture: amd64 metadata: name: sno-cluster 6 networking: clusterNetwork: - cidr: 10.128.0.0/14 7 hostPrefix: 23 8 networkType: OVNKubernetes 9 serviceNetwork: 10 - 172.30.0.0/16 
platform: none: {} 11 fips: false 12 pullSecret: '{\"auths\": ...}' 13 sshKey: 'ssh-ed25519 AAAA...' 14", "networking: clusterNetwork: - cidr: 172.21.0.0/16 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 machineNetwork: - cidr: 192.168.11.0/16 - cidr: 2001:DB8::/32 serviceNetwork: - 172.22.0.0/16 - fd03::/112 networkType: OVNKubernetes platform: baremetal: apiVIPs: - 192.168.11.3 - 2001:DB8::4 ingressVIPs: - 192.168.11.4 - 2001:DB8::5" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/installing_an_on-premise_cluster_with_the_agent-based_installer/preparing-to-install-with-agent-based-installer
About OpenShift Pipelines
About OpenShift Pipelines Red Hat OpenShift Pipelines 1.15 Introduction to OpenShift Pipelines Red Hat OpenShift Documentation Team
[ "apiVersion: tekton.dev/v1 kind: Task metadata: name: test-task spec: steps: - name: fetch-repository stepRef: resolver: git params: - name: url value: https://github.com/tektoncd/catalog.git - name: revision value: main - name: pathInRepo value: stepaction/git-clone/0.1/git-clone params: - name: url value: USD(params.repo-url) - name: revision value: USD(params.tag-name) - name: output-path value: USD(workspaces.output.path)", "apiVersion: tekton.dev/v1 kind: Task metadata: generateName: something- spec: params: - name: myWorkspaceSecret steps: - image: registry.redhat.io/ubi/ubi8-minimal:latest script: | echo \"Hello World\" workspaces: - name: myworkspace secret: secretName: USD(params.myWorkspaceSecret)", "apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: pipeline: options: configMaps: config-defaults: data: default-imagepullbackoff-timeout: \"5m\"", "apiVersion: triggers.tekton.dev/v1beta1 kind: TriggerTemplate metadata: name: create-configmap-template spec: params: - name: action resourcetemplates: - apiVersion: v1 kind: ConfigMap metadata: generateName: sample- data: field: \"Action is : USD(tt.params.action)\"", "apiVersion: triggers.tekton.dev/v1beta1 kind: EventListener metadata: name: simple-eventlistener spec: serviceAccountName: simple-tekton-robot triggers: - name: simple-trigger bindings: - ref: simple-binding template: ref: simple-template resources: kubernetesResource: serviceType: NodePort servicePort: 38080", "apiVersion: triggers.tekton.dev/v1beta1 kind: EventListener metadata: name: listener-loadbalancerclass spec: serviceAccountName: tekton-triggers-example-sa triggers: - name: example-trig bindings: - ref: pipeline-binding - ref: message-binding template: ref: pipeline-template resources: kubernetesResource: serviceType: LoadBalancer serviceLoadBalancerClass: private", "/test pipelinerun1 revision=main param1=\"value1\" param2=\"value \\\"value2\\\" with quotes\"", "/test checker target_branch=backport-branch", "apiVersion: operator.tekton.dev/v1 kind: TektonResult metadata: name: result spec: options: deployments: tekton-results-watcher: spec: template: spec: containers: - name: watcher args: - \"--updateLogTimeout=60s\"", "oc get tektoninstallersets", "oc delete tektoninstallerset <installerset_name>", "apiVersion: tekton.dev/v1 kind: PipelineRun metadata: name: pr-v1 spec: pipelineSpec: tasks: - name: noop-task taskSpec: steps: - name: noop-task image: registry.access.redhat.com/ubi9/ubi-micro script: | exit 0 taskRunTemplate: podTemplate: securityContext: runAsNonRoot: true runAsUser: 1001", "apiVersion: tekton.dev/v1beta1 kind: PipelineRun metadata: name: pr-v1beta1 spec: pipelineSpec: tasks: - name: noop-task taskSpec: steps: - name: noop-task image: registry.access.redhat.com/ubi9/ubi-micro script: | exit 0 podTemplate: securityContext: runAsNonRoot: true runAsUser: 1001", "apiVersion: tekton.dev/v1 kind: TaskRun metadata: name: remote-task-reference spec: taskRef: resolver: http params: - name: url value: https://raw.githubusercontent.com/tektoncd-catalog/git-clone/main/task/git-clone/git-clone.yaml", "apiVersion: tekton.dev/v1beta1 kind: PipelineRun metadata: name: http-demo spec: pipelineRef: resolver: http params: - name: url value: https://raw.githubusercontent.com/tektoncd/catalog/main/pipeline/build-push-gke-deploy/0.1/build-push-gke-deploy.yaml", "apiVersion: tekton.dev/v1 kind: Pipeline metadata: name: pipeline-param-enum spec: params: - name: message enum: [\"v1\", \"v2\"] default: \"v1\"", "apiVersion: 
tekton.dev/v1beta1 kind: TaskRun metadata: name: git-api-demo-tr spec: taskRef: resolver: git params: - name: org value: tektoncd - name: repo value: catalog - name: revision value: main - name: pathInRepo value: task/git-clone/0.6/git-clone.yaml # create the my-secret-token secret in the namespace where the # pipelinerun is created. The secret must contain a GitHub personal access # token in the token key of the secret. - name: token value: my-secret-token - name: tokenKey value: token - name: scmType value: github - name: serverURL value: https://ghe.mycompany.com", "\".translate(\"[^a-z0-9]+\", \"ABC\")", "This is USDan Invalid5String", "ABChisABCisABCanABCnvalid5ABCtring", "\"data_type==TASK_RUN && (data.spec.pipelineSpec.tasks[0].name=='hello'||data.metadata.name=='hello')\"", "apiVersion: tekton.dev/v1 kind: Task metadata: name: uid-task spec: results: - name: uid steps: - name: uid image: alpine command: [\"/bin/sh\", \"-c\"] args: - echo \"1001\" | tee USD(results.uid.path) --- apiVersion: tekton.dev/v1 kind: PipelineRun metadata: name: uid-pipeline-run spec: pipelineSpec: tasks: - name: add-uid taskRef: name: uid-task - name: show-uid taskSpec: steps: - name: show-uid image: alpine command: [\"/bin/sh\", \"-c\"] args: - echo USD(tasks.add-uid.results.uid)", "apiVersion: tekton.dev/v1beta1 kind: PipelineRun spec: params: - name: source_url value: \"{{ source_url }}\" pipelineSpec: params: - name: source_url", "oc get tektoninstallersets.operator.tekton.dev | awk '/pipeline-main-static/ {print USD1}' | xargs oc delete tektoninstallersets", "oc patch tektonconfig config --type=\"merge\" -p '{\"spec\": {\"platforms\": {\"openshift\":{\"pipelinesAsCode\": {\"enable\": false}}}}}'", "apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: pipeline: enable-bundles-resolver: true enable-cluster-resolver: true enable-git-resolver: true enable-hub-resolver: true", "apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: pipeline: bundles-resolver-config: default-service-account: pipelines cluster-resolver-config: default-namespace: test git-resolver-config: server-url: localhost.com hub-resolver-config: default-tekton-hub-catalog: tekton", "annotations: pipelinesascode.tekton.dev/on-cel-expression: | event == \"pull_request\" && \"docs/*.md\".pathChanged()", "yaml kind: PipelineRun spec: timeouts: pipeline: \"0\" # No timeout tasks: \"0h3m0s\"", "- name: IMAGE_NAME value: 'image-registry.openshift-image-registry.svc:5000/<test_namespace>/<test_pipelinerun>'", "- name: IMAGE_NAME value: 'image-registry.openshift-image-registry.svc:5000/{{ target_namespace }}/USD(context.pipelineRun.name)'", "kind: Task apiVersion: tekton.dev/v1beta1 metadata: name: write-array annotations: description: | A simple task that writes array spec: results: - name: array-results type: array description: The array results", "echo -n \"[\\\"hello\\\",\\\"world\\\"]\" | tee USD(results.array-results.path)", "apiVersion: v1 kind: Secret metadata: name: tekton-hub-db labels: app: tekton-hub-db type: Opaque stringData: POSTGRES_HOST: <hostname> POSTGRES_DB: <database_name> POSTGRES_USER: <username> POSTGRES_PASSWORD: <password> POSTGRES_PORT: <listening_port_number>", "annotations: pipelinesascode.tekton.dev/on-cel-expression: | event == \"pull_request\" && target_branch == \"main\" && source_branch == \"wip\"", "apiVersion: v1 kind: ConfigMap metadata: name: config-observability namespace: tekton-pipelines labels: app.kubernetes.io/instance: default 
app.kubernetes.io/part-of: tekton-pipelines data: _example: | metrics.taskrun.level: \"task\" metrics.taskrun.duration-type: \"histogram\" metrics.pipelinerun.level: \"pipeline\" metrics.pipelinerun.duration-type: \"histogram\"", "oc get route -n openshift-pipelines pipelines-as-code-controller --template='https://{{ .spec.host }}'", "error updating rolebinding openshift-operators-prometheus-k8s-read-binding: RoleBinding.rbac.authorization.k8s.io \"openshift-operators-prometheus-k8s-read-binding\" is invalid: roleRef: Invalid value: rbac.RoleRef{APIGroup:\"rbac.authorization.k8s.io\", Kind:\"Role\", Name:\"openshift-operator-read\"}: cannot change roleRef", "Error: error writing \"0 0 4294967295\\n\" to /proc/22/uid_map: write /proc/22/uid_map: operation not permitted time=\"2022-03-04T09:47:57Z\" level=error msg=\"error writing \\\"0 0 4294967295\\\\n\\\" to /proc/22/uid_map: write /proc/22/uid_map: operation not permitted\" time=\"2022-03-04T09:47:57Z\" level=error msg=\"(unable to determine exit status)\"", "securityContext: capabilities: add: [\"SETFCAP\"]", "oc get tektoninstallerset NAME READY REASON addon-clustertasks-nx5xz False Error addon-communityclustertasks-cfb2p True addon-consolecli-ftrb8 True addon-openshift-67dj2 True addon-pac-cf7pz True addon-pipelines-fvllm True addon-triggers-b2wtt True addon-versioned-clustertasks-1-8-hqhnw False Error pipeline-w75ww True postpipeline-lrs22 True prepipeline-ldlhw True rhosp-rbac-4dmgb True trigger-hfg64 True validating-mutating-webhoook-28rf7 True", "oc get tektonconfig config NAME VERSION READY REASON config 1.8.1 True", "tkn pipeline export test_pipeline -n openshift-pipelines", "tkn pipelinerun export test_pipeline_run -n openshift-pipelines", "spec: profile: all targetNamespace: openshift-pipelines addon: params: - name: clusterTasks value: \"true\" - name: pipelineTemplates value: \"true\" - name: communityClusterTasks value: \"false\"", "hub: params: - name: enable-devconsole-integration value: \"true\"", "STEP 7: RUN /usr/libexec/s2i/assemble /bin/sh: /usr/libexec/s2i/assemble: No such file or directory subprocess exited with status 127 subprocess exited with status 127 error building at STEP \"RUN /usr/libexec/s2i/assemble\": exit status 127 time=\"2021-11-04T13:05:26Z\" level=error msg=\"exit status 127\"", "error updating rolebinding openshift-operators-prometheus-k8s-read-binding: RoleBinding.rbac.authorization.k8s.io \"openshift-operators-prometheus-k8s-read-binding\" is invalid: roleRef: Invalid value: rbac.RoleRef{APIGroup:\"rbac.authorization.k8s.io\", Kind:\"Role\", Name:\"openshift-operator-read\"}: cannot change roleRef", "Error: error writing \"0 0 4294967295\\n\" to /proc/22/uid_map: write /proc/22/uid_map: operation not permitted time=\"2022-03-04T09:47:57Z\" level=error msg=\"error writing \\\"0 0 4294967295\\\\n\\\" to /proc/22/uid_map: write /proc/22/uid_map: operation not permitted\" time=\"2022-03-04T09:47:57Z\" level=error msg=\"(unable to determine exit status)\"", "securityContext: capabilities: add: [\"SETFCAP\"]", "apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: pipeline: disable-working-directory-overwrite: false disable-home-env-overwrite: false", "STEP 7: RUN /usr/libexec/s2i/assemble /bin/sh: /usr/libexec/s2i/assemble: No such file or directory subprocess exited with status 127 subprocess exited with status 127 error building at STEP \"RUN /usr/libexec/s2i/assemble\": exit status 127 time=\"2021-11-04T13:05:26Z\" level=error msg=\"exit status 127\"", "Error 
from server (InternalError): Internal error occurred: failed calling webhook \"validation.webhook.pipeline.tekton.dev\": Post \"https://tekton-pipelines-webhook.openshift-pipelines.svc:443/resource-validation?timeout=10s\": service \"tekton-pipelines-webhook\" not found.", "oc get route -n <namespace>", "oc edit route -n <namespace> <el-route_name>", "spec: host: el-event-listener-q8c3w5-test-upgrade1.apps.ve49aws.aws.ospqa.com port: targetPort: 8000 to: kind: Service name: el-event-listener-q8c3w5 weight: 100 wildcardPolicy: None", "spec: host: el-event-listener-q8c3w5-test-upgrade1.apps.ve49aws.aws.ospqa.com port: targetPort: http-listener to: kind: Service name: el-event-listener-q8c3w5 weight: 100 wildcardPolicy: None", "pruner: resources: - pipelinerun - taskrun schedule: \"*/5 * * * *\" # cron schedule keep: 2 # delete all keeping n", "apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: profile: all targetNamespace: openshift-pipelines addon: params: - name: clusterTasks value: \"true\" - name: pipelineTemplates value: \"true\"", "apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: profile: all targetNamespace: openshift-pipelines pipeline: params: - name: enableMetrics value: \"true\"", "tkn pipeline start build-and-deploy -w name=shared-workspace,volumeClaimTemplateFile=https://raw.githubusercontent.com/openshift/pipelines-tutorial/pipelines-1.15/01_pipeline/03_persistent_volume_claim.yaml -p deployment-name=pipelines-vote-api -p git-url=https://github.com/openshift/pipelines-vote-api.git -p IMAGE=image-registry.openshift-image-registry.svc:5000/pipelines-tutorial/pipelines-vote-api --use-param-defaults", "- name: deploy params: - name: SCRIPT value: oc rollout status <deployment-name> runAfter: - build taskRef: kind: ClusterTask name: openshift-client", "steps: - name: git env: - name: HOME value: /root image: USD(params.BASE_IMAGE) workingDir: USD(workspaces.source.path)", "fsGroup: type: MustRunAs", "params: - name: github_json value: USD(body)", "annotations: triggers.tekton.dev/old-escape-quotes: \"true\"", "oc patch el/<eventlistener_name> -p '{\"metadata\":{\"finalizers\":[\"foregroundDeletion\"]}}' --type=merge", "oc patch el/github-listener-interceptor -p '{\"metadata\":{\"finalizers\":[\"foregroundDeletion\"]}}' --type=merge", "oc patch crd/eventlisteners.triggers.tekton.dev -p '{\"metadata\":{\"finalizers\":[]}}' --type=merge", "Error executing command: fork/exec /bin/bash: exec format error", "skopeo inspect --raw <image_name>| jq '.manifests[] | select(.platform.architecture == \"<architecture>\") | .digest'", "useradd: /etc/passwd.8: lock file already used useradd: cannot lock /etc/passwd; try again later.", "oc login -u <login> -p <password> https://openshift.example.com:6443", "oc edit clustertask buildah", "command: ['buildah', 'bud', '--format=USD(params.FORMAT)', '--tls-verify=USD(params.TLSVERIFY)', '--layers', '-f', 'USD(params.DOCKERFILE)', '-t', 'USD(resources.outputs.image.url)', 'USD(params.CONTEXT)']", "command: ['buildah', '--storage-driver=overlay', 'bud', '--format=USD(params.FORMAT)', '--tls-verify=USD(params.TLSVERIFY)', '--no-cache', '-f', 'USD(params.DOCKERFILE)', '-t', 'USD(params.IMAGE)', 'USD(params.CONTEXT)']", "apiVersion: tekton.dev/v1 1 kind: Task 2 metadata: name: apply-manifests 3 spec: 4 workspaces: - name: source params: - name: manifest_dir description: The directory in source that contains yaml manifests type: string default: \"k8s\" steps: - name: apply image: 
image-registry.openshift-image-registry.svc:5000/openshift/cli:latest workingDir: /workspace/source command: [\"/bin/bash\", \"-c\"] args: - |- echo Applying manifests in USD(params.manifest_dir) directory oc apply -f USD(params.manifest_dir) echo -----------------------------------", "spec: pipeline: disable-working-directory-overwrite: false disable-home-env-overwrite: false", "apiVersion: tekton.dev/v1 kind: PipelineRun 1 metadata: generateName: guarded-pr- spec: taskRunTemplate: serviceAccountName: pipeline pipelineSpec: params: - name: path type: string description: The path of the file to be created workspaces: - name: source description: | This workspace is shared among all the pipeline tasks to read/write common resources tasks: - name: create-file 2 when: - input: \"USD(params.path)\" operator: in values: [\"README.md\"] workspaces: - name: source workspace: source taskSpec: workspaces: - name: source description: The workspace to create the readme file in steps: - name: write-new-stuff image: ubuntu script: 'touch USD(workspaces.source.path)/README.md' - name: check-file params: - name: path value: \"USD(params.path)\" workspaces: - name: source workspace: source runAfter: - create-file taskSpec: params: - name: path workspaces: - name: source description: The workspace to check for the file results: - name: exists description: indicates whether the file exists or is missing steps: - name: check-file image: alpine script: | if test -f USD(workspaces.source.path)/USD(params.path); then printf yes | tee /tekton/results/exists else printf no | tee /tekton/results/exists fi - name: echo-file-exists when: 3 - input: \"USD(tasks.check-file.results.exists)\" operator: in values: [\"yes\"] taskSpec: steps: - name: echo image: ubuntu script: 'echo file exists' - name: task-should-be-skipped-1 when: 4 - input: \"USD(params.path)\" operator: notin values: [\"README.md\"] taskSpec: steps: - name: echo image: ubuntu script: exit 1 finally: - name: finally-task-should-be-executed when: 5 - input: \"USD(tasks.echo-file-exists.status)\" operator: in values: [\"Succeeded\"] - input: \"USD(tasks.status)\" operator: in values: [\"Succeeded\"] - input: \"USD(tasks.check-file.results.exists)\" operator: in values: [\"yes\"] - input: \"USD(params.path)\" operator: in values: [\"README.md\"] taskSpec: steps: - name: echo image: ubuntu script: 'echo finally done' params: - name: path value: README.md workspaces: - name: source volumeClaimTemplate: spec: accessModes: - ReadWriteOnce resources: requests: storage: 16Mi", "apiVersion: tekton.dev/v1 kind: Pipeline metadata: name: clone-cleanup-workspace 1 spec: workspaces: - name: git-source 2 tasks: - name: clone-app-repo 3 taskRef: name: git-clone-from-catalog params: - name: url value: https://github.com/tektoncd/community.git - name: subdirectory value: application workspaces: - name: output workspace: git-source finally: - name: cleanup 4 taskRef: 5 name: cleanup-workspace workspaces: 6 - name: source workspace: git-source - name: check-git-commit params: 7 - name: commit value: USD(tasks.clone-app-repo.results.commit) taskSpec: 8 params: - name: commit steps: - name: check-commit-initialized image: alpine script: | if [[ ! 
USD(params.commit) ]]; then exit 1 fi", "apiVersion: tekton.dev/v1 1 kind: TaskRun 2 metadata: name: apply-manifests-taskrun 3 spec: 4 taskRunTemplate: serviceAccountName: pipeline taskRef: 5 kind: Task name: apply-manifests workspaces: 6 - name: source persistentVolumeClaim: claimName: source-pvc", "apiVersion: tekton.dev/v1 1 kind: Pipeline 2 metadata: name: build-and-deploy 3 spec: 4 workspaces: 5 - name: shared-workspace params: 6 - name: deployment-name type: string description: name of the deployment to be patched - name: git-url type: string description: url of the git repo for the code of deployment - name: git-revision type: string description: revision to be used from repo of the code for deployment default: \"pipelines-1.15\" - name: IMAGE type: string description: image to be built from the code tasks: 7 - name: fetch-repository taskRef: resolver: cluster params: - name: kind value: task - name: name value: git-clone - name: namespace value: openshift-pipelines workspaces: - name: output workspace: shared-workspace params: - name: URL value: USD(params.git-url) - name: SUBDIRECTORY value: \"\" - name: DELETE_EXISTING value: \"true\" - name: REVISION value: USD(params.git-revision) - name: build-image 8 taskRef: resolver: cluster params: - name: kind value: task - name: name value: buildah - name: namespace value: openshift-pipelines workspaces: - name: source workspace: shared-workspace params: - name: TLSVERIFY value: \"false\" - name: IMAGE value: USD(params.IMAGE) runAfter: - fetch-repository - name: apply-manifests 9 taskRef: name: apply-manifests workspaces: - name: source workspace: shared-workspace runAfter: 10 - build-image - name: update-deployment taskRef: name: update-deployment workspaces: - name: source workspace: shared-workspace params: - name: deployment value: USD(params.deployment-name) - name: IMAGE value: USD(params.IMAGE) runAfter: - apply-manifests", "apiVersion: tekton.dev/v1 1 kind: PipelineRun 2 metadata: name: build-deploy-api-pipelinerun 3 spec: pipelineRef: name: build-and-deploy 4 params: 5 - name: deployment-name value: vote-api - name: git-url value: https://github.com/openshift-pipelines/vote-api.git - name: IMAGE value: image-registry.openshift-image-registry.svc:5000/pipelines-tutorial/vote-api workspaces: 6 - name: shared-workspace volumeClaimTemplate: spec: accessModes: - ReadWriteOnce resources: requests: storage: 500Mi", "apiVersion: tekton.dev/v1 kind: PipelineRun metadata: name: mypipelinerun spec: pipelineRef: name: mypipeline taskRunTemplate: podTemplate: securityContext: runAsNonRoot: true runAsUser: 1001", "apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1 kind: TaskRun metadata: name: mytaskrun namespace: default spec: taskRef: name: mytask podTemplate: schedulerName: volcano securityContext: runAsNonRoot: true runAsUser: 1001", "apiVersion: tekton.dev/v1 kind: Pipeline metadata: name: build-and-deploy spec: workspaces: 1 - name: shared-workspace params: tasks: 2 - name: build-image taskRef: resolver: cluster params: - name: kind value: task - name: name value: buildah - name: namespace value: openshift-pipelines workspaces: 3 - name: source 4 workspace: shared-workspace 5 params: - name: TLSVERIFY value: \"false\" - name: IMAGE value: USD(params.IMAGE) runAfter: - fetch-repository - name: apply-manifests taskRef: name: apply-manifests workspaces: 6 - name: source workspace: shared-workspace runAfter: - build-image", "apiVersion: tekton.dev/v1 kind: PipelineRun metadata: name: build-deploy-api-pipelinerun spec: pipelineRef: name: 
build-and-deploy params: workspaces: 1 - name: shared-workspace 2 volumeClaimTemplate: 3 spec: accessModes: - ReadWriteOnce resources: requests: storage: 500Mi", "apiVersion: triggers.tekton.dev/v1beta1 1 kind: TriggerBinding 2 metadata: name: vote-app 3 spec: params: 4 - name: git-repo-url value: USD(body.repository.url) - name: git-repo-name value: USD(body.repository.name) - name: git-revision value: USD(body.head_commit.id)", "apiVersion: triggers.tekton.dev/v1beta1 1 kind: TriggerTemplate 2 metadata: name: vote-app 3 spec: params: 4 - name: git-repo-url description: The git repository url - name: git-revision description: The git revision default: pipelines-1.15 - name: git-repo-name description: The name of the deployment to be created / patched resourcetemplates: 5 - apiVersion: tekton.dev/v1 kind: PipelineRun metadata: name: build-deploy-USD(tt.params.git-repo-name)-USD(uid) spec: taskRunTemplate: serviceAccountName: pipeline pipelineRef: name: build-and-deploy params: - name: deployment-name value: USD(tt.params.git-repo-name) - name: git-url value: USD(tt.params.git-repo-url) - name: git-revision value: USD(tt.params.git-revision) - name: IMAGE value: image-registry.openshift-image-registry.svc:5000/pipelines-tutorial/USD(tt.params.git-repo-name) workspaces: - name: shared-workspace volumeClaimTemplate: spec: accessModes: - ReadWriteOnce resources: requests: storage: 500Mi", "apiVersion: triggers.tekton.dev/v1beta1 1 kind: Trigger 2 metadata: name: vote-trigger 3 spec: taskRunTemplate: serviceAccountName: pipeline 4 interceptors: - ref: name: \"github\" 5 params: 6 - name: \"secretRef\" value: secretName: github-secret secretKey: secretToken - name: \"eventTypes\" value: [\"push\"] bindings: - ref: vote-app 7 template: 8 ref: vote-app --- apiVersion: v1 kind: Secret 9 metadata: name: github-secret type: Opaque stringData: secretToken: \"1234567\"", "apiVersion: triggers.tekton.dev/v1beta1 1 kind: EventListener 2 metadata: name: vote-app 3 spec: taskRunTemplate: serviceAccountName: pipeline 4 triggers: - triggerRef: vote-trigger 5" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_pipelines/1.15/html-single/about_openshift_pipelines/index
Appendix B. Address Setting Configuration Elements
Appendix B. Address Setting Configuration Elements The table below lists all of the configuration elements of an address-setting . Note that some elements are marked DEPRECATED. Use the suggested replacement to avoid potential issues. Table B.1. Address setting elements Name Description address-full-policy Determines what happens when an address configured with a max-size-bytes becomes full. The available policies are: PAGE : messages sent to a full address will be paged to disk. DROP : messages sent to a full address will be silently dropped. FAIL : messages sent to a full address will be dropped and the message producers will receive an exception. BLOCK : message producers will block when they try to send any further messages. Note The BLOCK policy works only for the AMQP, OpenWire, and Core Protocol protocols because they feature flow control. auto-create-addresses Whether to automatically create an address when a client sends a message to, or attempts to consume a message from, a queue mapped to an address that does not exist. The default value is true . auto-create-dead-letter-resources Specifies whether the broker automatically creates a dead letter address and queue to receive undelivered messages. The default value is false . If the parameter is set to true , the broker automatically creates an <address> element that defines a dead letter address and an associated dead letter queue. The name of the automatically-created <address> element matches the name value that you specify for <dead-letter-address> . auto-create-jms-queues DEPRECATED: Use auto-create-queues instead. Determines whether this broker should automatically create a JMS queue corresponding to the address settings match when a JMS producer or a consumer tries to use such a queue. The default value is false . auto-create-jms-topics DEPRECATED: Use auto-create-queues instead. Determines whether this broker should automatically create a JMS topic corresponding to the address settings match when a JMS producer or a consumer tries to use such a topic. The default value is false . auto-create-queues Whether to automatically create a queue when a client sends a message to or attempts to consume a message from a queue. The default value is true . auto-delete-addresses Whether to delete auto-created addresses when the address no longer has any queues. The default value is true . auto-delete-jms-queues DEPRECATED: Use auto-delete-queues instead. Determines whether AMQ Broker should automatically delete auto-created JMS queues when they have no consumers and no messages. The default value is false . auto-delete-jms-topics DEPRECATED: Use auto-delete-queues instead. Determines whether AMQ Broker should automatically delete auto-created JMS topics when they have no consumers and no messages. The default value is false . auto-delete-queues Whether to delete auto-created queues when the queue has no consumers and no messages. The default value is true . config-delete-addresses When the configuration file is reloaded, this setting specifies how to handle an address (and its queues) that has been deleted from the configuration file. You can specify the following values: OFF (default) The address is not deleted when the configuration file is reloaded. FORCE The address and its queues are deleted when the configuration file is reloaded. If there are any messages in the queues, they are also removed. 
config-delete-queues When the configuration file is reloaded, this setting specifies how to handle queues that have been deleted from the configuration file. You can specify the following values: OFF (default) The queue is not deleted when the configuration file is reloaded. FORCE The queue is deleted when the configuration file is reloaded. If there are any messages in the queue, they are also removed. dead-letter-address The address to which the broker sends dead messages. dead-letter-queue-prefix Prefix that the broker applies to the name of an automatically-created dead letter queue. The default value is DLQ. dead-letter-queue-suffix Suffix that the broker applies to an automatically-created dead letter queue. The default value is not defined (that is, the broker applies no suffix). default-address-routing-type The routing-type used on auto-created addresses. The default value is MULTICAST . default-max-consumers The maximum number of consumers allowed on this queue at any one time. The default value is 200 . default-purge-on-no-consumers Whether to purge the contents of the queue once there are no consumers. The default value is false . default-queue-routing-type The routing-type used on auto-created queues. The default value is MULTICAST . enable-metrics Specifies whether a configured metrics plugin such as the Prometheus plugin collects metrics for a matching address or set of addresses. The default value is true . expiry-address The address that will receive expired messages. expiry-delay Defines the expiration time in milliseconds that will be used for messages using the default expiration time. The default value is -1 , which means no expiration time. last-value-queue Whether a queue uses only last values or not. The default value is false . management-browse-page-size How many messages a management resource can browse. The default value is 200 . max-delivery-attempts How many times to attempt to deliver a message before sending it to the dead letter address. The default value is 10 . max-redelivery-delay Maximum value for the redelivery-delay, in milliseconds. max-size-bytes The maximum memory size for this address, specified in bytes. Used when the address-full-policy is PAGE , BLOCK , or FAIL . This value can be specified in byte notation such as "K", "Mb", and "GB". The default value is -1 , which means no limit. This parameter is used to protect broker memory by limiting the amount of memory consumed by a particular address space. This setting does not represent the total amount of bytes sent by the client that are currently stored in broker address space. It is an estimate of broker memory utilization. This value can vary depending on runtime conditions and certain workloads. It is recommended that you allocate the maximum amount of memory that can be afforded per address space. Under typical workloads, the broker requires approximately 150% to 200% of the payload size of the outstanding messages in memory. max-size-bytes-reject-threshold Used when the address-full-policy is BLOCK . The maximum size, in bytes, that an address can reach before the broker begins to reject messages. Works in combination with max-size-bytes for the AMQP protocol only. The default value is -1 , which means no limit. message-counter-history-day-limit How many days to keep a message counter history for this address. The default value is 0 . page-size-bytes The paging size in bytes. Also supports byte notation like K , Mb , and GB . The default value is 10485760 bytes (10 MiB). 
redelivery-delay The time, in milliseconds, to wait before redelivering a cancelled message. The default value is 0 . redelivery-delay-multiplier Multiplier to apply to the redelivery-delay parameter. The default value is 1.0 . redistribution-delay Defines how long to wait in milliseconds after the last consumer is closed on a queue before redistributing any messages. The default value is -1 . send-to-dla-on-no-route When set to true , a message will be sent to the configured dead letter address if it cannot be routed to any queues. The default value is false . slow-consumer-check-period How often to check, in seconds, for slow consumers. The default value is 5 . slow-consumer-policy Determines what happens when a slow consumer is identified. Valid options are KILL or NOTIFY . KILL kills the consumer's connection, which impacts any client threads using that same connection. NOTIFY sends a CONSUMER_SLOW management notification to the client. The default value is NOTIFY . slow-consumer-threshold The minimum rate of message consumption allowed before a consumer is considered slow. Measured in messages-per-second. The default value is -1 , which is unbounded.
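To illustrate how these elements are used, the following sketch shows an address-setting element in broker.xml that combines several of the attributes above; the my.address match pattern and all of the values are hypothetical and should be adapted to your deployment. Example address-setting configuration (illustrative)
<address-settings>
    <!-- Applies to any address matching "my.address" -->
    <address-setting match="my.address">
        <dead-letter-address>DLA</dead-letter-address>
        <max-delivery-attempts>10</max-delivery-attempts>
        <expiry-address>ExpiryQueue</expiry-address>
        <redelivery-delay>5000</redelivery-delay>
        <max-size-bytes>10Mb</max-size-bytes>
        <page-size-bytes>1Mb</page-size-bytes>
        <address-full-policy>PAGE</address-full-policy>
    </address-setting>
</address-settings>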
null
https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.12/html/configuring_amq_broker/address_setting_attributes
1.4. Port Information
1.4. Port Information Red Hat Gluster Storage Server uses the listed ports. Ensure that firewall settings do not prevent access to these ports. Firewall configuration tools differ between Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7. For Red Hat Enterprise Linux 6, use the iptables command to open a port: For Red Hat Enterprise Linux 7 and Red Hat Enterprise Linux 8, if default ports are not already in use by other services, it is usually simpler to add a service rather than open a port: However, if the default ports are already in use, you can open a specific port with the following command: For example: Table 1.1. Open the following ports on all storage servers Connection source TCP Ports UDP Ports Recommended for Used for Any authorized network entity with a valid SSH key 22 - All configurations Remote backup using geo-replication Any authorized network entity; be cautious not to clash with other RPC services. 111 111 All configurations RPC port mapper and RPC bind Any authorized SMB/CIFS client 139 and 445 137 and 138 Sharing storage using SMB/CIFS SMB/CIFS protocol Any authorized NFS clients 2049 2049 Sharing storage using Gluster NFS (Deprecated) or NFS-Ganesha Exports using NFS protocol All servers in the Samba-CTDB cluster 4379 - Sharing storage using SMB and Gluster NFS (Deprecated) CTDB Any authorized network entity 24007 - All configurations Management processes using glusterd Any authorized network entity 24009 - All configurations Gluster events daemon NFSv3 clients 662 662 Sharing storage using NFS-Ganesha and Gluster NFS (Deprecated) statd NFSv3 clients 32803 32803 Sharing storage using NFS-Ganesha and Gluster NFS (Deprecated) NLM protocol NFSv3 clients sending mount requests - 32769 Sharing storage using Gluster NFS (Deprecated) Gluster NFS MOUNT protocol NFSv3 clients sending mount requests 20048 20048 Sharing storage using NFS-Ganesha NFS-Ganesha MOUNT protocol NFS clients 875 875 Sharing storage using NFS-Ganesha NFS-Ganesha RQUOTA protocol (fetching quota information) Servers in pacemaker/corosync cluster 2224 - Sharing storage using NFS-Ganesha pcsd Servers in pacemaker/corosync cluster 3121 - Sharing storage using NFS-Ganesha pacemaker_remote Servers in pacemaker/corosync cluster - 5404 and 5405 Sharing storage using NFS-Ganesha corosync Servers in pacemaker/corosync cluster 21064 - Sharing storage using NFS-Ganesha dlm Any authorized network entity 49152 - 49664 - All configurations Brick communication ports. The total number of ports required depends on the number of bricks on the node. One port is required for each brick on the machine. Table 1.2. Open the following ports on NFS-Ganesha and Gluster NFS (Deprecated) storage clients Connection source TCP Ports UDP Ports Recommended for Used for NFSv3 servers 662 662 Sharing storage using NFS-Ganesha and Gluster NFS (Deprecated) statd NFSv3 servers 32803 32803 Sharing storage using NFS-Ganesha and Gluster NFS (Deprecated) NLM protocol
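For example, assuming the default public zone and the port assignments in Table 1.1 (both are deployment-specific), the following commands open the glusterd management port and the brick port range on Red Hat Enterprise Linux 7 or 8, and then list the open ports to verify the result:
firewall-cmd --zone=public --add-port=24007/tcp
firewall-cmd --zone=public --add-port=24007/tcp --permanent
firewall-cmd --zone=public --add-port=49152-49664/tcp
firewall-cmd --zone=public --add-port=49152-49664/tcp --permanent
firewall-cmd --zone=public --list-ports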
[ "iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 5667 -j ACCEPT service iptables save", "firewall-cmd --zone= zone_name --add-service=glusterfs firewall-cmd --zone= zone_name --add-service=glusterfs --permanent", "firewall-cmd --zone= zone_name --add-port= port / protocol firewall-cmd --zone= zone_name --add-port= port / protocol --permanent", "firewall-cmd --zone=public --add-port=5667/tcp firewall-cmd --zone=public --add-port=5667/tcp --permanent" ]
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/installation_guide/Port_Information
Appendix A. Using your Red Hat subscription
Appendix A. Using your Red Hat subscription Red Hat Connectivity Link is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal. Managing your subscriptions Go to access.redhat.com . If you do not already have an account, create one. Log in to your account. In the menu bar, click Subscriptions to view and manage your subscriptions. Revised on 2025-03-12 11:42:29 UTC
null
https://docs.redhat.com/en/documentation/red_hat_connectivity_link/1.0/html/introduction_to_connectivity_link/using_your_subscription
6.2. Creating Guests with virt-install
6.2. Creating Guests with virt-install You can use the virt-install command to create guest virtual machines from the command line. virt-install is used either interactively or as part of a script to automate the creation of virtual machines. Using virt-install with Kickstart files allows for unattended installation of virtual machines. The virt-install tool provides a number of options that can be passed on the command line. To see a complete list of options run the following command: Note that you need root privileges in order for virt-install commands to complete successfully. The virt-install man page also documents each command option and important variables. qemu-img is a related command which may be used before virt-install to configure storage options. An important option is the --graphics option which allows graphical installation of a virtual machine. Example 6.1. Using virt-install to install a Red Hat Enterprise Linux 5 guest virtual machine This example creates a Red Hat Enterprise Linux 5 guest: Ensure that you select the correct os-type for your operating system when running this command. Refer to man virt-install for more examples. Note When installing a Windows guest with virt-install , the --os-type= windows option is recommended. This option prevents the CD-ROM from disconnecting when rebooting during the installation procedure. The --os-variant option further optimizes the configuration for a specific guest operating system. After the installation completes, you can connect to the guest operating system. For more information, see Section 6.5, "Connecting to Virtual Machines"
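Because qemu-img can prepare storage before virt-install runs, a minimal sketch is to create a disk image first and then reference it during installation; the path and size below are hypothetical.
qemu-img create -f qcow2 /var/lib/libvirt/images/guest1-rhel5-64.qcow2 8G
You can then pass the resulting image to virt-install with the --file option instead of having virt-install allocate the storage itself.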
[ "virt-install --help", "virt-install --name=guest1-rhel5-64 --file=/var/lib/libvirt/images/guest1-rhel5-64.dsk --file-size=8 --nonsparse --graphics spice --vcpus=2 --ram=2048 --location=http://example1.com/installation_tree/RHEL5.6-Server-x86_64/os --network bridge=br0 --os-type=linux --os-variant=rhel5.4" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_host_configuration_and_guest_installation_guide/sect-virtualization_host_configuration_and_guest_installation_guide-guest_installation-creating_guests_with_virt_install
Chapter 26. Introducing distributed tracing
Chapter 26. Introducing distributed tracing Distributed tracing allows you to track the progress of transactions between applications in a distributed system. In a microservices architecture, tracing tracks the progress of transactions between services. Trace data is useful for monitoring application performance and investigating issues with target systems and end-user applications. In Streams for Apache Kafka, distributed tracing facilitates end-to-end tracking of messages: from source systems to Kafka, and then from Kafka to target systems and applications. This complements the monitoring of metrics in Grafana dashboards and component loggers. Streams for Apache Kafka provides built-in support for tracing for the following Kafka components: MirrorMaker to trace messages from a source cluster to a target cluster Kafka Connect to trace messages consumed and produced by Kafka Connect Kafka Bridge to trace messages between Kafka and HTTP client applications Tracing is not supported for Kafka brokers. 26.1. Tracing options Distributed traces consist of spans, which represent individual units of work performed over a specific time period. When instrumented with tracers, applications generate traces that follow requests as they move through the system, making it easier to identify delays or issues. OpenTelemetry, a telemetry framework, provides APIs for tracing that are independent of any specific backend tracing system. In Streams for Apache Kafka, the default protocol for transmitting traces between Kafka components and tracing systems is OpenTelemetry's OTLP (OpenTelemetry Protocol), a vendor-neutral protocol. While OTLP is the default, Streams for Apache Kafka also supports other tracing systems, such as Jaeger. Jaeger is a distributed tracing system designed for monitoring microservices, and its user interface allows you to query, filter, and analyze trace data in detail. The Jaeger user interface showing a simple query Additional resources Jaeger documentation OpenTelemetry documentation 26.2. Environment variables for tracing Use environment variables to enable tracing for Kafka components or to initialize a tracer for Kafka clients. Tracing environment variables are subject to change. For the latest information, see the OpenTelemetry documentation . The following table describes the key environment variables for setting up tracing with OpenTelemetry. Table 26.1. OpenTelemetry environment variables Property Required Description OTEL_SERVICE_NAME Yes The name of the tracing service for OpenTelemetry, such as OTLP or Jaeger. OTEL_EXPORTER_OTLP_ENDPOINT Yes (if using OTLP exporter) The OTLP endpoint for exporting trace data to the tracing system. For Jaeger tracing, specify the OTEL_EXPORTER_JAEGER_ENDPOINT . For other tracing systems, specify the appropriate endpoint . OTEL_TRACES_EXPORTER No (unless using a non-OTLP exporter) The exporter used for tracing. The default is otlp , which does not need to be specified. For Jaeger tracing, set this variable to jaeger . For other tracing systems, specify the appropriate exporter . OTEL_EXPORTER_OTLP_CERTIFICATE No (required if using TLS with OTLP) The path to the file containing trusted certificates for TLS authentication. Required to secure communication between Kafka components and the OpenTelemetry endpoint when using TLS with the otlp exporter. 26.3. Setting up distributed tracing Enable distributed tracing in Kafka components by specifying a tracing type in the custom resource. 
Instrument tracers in Kafka clients for end-to-end tracking of messages. To set up distributed tracing, follow these procedures in order: Enable tracing for supported Kafka components Initialize a tracer for Kafka clients Instrument clients with tracers, embedding telemetry-gathering functionality into the code: Instrument producers and consumers for tracing Instrument Kafka Streams applications for tracing 26.3.1. Prerequisites Before setting up distributed tracing, make sure backend components are deployed to your OpenShift cluster. We recommend using the Jaeger operator for deploying Jaeger on your OpenShift cluster. For deployment instructions, see the Jaeger documentation . Note Setting up tracing systems is outside the scope of this content. 26.3.2. Enabling tracing in supported Kafka components Distributed tracing is supported for MirrorMaker, MirrorMaker 2, Kafka Connect, and the Kafka Bridge. Enable tracing using OpenTelemetry by setting the spec.tracing.type property to opentelemetry . Configure the custom resource of the component to specify and enable a tracing system using spec.template properties. By default, OpenTelemetry uses the OTLP (OpenTelemetry Protocol) exporter and endpoint to gather trace data. This procedure shows the configuration to use OTLP as the tracing system. If you prefer to use a different tracing system supported by OpenTelemetry, such as Jaeger, you can modify the exporter and endpoint settings in the tracing configuration. Caution Streams for Apache Kafka no longer supports OpenTracing. If you were previously using OpenTracing with the type: jaeger option, we encourage you to transition to using OpenTelemetry instead. Enabling tracing in a resource triggers the following events: Interceptor classes are updated in the integrated consumers and producers of the component. For MirrorMaker, MirrorMaker 2, and Kafka Connect, the tracing agent initializes a tracer based on the tracing configuration defined in the resource. For the Kafka Bridge, a tracer based on the tracing configuration defined in the resource is initialized by the Kafka Bridge itself. Tracing in MirrorMaker and MirrorMaker 2 For MirrorMaker and MirrorMaker 2, messages are traced from the source cluster to the target cluster. The trace data records messages entering and leaving the MirrorMaker or MirrorMaker 2 component. Tracing in Kafka Connect For Kafka Connect, only messages produced and consumed by Kafka Connect are traced. To trace messages sent between Kafka Connect and external systems, you must configure tracing in the connectors for those systems. Tracing in the Kafka Bridge For the Kafka Bridge, messages produced and consumed by the Kafka Bridge are traced. Incoming HTTP requests from client applications to send and receive messages through the Kafka Bridge are also traced. To have end-to-end tracing, you must configure tracing in your HTTP clients. Procedure Perform these steps for each KafkaMirrorMaker , KafkaMirrorMaker2 , KafkaConnect , and KafkaBridge resource. In the spec.template property, configure the tracer service. Use the tracing environment variables as template configuration properties. For OpenTelemetry, set the spec.tracing.type property to opentelemetry . Example tracing configuration for Kafka Connect using OpenTelemetry apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: #... 
template: connectContainer: env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: "http://otlp-host:4317" tracing: type: opentelemetry #... Example tracing configuration for MirrorMaker using OpenTelemetry apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker metadata: name: my-mirror-maker spec: #... template: mirrorMakerContainer: env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: "http://otlp-host:4317" tracing: type: opentelemetry #... Example tracing configuration for MirrorMaker 2 using OpenTelemetry apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mm2-cluster spec: #... template: connectContainer: env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: "http://otlp-host:4317" tracing: type: opentelemetry #... Example tracing configuration for the Kafka Bridge using OpenTelemetry apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: #... template: bridgeContainer: env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: "http://otlp-host:4317" tracing: type: opentelemetry #... (Optional) If TLS authentication is configured on the OTLP endpoint, use the OTEL_EXPORTER_OTLP_CERTIFICATE environment variable to specify the path to a trusted certificate. This secures communication between Kafka components and the OpenTelemetry endpoint. To provide the certificate, mount a volume containing the secret that holds the trusted certificate. Use https in the endpoint address unless the address is redirected from http . Example configuration for TLS apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: #... template: connectContainer: env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: "https://otlp-host:4317" - name: OTEL_EXPORTER_OTLP_CERTIFICATE value: "/mnt/mysecret/my-certificate.crt" volumeMounts: - name: tracing-secret-volume mountPath: /mnt/mysecret pod: volumes: - name: tracing-secret-volume secret: secretName: mysecret tracing: type: opentelemetry #... Apply the changes to the custom resource configuration. 26.3.3. Initializing tracing for Kafka clients Initialize a tracer for OpenTelemetry, then instrument your client applications for distributed tracing. You can instrument Kafka producer and consumer clients, and Kafka Streams API applications. Configure and initialize a tracer using a set of tracing environment variables . 
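For reference, a minimal sketch of the environment for a client that uses the default OTLP exporter follows; the service name and endpoint are placeholders, and OTEL_TRACES_EXPORTER is only needed when using a non-OTLP exporter such as Jaeger.
export OTEL_SERVICE_NAME=my-otel-service
export OTEL_EXPORTER_OTLP_ENDPOINT=http://otlp-host:4317
# Only when using Jaeger instead of the default OTLP exporter:
# export OTEL_TRACES_EXPORTER=jaeger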
Procedure In each client application add the dependencies for the tracer: Add the Maven dependencies to the pom.xml file for the client application: Dependencies for OpenTelemetry <dependency> <groupId>io.opentelemetry.semconv</groupId> <artifactId>opentelemetry-semconv</artifactId> <version>1.21.0-alpha-redhat-00001</version> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-exporter-otlp</artifactId> <version>1.34.1</version> <exclusions> <exclusion> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-exporter-sender-okhttp</artifactId> </exclusion> </exclusions> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-exporter-sender-grpc-managed-channel</artifactId> <version>1.34.1</version> <scope>runtime</scope> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-sdk-extension-autoconfigure</artifactId> <version>1.34.1</version> </dependency> <dependency> <groupId>io.opentelemetry.instrumentation</groupId> <artifactId>opentelemetry-kafka-clients-2.6</artifactId> <version>1.32.0-alpha</version> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-sdk</artifactId> <version>1.34.1</version> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-exporter-sender-jdk</artifactId> <version>1.34.1-alpha</version> <scope>runtime</scope> </dependency> <dependency> <groupId>io.grpc</groupId> <artifactId>grpc-netty-shaded</artifactId> <version>1.61.0</version> </dependency> Define the configuration of the tracer using the tracing environment variables . Create a tracer, which is initialized with the environment variables: Creating a tracer for OpenTelemetry OpenTelemetry ot = GlobalOpenTelemetry.get(); Register the tracer as a global tracer: GlobalTracer.register(tracer); Instrument your client: Section 26.3.4, "Instrumenting producers and consumers for tracing" Section 26.3.5, "Instrumenting Kafka Streams applications for tracing" 26.3.4. Instrumenting producers and consumers for tracing Instrument application code to enable tracing in Kafka producers and consumers. Use a decorator pattern or interceptors to instrument your Java producer and consumer application code for tracing. You can then record traces when messages are produced or retrieved from a topic. OpenTelemetry instrumentation project provides classes that support instrumentation of producers and consumers. Decorator instrumentation For decorator instrumentation, create a modified producer or consumer instance for tracing. Interceptor instrumentation For interceptor instrumentation, add the tracing capability to the consumer or producer configuration. Prerequisites You have initialized tracing for the client . You enable instrumentation in producer and consumer applications by adding the tracing JARs as dependencies to your project. Procedure Perform these steps in the application code of each producer and consumer application. Instrument your client application code using either a decorator pattern or interceptors. To use a decorator pattern, create a modified producer or consumer instance to send or receive messages. You pass the original KafkaProducer or KafkaConsumer class. 
Example decorator instrumentation for OpenTelemetry // Producer instance Producer < String, String > op = new KafkaProducer < > ( configs, new StringSerializer(), new StringSerializer() ); KafkaTracing tracing = KafkaTracing.create(GlobalOpenTelemetry.get()); Producer < String, String > producer = tracing.wrap(op); producer.send(...); //consumer instance Consumer<String, String> oc = new KafkaConsumer<>( configs, new StringDeserializer(), new StringDeserializer() ); Consumer<String, String> consumer = tracing.wrap(oc); consumer.subscribe(Collections.singleton("mytopic")); ConsumerRecords<String, String> records = consumer.poll(1000); ConsumerRecord<String, String> record = ... SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer); To use interceptors, set the interceptor class in the producer or consumer configuration. You use the KafkaProducer and KafkaConsumer classes in the usual way. The TracingProducerInterceptor and TracingConsumerInterceptor interceptor classes take care of the tracing capability. Example producer configuration using interceptors senderProps.put(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingProducerInterceptor.class.getName()); KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); producer.send(...); Example consumer configuration using interceptors consumerProps.put(ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingConsumerInterceptor.class.getName()); KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); consumer.subscribe(Collections.singletonList("messages")); ConsumerRecords<Integer, String> records = consumer.poll(1000); ConsumerRecord<Integer, String> record = ... SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer); 26.3.5. Instrumenting Kafka Streams applications for tracing Instrument application code to enable tracing in Kafka Streams API applications. Use a decorator pattern or interceptors to instrument your Kafka Streams API applications for tracing. You can then record traces when messages are produced or retrieved from a topic. Decorator instrumentation For decorator instrumentation, create a modified Kafka Streams instance for tracing. For OpenTelemetry, you need to create a custom TracingKafkaClientSupplier class to provide tracing instrumentation for Kafka Streams. Interceptor instrumentation For interceptor instrumentation, add the tracing capability to the Kafka Streams producer and consumer configuration. Prerequisites You have initialized tracing for the client . You enable instrumentation in Kafka Streams applications by adding the tracing JARs as dependencies to your project. To instrument Kafka Streams with OpenTelemetry, you'll need to write a custom TracingKafkaClientSupplier . The custom TracingKafkaClientSupplier can extend Kafka's DefaultKafkaClientSupplier , overriding the producer and consumer creation methods to wrap the instances with the telemetry-related code. 
Example custom TracingKafkaClientSupplier private class TracingKafkaClientSupplier extends DefaultKafkaClientSupplier { @Override public Producer<byte[], byte[]> getProducer(Map<String, Object> config) { KafkaTelemetry telemetry = KafkaTelemetry.create(GlobalOpenTelemetry.get()); return telemetry.wrap(super.getProducer(config)); } @Override public Consumer<byte[], byte[]> getConsumer(Map<String, Object> config) { KafkaTelemetry telemetry = KafkaTelemetry.create(GlobalOpenTelemetry.get()); return telemetry.wrap(super.getConsumer(config)); } @Override public Consumer<byte[], byte[]> getRestoreConsumer(Map<String, Object> config) { return this.getConsumer(config); } @Override public Consumer<byte[], byte[]> getGlobalConsumer(Map<String, Object> config) { return this.getConsumer(config); } } Procedure Perform these steps for each Kafka Streams API application. To use a decorator pattern, create an instance of the TracingKafkaClientSupplier supplier interface, then provide the supplier interface to KafkaStreams . Example decorator instrumentation KafkaClientSupplier supplier = new TracingKafkaClientSupplier(); KafkaStreams streams = new KafkaStreams(builder.build(), new StreamsConfig(config), supplier); streams.start(); To use interceptors, set the interceptor class in the Kafka Streams producer and consumer configuration. The TracingProducerInterceptor and TracingConsumerInterceptor interceptor classes take care of the tracing capability. Example producer and consumer configuration using interceptors props.put(StreamsConfig.PRODUCER_PREFIX + ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingProducerInterceptor.class.getName()); props.put(StreamsConfig.CONSUMER_PREFIX + ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingConsumerInterceptor.class.getName()); 26.3.6. Introducing a different OpenTelemetry tracing system Instead of the default OTLP system, you can specify other tracing systems that are supported by OpenTelemetry. You do this by adding the required artifacts to the Kafka image provided with Streams for Apache Kafka. Any required implementation-specific environment variables must also be set. You then enable the new tracing implementation using the OTEL_TRACES_EXPORTER environment variable. This procedure shows how to implement Zipkin tracing. Procedure Add the tracing artifacts to the /opt/kafka/libs/ directory of the Kafka image. You can use the Kafka container image on the Red Hat Ecosystem Catalog as a base image for creating a new custom image. OpenTelemetry artifact for Zipkin io.opentelemetry:opentelemetry-exporter-zipkin Set the tracing exporter and endpoint for the new tracing implementation. Example Zipkin tracer configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mm2-cluster spec: #... template: connectContainer: env: - name: OTEL_SERVICE_NAME value: my-zipkin-service - name: OTEL_EXPORTER_ZIPKIN_ENDPOINT value: http://zipkin-exporter-host-name:9411/api/v2/spans 1 - name: OTEL_TRACES_EXPORTER value: zipkin 2 tracing: type: opentelemetry #... 1 Specifies the Zipkin endpoint to connect to. 2 The Zipkin exporter. 26.3.7. Specifying custom span names for OpenTelemetry A tracing span is a logical unit of work in Jaeger, with an operation name, start time, and duration. Spans have built-in names, but you can specify custom span names in your Kafka client instrumentation where used. 
Specifying custom span names is optional and only applies when using a decorator pattern in producer and consumer client instrumentation or Kafka Streams instrumentation . Custom span names cannot be specified directly with OpenTelemetry. Instead, you retrieve span names by adding code to your client application to extract additional tags and attributes. Example code to extract attributes //Defines attribute extraction for a producer private static class ProducerAttribExtractor implements AttributesExtractor < ProducerRecord < ? , ? > , Void > { @Override public void onStart(AttributesBuilder attributes, ProducerRecord < ? , ? > producerRecord) { set(attributes, AttributeKey.stringKey("prod_start"), "prod1"); } @Override public void onEnd(AttributesBuilder attributes, ProducerRecord < ? , ? > producerRecord, @Nullable Void unused, @Nullable Throwable error) { set(attributes, AttributeKey.stringKey("prod_end"), "prod2"); } } //Defines attribute extraction for a consumer private static class ConsumerAttribExtractor implements AttributesExtractor < ConsumerRecord < ? , ? > , Void > { @Override public void onStart(AttributesBuilder attributes, ConsumerRecord < ? , ? > producerRecord) { set(attributes, AttributeKey.stringKey("con_start"), "con1"); } @Override public void onEnd(AttributesBuilder attributes, ConsumerRecord < ? , ? > producerRecord, @Nullable Void unused, @Nullable Throwable error) { set(attributes, AttributeKey.stringKey("con_end"), "con2"); } } //Extracts the attributes public static void main(String[] args) throws Exception { Map < String, Object > configs = new HashMap < > (Collections.singletonMap(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")); System.setProperty("otel.traces.exporter", "jaeger"); System.setProperty("otel.service.name", "myapp1"); KafkaTracing tracing = KafkaTracing.newBuilder(GlobalOpenTelemetry.get()) .addProducerAttributesExtractors(new ProducerAttribExtractor()) .addConsumerAttributesExtractors(new ConsumerAttribExtractor()) .build();
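As a complementary sketch using the standard OpenTelemetry API ( io.opentelemetry.api.trace and io.opentelemetry.context ), you can also obtain a Tracer from the global instance and create an explicitly named span around a produce call; the scope name my-kafka-app and the span name send-order-message are hypothetical.
// Obtain a tracer from the globally registered OpenTelemetry instance
Tracer tracer = GlobalOpenTelemetry.getTracer("my-kafka-app");

// Create a named span, make it current so context propagates, and end it when done
Span span = tracer.spanBuilder("send-order-message").startSpan();
try (Scope scope = span.makeCurrent()) {
    producer.send(record);
} finally {
    span.end();
}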
[ "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: # template: connectContainer: env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: \"http://otlp-host:4317\" tracing: type: opentelemetry #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker metadata: name: my-mirror-maker spec: # template: mirrorMakerContainer: env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: \"http://otlp-host:4317\" tracing: type: opentelemetry #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mm2-cluster spec: # template: connectContainer: env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: \"http://otlp-host:4317\" tracing: type: opentelemetry #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # template: bridgeContainer: env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: \"http://otlp-host:4317\" tracing: type: opentelemetry #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: # template: connectContainer: env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: \"https://otlp-host:4317\" - name: OTEL_EXPORTER_OTLP_CERTIFICATE value: \"/mnt/mysecret/my-certificate.crt\" volumeMounts: - name: tracing-secret-volume mountPath: /mnt/mysecret pod: volumes: - name: tracing-secret-volume secret: secretName: mysecret tracing: type: opentelemetry #", "<dependency> <groupId>io.opentelemetry.semconv</groupId> <artifactId>opentelemetry-semconv</artifactId> <version>1.21.0-alpha-redhat-00001</version> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-exporter-otlp</artifactId> <version>1.34.1</version> <exclusions> <exclusion> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-exporter-sender-okhttp</artifactId> </exclusion> </exclusions> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-exporter-sender-grpc-managed-channel</artifactId> <version>1.34.1</version> <scope>runtime</scope> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-sdk-extension-autoconfigure</artifactId> <version>1.34.1</version> </dependency> <dependency> <groupId>io.opentelemetry.instrumentation</groupId> <artifactId>opentelemetry-kafka-clients-2.6</artifactId> <version>1.32.0-alpha</version> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-sdk</artifactId> <version>1.34.1</version> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-exporter-sender-jdk</artifactId> <version>1.34.1-alpha</version> <scope>runtime</scope> </dependency> <dependency> <groupId>io.grpc</groupId> <artifactId>grpc-netty-shaded</artifactId> <version>1.61.0</version> </dependency>", "OpenTelemetry ot = GlobalOpenTelemetry.get();", "GlobalTracer.register(tracer);", "// Producer instance Producer < String, String > op = new KafkaProducer < > ( configs, new StringSerializer(), new StringSerializer() ); Producer < String, String > producer = tracing.wrap(op); KafkaTracing tracing = KafkaTracing.create(GlobalOpenTelemetry.get()); producer.send(...); //consumer instance Consumer<String, String> oc = new KafkaConsumer<>( configs, new StringDeserializer(), new StringDeserializer() ); 
Consumer<String, String> consumer = tracing.wrap(oc); consumer.subscribe(Collections.singleton(\"mytopic\")); ConsumerRecords<Integer, String> records = consumer.poll(1000); ConsumerRecord<Integer, String> record = SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer);", "senderProps.put(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingProducerInterceptor.class.getName()); KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); producer.send(...);", "consumerProps.put(ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingConsumerInterceptor.class.getName()); KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); consumer.subscribe(Collections.singletonList(\"messages\")); ConsumerRecords<Integer, String> records = consumer.poll(1000); ConsumerRecord<Integer, String> record = SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer);", "private class TracingKafkaClientSupplier extends DefaultKafkaClientSupplier { @Override public Producer<byte[], byte[]> getProducer(Map<String, Object> config) { KafkaTelemetry telemetry = KafkaTelemetry.create(GlobalOpenTelemetry.get()); return telemetry.wrap(super.getProducer(config)); } @Override public Consumer<byte[], byte[]> getConsumer(Map<String, Object> config) { KafkaTelemetry telemetry = KafkaTelemetry.create(GlobalOpenTelemetry.get()); return telemetry.wrap(super.getConsumer(config)); } @Override public Consumer<byte[], byte[]> getRestoreConsumer(Map<String, Object> config) { return this.getConsumer(config); } @Override public Consumer<byte[], byte[]> getGlobalConsumer(Map<String, Object> config) { return this.getConsumer(config); } }", "KafkaClientSupplier supplier = new TracingKafkaClientSupplier(tracer); KafkaStreams streams = new KafkaStreams(builder.build(), new StreamsConfig(config), supplier); streams.start();", "props.put(StreamsConfig.PRODUCER_PREFIX + ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingProducerInterceptor.class.getName()); props.put(StreamsConfig.CONSUMER_PREFIX + ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingConsumerInterceptor.class.getName());", "io.opentelemetry:opentelemetry-exporter-zipkin", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mm2-cluster spec: # template: connectContainer: env: - name: OTEL_SERVICE_NAME value: my-zipkin-service - name: OTEL_EXPORTER_ZIPKIN_ENDPOINT value: http://zipkin-exporter-host-name:9411/api/v2/spans 1 - name: OTEL_TRACES_EXPORTER value: zipkin 2 tracing: type: opentelemetry #", "//Defines attribute extraction for a producer private static class ProducerAttribExtractor implements AttributesExtractor < ProducerRecord < ? , ? > , Void > { @Override public void onStart(AttributesBuilder attributes, ProducerRecord < ? , ? > producerRecord) { set(attributes, AttributeKey.stringKey(\"prod_start\"), \"prod1\"); } @Override public void onEnd(AttributesBuilder attributes, ProducerRecord < ? , ? > producerRecord, @Nullable Void unused, @Nullable Throwable error) { set(attributes, AttributeKey.stringKey(\"prod_end\"), \"prod2\"); } } //Defines attribute extraction for a consumer private static class ConsumerAttribExtractor implements AttributesExtractor < ConsumerRecord < ? , ? > , Void > { @Override public void onStart(AttributesBuilder attributes, ConsumerRecord < ? , ? > producerRecord) { set(attributes, AttributeKey.stringKey(\"con_start\"), \"con1\"); } @Override public void onEnd(AttributesBuilder attributes, ConsumerRecord < ? , ? 
> producerRecord, @Nullable Void unused, @Nullable Throwable error) { set(attributes, AttributeKey.stringKey(\"con_end\"), \"con2\"); } } //Extracts the attributes public static void main(String[] args) throws Exception { Map < String, Object > configs = new HashMap < > (Collections.singletonMap(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, \"localhost:9092\")); System.setProperty(\"otel.traces.exporter\", \"jaeger\"); System.setProperty(\"otel.service.name\", \"myapp1\"); KafkaTracing tracing = KafkaTracing.newBuilder(GlobalOpenTelemetry.get()) .addProducerAttributesExtractors(new ProducerAttribExtractor()) .addConsumerAttributesExtractors(new ConsumerAttribExtractor()) .build();" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/deploying_and_managing_streams_for_apache_kafka_on_openshift/assembly-distributed-tracing-str
probe::tcp.setsockopt
probe::tcp.setsockopt Name probe::tcp.setsockopt - Call to setsockopt Synopsis Values optstr Resolves optname to a human-readable format level The level at which the socket options will be manipulated optlen Used to access values for setsockopt name Name of this probe optname TCP socket options (e.g. TCP_NODELAY, TCP_MAXSEG, etc) sock Network socket Context The process which calls setsockopt
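As an illustrative one-liner (assuming SystemTap and the matching kernel debuginfo are installed), the probe can log each TCP setsockopt call together with the resolved option name; the output format here is arbitrary.
stap -e 'probe tcp.setsockopt { printf("%s: %s (level %d)\n", execname(), optstr, level) }'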
[ "tcp.setsockopt" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-tcp-setsockopt